diff --git a/docs/edge-stack/latest b/docs/edge-stack/latest
deleted file mode 120000
index 98fccd6d0..000000000
--- a/docs/edge-stack/latest
+++ /dev/null
@@ -1 +0,0 @@
-3.8
\ No newline at end of file
diff --git a/docs/edge-stack/latest/about/aes-emissary-eol.md b/docs/edge-stack/latest/about/aes-emissary-eol.md
new file mode 100644
index 000000000..1e4b2caa9
--- /dev/null
+++ b/docs/edge-stack/latest/about/aes-emissary-eol.md
@@ -0,0 +1,56 @@
+# $productName$ End of Life Policy
+
+This document describes the End of Life policy and maintenance windows for Ambassador Edge Stack and for the open source project Emissary-ingress.
+
+## Supported Versions
+
+Ambassador Edge Stack and Emissary-ingress versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
+
+**X-series (major versions)**
+
+- **1.y**: 1.0 GA in January 2020
+- **2.y**: 2.0.4 GA in October 2021, and 2.1.0 in December 2021.
+
+**Y-releases (minor versions)**
+
+- For 1.y, the latest is **1.14.z**
+- For 2.y, the latest is **2.3.z**
+
+In this document, **Current** refers to the latest X-series release.
+
+**Maintenance** refers to the previous X-series release, which receives security and Sev1 defect patches.
+
+## CNCF Ecosystem Considerations
+
+- Envoy releases a major version every 3 months and supports each release for 12 months. Envoy does not support any release for longer than 12 months.
+- Kubernetes 1.19 and newer receive 12 months of patch support (see the [Kubernetes Yearly Support Period](https://github.com/kubernetes/enhancements/blob/master/keps/sig-release/1498-kubernetes-yearly-support-period/README.md)).
+
+## The Policy
+
+> We will offer a 6 month maintenance window for the latest Y-release of an X-series after a new X-series goes GA and becomes the current release. For example, we will support 2.3 with security and Sev1 defect patches for six months after 3.0 is released.
+
+> During the maintenance window, Y-releases will only receive security and Sev1 defect patches. Users desiring new features or bug fixes for lower severity defects will need to upgrade to the current X-series.
+
+> The current X-series will receive as many Y-releases as necessary and as often as we have new features or patches to release.
+
+> Ambassador Labs offers no-downtime migration to current versions from maintenance releases. Migration from releases that are outside of the maintenance window may be subject to downtime.
+
+> Artifacts of releases outside of the maintenance window will be frozen and will remain publicly available for download on a best-effort basis. These artifacts include Docker images, application binaries, Helm charts, etc.
+
+### When we say support with “defect patches”, what do we mean?
+
+- We will fix security issues in our Emissary-ingress and Ambassador Edge Stack code
+- We will pick up security fixes from dependencies as they are made available
+- We will not maintain forks of our major dependencies
+- We will not attempt our own backports of critical fixes to dependencies which are out of support from their own communities
+
+## Extended Maintenance for 1.14
+
+Given this policy, we would have dropped maintenance for 1.14 in March 2022; however, we recognize that the introduction of an EOL policy necessitates a longer maintenance window.
For this reason, we offer an "extended maintenance" window for 1.14 until the end of September 2022, 3 months after the latest 2.3 release. Please note that this extended maintenance window does not apply to customers using Kubernetes 1.22 and above, and it does not provide a no-downtime migration path from 1.14 to 3.0.
+
+After September 2022, the current series will be 3.x, and the maintenance series will be 2.y.
diff --git a/docs/edge-stack/latest/about/changes-2.x.md b/docs/edge-stack/latest/about/changes-2.x.md
new file mode 100644
index 000000000..89938a44b
--- /dev/null
+++ b/docs/edge-stack/latest/about/changes-2.x.md
@@ -0,0 +1,243 @@
+import Alert from '@material-ui/lab/Alert';
+
+Major Changes in $productName$ 2.X
+==================================
+
+The 2.X family introduces a number of changes to allow $productName$
+to more gracefully handle larger installations, reduce global configuration to
+better handle multitenant or multiorganizational installations, reduce memory
+footprint, and improve performance. We welcome feedback! Join us on
+[Slack](http://a8r.io/slack) and let us know what you think.
+
+While $productName$ 2 is functionally compatible with $productName$ 1.14, note
+that this is a **major version change** and there are important differences between
+$productName$ 1.X and $productName$ $version$. For details, read on.
+
+## 1. Configuration API Version `getambassador.io/v3alpha1`
+
+$productName$ 2.0 introduced API version `getambassador.io/v3alpha1` to allow
+certain changes in configuration resources that are not backwards compatible with
+$productName$ 1.X. The most notable change is the addition of the
+**mandatory** `Listener` resource; however, there are important changes
+in `Host` and `Mapping` as well.
+
+  $productName$ 2.X supports only API versions getambassador.io/v2
+  and getambassador.io/v3alpha1. If you are using any resources with
+  older API versions, you will need to upgrade them.
+
+API version `getambassador.io/v3alpha1` replaces `x.getambassador.io/v3alpha1` from
+the 2.0 developer previews. `getambassador.io/v3alpha1` may still change as we receive
+feedback.
+
+## 2. Kubernetes 1.22 and Structural CRDs
+
+Kubernetes 1.22 requires [structural CRDs](https://kubernetes.io/blog/2019/06/20/crd-structural-schema/).
+This change is primarily meant to support better CRD validation, but it also has the
+effect that union types are no longer allowed in CRDs: for example, an element that can be
+either a string or a list of strings is not allowed. Several such elements appeared in the
+`getambassador.io/v2` CRDs, requiring changes. In `getambassador.io/v3alpha1`:
+
+- `ambassador_id` must always be a list of strings
+- `Host.mappingSelector` supersedes `Host.selector`, and controls association between Hosts and Mappings
+- `Mapping.hostname` supersedes `Mapping.host` and `Mapping.host_regex`
+- `Mapping.tls` can only be a string
+- `Mapping.labels` always requires maps instead of strings
+
+## 3. `Listener`s, `Host`s, and `Mapping`s
+
+$productName$ 2.0 introduced the new **mandatory** `Listener` CRD, and made some changes
+to the `Host` and `Mapping` resources.
+
+### The `Listener` CRD
+
+The new [`Listener` CRD](../../topics/running/listener) defines where and how $productName$ should listen for requests from the network, and which `Host` definitions should be used to process those requests.
+
+**Note that `Listener`s are never created by $productName$, and must be defined by the user.** If you do not
+define any `Listener`s, $productName$ will not listen anywhere for connections, and therefore won't do
+anything useful. It will log a `WARNING` to this effect.
+
+A `Listener` specifically defines:
+
+- `port`: a port number on which to listen for new requests;
+- `protocol` and `securityModel`: the protocol stack and security model to use (e.g. `HTTPS` using the `X-Forwarded-Proto` header); and
+- `hostBinding`: how to tell if a given `Host` should be associated with this `Listener`:
+  - a `Listener` can choose to consider all `Host`s, or only `Host`s in the same namespace as the `Listener`, or
+  - a `Listener` can choose to consider only `Host`s with a particular Kubernetes `label`.
+
+**Note that the `hostBinding` is mandatory.** A `Listener` _must_ specify how to identify the `Host`s to associate with the `Listener`, or the `Listener` will be rejected. This is intended to help prevent cases where a `Listener` mistakenly grabs too many `Host`s: if you truly need a `Listener` that associates with all `Host`s, the easiest way is to tell the `Listener` to look for `Host`s in all namespaces, with no further selectors, for example:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: all-hosts-listener
+spec:
+  port: 8080
+  securityModel: XFP
+  protocol: HTTPS
+  hostBinding:
+    namespace:
+      from: ALL
+```
+
+A `Listener` that has no associated `Host`s will be logged as a `WARNING`, and will not be included in the Envoy configuration generated by $productName$.
+
+Note also that there is no limit on how many `Listener`s may be created, and as such no limit on the number of ports to which a `Host` may be associated.
+
+  Learn more about Listener.
+ Learn more about Host. +
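+
+For the label-based association mentioned above, a `Listener` can use a standard Kubernetes label selector
+in its `hostBinding` instead of a namespace rule. A minimal sketch, assuming your `Host`s carry a
+hypothetical `exposed-by: public-listener` label (the label name and value are illustrative, not required
+by $productName$):
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: public-listener
+spec:
+  port: 8443
+  securityModel: XFP
+  protocol: HTTPS
+  hostBinding:
+    selector:                # associate only Hosts carrying this label
+      matchLabels:
+        exposed-by: public-listener
+```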
+
+### Wildcard `Host`s No Longer Created
+
+In $productName$ 1.X, $productName$ would make sure that a wildcard `Host`, with a `hostname` of `"*"`, was always present.
+$productName$ 2.X does **not** force a wildcard `Host`: if you need the wildcard behavior, you will need to create
+a `Host` with a hostname of `"*"`.
+
+Of particular note is that $productName$ **will not** respond to queries to an IP address unless a wildcard
+`Host` is present. If `foo.example.com` resolves to `10.11.12.13`, and the only `Host` has a
+`hostname` of `foo.example.com`, then:
+
+- requests to `http://foo.example.com/` will work, but
+- requests to `http://10.11.12.13/` will **not** work.
+
+Adding a `Host` with a `hostname` of `"*"` will allow the second query to work.
+
+  Learn more about Host.
+
+### `Host` and `Mapping` Association
+
+The [`Host` CRD](../../topics/running/host-crd) continues to define information about hostnames, TLS certificates, and how to handle requests that are "secure" (using HTTPS) or "insecure" (using HTTP). The [`Mapping` CRD](../../topics/using/intro-mappings) continues to define how to map the URL space to upstream services.
+
+However, as of $productName$ 2.0, a `Mapping` will not be associated with a `Host` unless at least one of the following is true:
+
+- The `Mapping` specifies a `hostname` attribute that matches the `Host` in question.
+
+  - Note that a `getambassador.io/v2` `Mapping` has `host` and `host_regex`, rather than `hostname`.
+  - A `getambassador.io/v3alpha1` `Mapping` will honor `host` and `host_regex` as a transition aid, but `host` and `host_regex` are deprecated in favor of `hostname`.
+  - A `Mapping` that specifies `host_regex: true` will be associated with all `Host`s. This is generally far less desirable than using `hostname` with a DNS glob.
+
+- The `Host` specifies a `mappingSelector` that matches the `Mapping`'s Kubernetes `label`s.
+
+  - Note that a `getambassador.io/v2` `Host` has a `selector`, rather than a `mappingSelector`.
+  - A `getambassador.io/v3alpha1` `Host` ignores `selector` and, instead, looks only at `mappingSelector`.
+  - While `selector` was given a default value when not specified, `mappingSelector` has no default and must be stated explicitly.
+
+Without either a `hostname` match or a `label` match, the `Mapping` will not be associated with the `Host` in question. This is intended to help manage memory consumption with large numbers of `Host`s and large numbers of `Mapping`s. A sketch of label-based association appears below.
+
+  Learn more about Host.
+ Learn more about Mapping. +
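+
+As a concrete sketch of label-based association, the following `Host` selects `Mapping`s by label rather
+than by hostname. All names here, including the `host: example-host` label and the `example-secret`
+Secret, are hypothetical and chosen only for illustration:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: foo.example.com
+  tlsSecret:
+    name: example-secret     # assumed to be an existing kubernetes.io/tls Secret
+  mappingSelector:
+    matchLabels:
+      host: example-host
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: example-mapping
+  labels:
+    host: example-host       # matches the Host's mappingSelector above
+spec:
+  prefix: /example/
+  service: example-service
+```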
+
+### Independent `Host` Actions
+
+Each `Host` can specify its `requestPolicy.insecure.action` independently of any other `Host`, allowing for HTTP routing as flexible as HTTPS routing.
+
+  Learn more about Host.
+
+### `Host`, `TLSContext`, and TLS Termination
+
+As of $productName$ 2.0, **`Host`s are required for TLS termination**. It is no longer sufficient to create a [`TLSContext`](../../topics/running/tls/#tlscontext) by itself; the [`Host`](../../topics/running/host-crd) is required.
+
+The minimal setup for TLS termination is therefore a Kubernetes `Secret` of type `kubernetes.io/tls`, and a `Host` that uses it:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+type: kubernetes.io/tls
+metadata:
+  name: minimal-secret
+data:
+  tls secret goes here
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: minimal-host
+spec:
+  hostname: minimal.example.com
+  tlsSecret:
+    name: minimal-secret
+```
+
+It is **not** necessary to explicitly state a `TLSContext` in the `Host`: setting `tlsSecret` is enough. Of course, `TLSContext` is still the ideal way to share TLS configuration between more than one `Host`. For further examples, see [Configuring $productName$ Communications](../../howtos/configure-communications).
+
+  Learn more about Host.
+ Learn more about TLSContext. +
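+
+One way to share TLS settings is a single `TLSContext` covering several hostnames; each hostname still
+needs its own `Host` for termination, as described above. A minimal sketch, with hypothetical hostnames
+and a hypothetical wildcard-certificate `Secret` named `wildcard-secret`:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: shared-tls
+spec:
+  hosts:                     # hostnames this context applies to
+  - one.example.com
+  - two.example.com
+  secret: wildcard-secret    # kubernetes.io/tls Secret holding a matching wildcard cert
+  min_tls_version: v1.2      # shared TLS settings live in one place
+```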
+
+### `Mapping`s, `TCPMapping`s, and TLS Origination
+
+A `getambassador.io/v2` `Mapping` or `TCPMapping` could specify `tls: true` to indicate TLS origination without supplying a certificate. This is not supported in `getambassador.io/v3alpha1`: instead, use an `https://` prefix on the `service`. In the [Mapping](../../topics/using/mappings/#using-tls), this is straightforward, but [there are more details for the `TCPMapping` when using TLS](../../topics/using/tcpmappings/#tcpmapping-and-tls).
+
+  Learn more about Mapping.
+
+### `Mapping`s and `labels`
+
+The `Mapping` CRD includes a `labels` field, used with rate limiting. The
+[syntax of the `labels`](../../topics/using/rate-limits#attaching-labels-to-requests) has changed
+for compatibility with Kubernetes 1.22.
+
+  Learn more about Mapping.
+
+### `Host`s and ACME
+
+In $productName$ 2.0, ACME will be disabled if a `Host` does not set `acmeProvider` at all (prior to $productName$ 2.0, not mentioning `acmeProvider` would result in the ACME client attempting, and failing, to start). If `acmeProvider` is set, but `acmeProvider.authority` is not set, the ACME client will continue to default to Let's Encrypt, in order to preserve compatibility with $productName$ prior to $productName$ 2.0. For further examples, see [Configuring $productName$ to Communicate](../../howtos/configure-communications).
+
+  Learn more about Host.
+
+## 4. Other Changes
+
+### Envoy V3 API by Default
+
+By default, $productName$ 2.X will configure Envoy using the
+[V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api).
+
+### More Performant Reconfiguration by Default
+
+In $productName$ 1.X, the environment variable `AMBASSADOR_FAST_RECONFIGURE` could be used to enable a higher-performance implementation of the code $productName$ uses to validate and generate Envoy configuration. In $productName$ 2.X, this higher-performance mode is always enabled.
+
+### Changes to the `ambassador` `Module`, and the `tls` `Module`
+
+It is no longer possible to configure TLS using the `tls` element of the `ambassador` `Module` or using the `tls` `Module`. Both of these cases are correctly covered by the `TLSContext` resource.
+
+With the introduction of the `Listener` resource, a few settings have moved from the `Module` to the `Listener`.
+
+Configuration for the `PROXY` protocol is part of the `Listener` resource in $productName$ 2.X, so the `use_proxy_protocol` element of the `ambassador` `Module` is no longer supported. Note that the `Listener` resource can configure `PROXY` protocol handling per-`Listener`, rather than having a single global setting. For further information, see the [`Listener` documentation](../../topics/running/listener).
+
+`xff_num_trusted_hops` has been removed from the `Module`, and its functionality has been moved to the `l7Depth` setting in the `Listener` resource.
+
+  Learn more about Listener.
+
+### `TLSContext` `redirect_cleartext_from` and `Host` `insecure.additionalPort`
+
+`redirect_cleartext_from` has been removed from the `TLSContext` resource; `insecure.additionalPort` has been removed from the `Host` CRD. Both of these cases are covered by adding additional `Listener`s. For further examples, see [Configuring $productName$ Communications](../../howtos/configure-communications).
+
+### Service Preview No Longer Supported
+
+Service Preview is no longer supported as of $productName$ 2.X, as its use cases are supported by Telepresence.
+
+### Edge Policy Console No Longer Supported
+
+The Edge Policy Console has been removed as of $productName$ 2.X, in favor of Ambassador Cloud.
+
+### `Project` CRD No Longer Supported
+
+The `Project` CRD has been removed as of $productName$ 2.X, in favor of Argo.
diff --git a/docs/edge-stack/latest/about/changes-3.y.md b/docs/edge-stack/latest/about/changes-3.y.md
new file mode 100644
index 000000000..fddc2b62e
--- /dev/null
+++ b/docs/edge-stack/latest/about/changes-3.y.md
@@ -0,0 +1,56 @@
+import Alert from '@material-ui/lab/Alert';
+
+Major Changes in $productName$ 3.X
+==================================
+
+The 3.X family introduces a number of changes to ensure $productName$
+keeps up with the latest Envoy versions and to support new features such as HTTP/3.
+We welcome feedback! Join us on [Slack](http://a8r.io/slack) and let us know what you think.
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy's removal of support for V2 Transport Protocol features. Below we outline these changes and things to consider when upgrading.
+
+## 1. Envoy Upgraded to 1.22
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy **1.22**, which keeps $productName$ up-to-date with
+the latest security fixes, bug fixes, performance improvements, and feature enhancements provided by Envoy Proxy. Most of the changes are under the hood, but the most notable change to developers is the removal of support for the Envoy V2 Transport Protocol. This means all external filters and LogServices must be updated to use the V3 Protocol.
+
+This also means some of the v2 runtime bootstrap flags have been removed as well:
+
+```yaml
+# No longer necessary because this was removed from Envoy
+# $productName$ was already converted to use the compressor API
+# https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
+"envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,
+
+# Upgraded to v3, all support for V2 Transport Protocol removed
+"envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
+"envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,
+
+# Developers will need to upgrade TracingService to the V3 protocol, which no longer supports HTTP_JSON_V1
+"envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
+
+# V2 protocol removed, so this flag is no longer necessary
+"envoy.reloadable_features.enable_deprecated_v2_api": true,
+```
+
+  Learn more about Envoy Proxy changes.
+
+## 2. Envoy V2 xDS Transport Protocol Support Removed
+
+With the upgrade to Envoy **1.22**, the V2 Envoy Transport Protocol is no longer supported and has been removed.
+$productName$ 3.X **only** supports the [V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api).
+
+The `AuthService`, `RateLimitService`, `LogService`, and `ExternalFilters` that use the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If it is not set, or is set to `v2`, an error will be posted and a static response will be returned.
+
+## 3. Envoy V2 xDS Configuration Support Removed
+
+Envoy can no longer be configured to use the v2 xDS configuration and will always use v3 xDS configuration. This change removes the `AMBASSADOR_ENVOY_API_VERSION` environment variable, because it is no longer configurable and will have no effect.
+
+## 4. Zipkin HTTP_JSON_V1 Support Removed
+
+Envoy removed support for the older `HTTP_JSON_V1` `collector_endpoint_version`. If you are using the `zipkin` driver with the `TracingService`,
+then you will have to update it to use `HTTP_JSON` or `HTTP_PROTO`.
diff --git a/docs/edge-stack/latest/about/faq.md b/docs/edge-stack/latest/about/faq.md
new file mode 100644
index 000000000..59b1633f6
--- /dev/null
+++ b/docs/edge-stack/latest/about/faq.md
@@ -0,0 +1,78 @@
+# Frequently Asked Questions
+
+## General
+
+### Why $productName$?
+
+Kubernetes shifts application architecture toward microservices, and the
+development workflow toward full-cycle development. $productName$ is designed for
+the Kubernetes world with:
+
+* Sophisticated traffic management capabilities (thanks to its use of [Envoy Proxy](https://www.envoyproxy.io)), such as load balancing, circuit breakers, rate limits, and automatic retries.
+* API management capabilities such as a developer portal and OpenID Connect integration for Single Sign-On.
+* A declarative, self-service management model built on Kubernetes Custom Resource Definitions, enabling GitOps-style continuous delivery workflows.
+
+We've written about [the history of $productName$](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844), [Why $productName$ In Depth](../why-ambassador), [Features and Benefits](../features-and-benefits), and the [evolution of API Gateways](../../topics/concepts/microservices-api-gateways/).
+
+### What's the difference between $OSSproductName$ and $AESproductName$?
+
+$OSSproductName$ is a CNCF Incubating project and provides the open-source core of $AESproductName$. Originally we called $OSSproductName$ the "Ambassador API Gateway", but as the project evolved, we realized that the functionality we were building had extended far beyond an API Gateway. In particular, the $AESproductName$ is intended to provide all the functionality you need at the edge -- hence, an "edge stack." This includes an API Gateway, ingress controller, load balancer, developer portal, and more.
+
+### How is $AESproductName$ licensed?
+
+The core $OSSproductName$ is open source under the Apache Software License 2.0. The GitHub repository for the core is [https://github.com/emissary-ingress/emissary](https://github.com/emissary-ingress/emissary). Some additional features of the $AESproductName$ (e.g., Single Sign-On) are not open source and are available under a proprietary license.
+
+### Can I use the add-on features for $AESproductName$ for free?
+
+Yes! For more details please see the [$productName$ Licenses page](../../topics/using/licenses).
+
+### How does $productName$ use Envoy Proxy?
+
+$productName$ uses [Envoy Proxy](https://www.envoyproxy.io) as its core proxy. Envoy is an open-source, high-performance proxy originally written by Lyft. Envoy is now part of the Cloud Native Computing Foundation.
+
+### Is $productName$ production ready?
+
+Yes. Thousands of organizations, large and small, run $productName$ in production.
+Public users include Chick-fil-A, ADP, Microsoft, NVIDIA, and AppDirect, among others.
+
+### What is the performance of $productName$?
+
+There are many dimensions to performance. We published a benchmark of [$productName$ performance on Kubernetes](/resources/envoyproxy-performance-on-k8s/). Our internal performance regression tests cover many other scenarios; we expect to publish more data in the future.
+
+### What's the difference between a service mesh (such as Istio) and $productName$?
+
+Service meshes focus on routing internal traffic from service to service
+("east-west"). $productName$ focuses on traffic into your cluster ("north-south").
+While both a service mesh and $productName$ can route L7 traffic, the reality is that
+these use cases are quite different. Many users will integrate $productName$ with a
+service mesh. Production customers of $productName$ have integrated with Consul,
+Istio, and Linkerd2.
+
+## Common Configurations
+
+### How do I disable the 404 landing page?
+
+See the [Controlling the $productName$ 404 Page](../../howtos/controlling-404) how-to.
+
+### How do I disable the default Admin mappings?
+
+See the [Protecting the Diagnostics Interface](../../howtos/protecting-diag-access) how-to.
+
+## Troubleshooting
+
+### How do I get help for $productName$?
+
+We have an online [Slack community](http://a8r.io/slack) with thousands of
+users. We try to help out as often as possible, although we can't promise a
+particular response time. If you need a guaranteed SLA, we also have commercial
+contracts. [Contact sales](/contact-us/) for more information.
+
+### What do I do when I get the error `no healthy upstream`?
+
+This error means that $productName$ could not connect to your backend service.
+Start by verifying that your backend service is actually available and
+responding by sending an HTTP request directly to the pod. Then, verify that
+$productName$ is routing by deploying a test service and seeing if the mapping
+works. Finally, verify that your load balancer is properly routing requests to
+$productName$. In general, verifying each network hop between your client and
+backend service is critical to finding the source of the problem.
diff --git a/docs/edge-stack/latest/about/features-and-benefits.md b/docs/edge-stack/latest/about/features-and-benefits.md
new file mode 100644
index 000000000..ecad16175
--- /dev/null
+++ b/docs/edge-stack/latest/about/features-and-benefits.md
@@ -0,0 +1,39 @@
+# Features and benefits
+
+In cloud-native organizations, developers frequently take on responsibility for the full development lifecycle of a service, from development to QA to operations. $productName$ was specifically designed for these organizations where developers have operational responsibility for their service(s).
+
+As such, the $productName$ is designed to be used by both developers and operators.
+
+## Self-Service via Kubernetes Annotations
+
+$productName$ is built from the start to support _self-service_ deployments -- a developer working on a new service doesn't have to go to Operations to get their service added to the mesh; they can do it themselves in a matter of seconds. Likewise, a developer can remove their service from the mesh, or merge services, or separate services, as needed, at their convenience. All of these operations are performed via Kubernetes resources or annotations, so they can easily integrate with your existing development workflow.
+
+## Flexible canary deployments
+
+Canary deployments are an essential component of cloud-native development workflows. In a canary deployment, a small percentage of production traffic is routed to a new version of a service to test it under real-world conditions. $productName$ allows developers to easily control and manage the amount of traffic routed to a given service through annotations.
[This tutorial](https://www.datawire.io/faster/canary-workflow/) covers a complete canary workflow using the $productName$. + +## Kubernetes-native architecture + +$productName$ relies entirely on Kubernetes for reliability, availability, and scalability. For example, $productName$ persists all state in Kubernetes, instead of requiring a separate database. Scaling the $productName$ is as simple as changing the replicas in your deployment, or using a [horizontal pod autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). + +$productName$ uses [Envoy](https://www.envoyproxy.io) for all traffic routing and proxying. Envoy is a modern L7 proxy that is used in production at companies including Lyft, Apple, Google, and Stripe. + +## gRPC and HTTP/2 support + +$productName$ fully supports gRPC and HTTP/2 routing, thanks to Envoy's extensive capabilities in this area. See [gRPC and $productName$](../../howtos/grpc) for more information. + +## Istio Integration + +$productName$ integrates with the [Istio](https://istio.io) service mesh as the edge proxy. In this configuration, $productName$ routes external traffic to the internal Istio service mesh. See [Istio and $productName$](../../howtos/istio) for details. + +## Authentication + +$productName$ supports authenticating incoming requests with a custom authentication service, OAuth/OpenID Connect, or JWT. When configured, the $productName$ will check with a third party authentication service prior to routing an incoming request. For more information, see the [authentication guide](../../topics/using/filters/). + +## Rate limiting + +$productName$ supports rate limiting incoming requests. When configured, the $productName$ will check with a third party rate limit service prior to routing an incoming request. For more information, see the [rate limiting guide](../../topics/using/rate-limits/). + +## Integrated UI + +$productName$ includes a diagnostics service so that you can quickly debug issues associated with configuring the $productName$. For more information, see [running $productName$ in Production](../../topics/running). diff --git a/docs/edge-stack/latest/about/known-issues.md b/docs/edge-stack/latest/about/known-issues.md new file mode 100644 index 000000000..4a5c45f5b --- /dev/null +++ b/docs/edge-stack/latest/about/known-issues.md @@ -0,0 +1,20 @@ +import Alert from '@material-ui/lab/Alert'; + +Known Issues in $productName$ +============================= + +## 2.2.1 + +- TLS certificates using elliptic curves were incorrectly flagged as invalid. This issue is + corrected in $productName$ 2.2.2. + +## 2.2.0 + +- If $productName$'s Pods start before Redis is responding, it may be necessary to restart + $productName$ for rate limiting to function correctly. + +- When using the ACME client provided with $productName$, a delayed ACME response can + prevent the `Host` using ACME from becoming active. + + - Workaround: Make sure you have a wildcard `Host` that does not use ACME. The insecure routing + action doesn't matter: it's fine for this `Host` to redirect or even reject insecure requests. diff --git a/docs/edge-stack/latest/about/why-ambassador.md b/docs/edge-stack/latest/about/why-ambassador.md new file mode 100644 index 000000000..f16def3a1 --- /dev/null +++ b/docs/edge-stack/latest/about/why-ambassador.md @@ -0,0 +1,54 @@ +# Why $productName$? + +$productName$ gives platform engineers a comprehensive, self-service edge stack for managing the boundary between end-users and Kubernetes. 
Built on the [Envoy Proxy](https://www.envoyproxy.io) and fully Kubernetes-native, $productName$ is made to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. A true edge stack, $productName$ can also be used to handle the functions of an API Gateway, a Kubernetes ingress controller, and a layer 7 load balancer (for more, see [this blog post](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d)).
+
+## How does $productName$ work?
+
+$productName$ is a Kubernetes-native [microservices API gateway](../../topics/concepts/microservices-api-gateways) built on the open core of $OSSproductName$ and the [Envoy Proxy](https://www.envoyproxy.io). $productName$ is built from the ground up to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. $productName$ can also be used to handle the functions of a Kubernetes ingress controller and load balancer (for more, see [this blog post](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d)).
+
+## Cloud-native applications today
+
+Traditional cloud applications were built using a monolithic approach. These applications were designed, coded, and deployed as a single unit. Today's cloud-native applications, by contrast, consist of many individual (micro)services. This results in an architecture that is:
+
+* __Heterogeneous__: Services are implemented using multiple (polyglot) languages, they are designed using multiple architecture styles, and they communicate with each other over multiple protocols.
+* __Dynamic__: Services are frequently updated and released (often without coordination), which results in a constantly-changing application.
+* __Decentralized__: Services are managed by independent product-focused teams, with different development workflows and release cadences.
+
+### Heterogeneous services
+
+$productName$ is commonly used to route traffic to a wide variety of services. It supports:
+
+* Configuration on a *per-service* basis, enabling fine-grained control of timeouts, rate limiting, authentication policies, and more.
+* A wide range of L7 protocols natively, including HTTP, HTTP/2, gRPC, gRPC-Web, and WebSockets.
+* Raw TCP routing for services that use protocols not directly supported by $productName$.
+
+### Dynamic services
+
+Service updates result in a constantly changing application. The dynamic nature of cloud-native applications introduces new challenges around configuration updates, release, and testing. $productName$:
+
+* Enables [progressive delivery](../../topics/concepts/progressive-delivery), with support for canary routing and traffic shadowing.
+* Exposes high-resolution observability metrics, providing insight into service behavior.
+* Uses a zero downtime configuration architecture, so configuration changes have no end-user impact.
+
+### Decentralized workflows
+
+Independent teams can create their own workflows for developing and releasing functionality that are optimized for their specific service(s). With $productName$, teams can:
+
+* Leverage a [declarative configuration model](../../topics/concepts/gitops-continuous-delivery), making it easy to understand the canonical configuration and implement GitOps-style best practices.
+* Independently configure different aspects of $productName$, eliminating the need to request configuration changes through a centralized operations team (see the sketch below).
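+
+As an illustration of this self-service model, a single team-owned `Mapping` can both canary and tune
+a service. A minimal sketch with hypothetical names; `weight` and `timeout_ms` are per-`Mapping`
+settings, so no central configuration needs to change:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: my-service-canary
+spec:
+  hostname: "*"
+  prefix: /my-service/
+  service: my-service-v2   # the canary build of the team's service
+  weight: 10               # send roughly 10% of matching traffic to the canary
+  timeout_ms: 4000         # per-service timeout, owned by the same team
+```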
+
+## $productName$ is engineered for Kubernetes
+
+$productName$ takes full advantage of Kubernetes and Envoy Proxy.
+
+* All of the state required for $productName$ is stored directly in Kubernetes, eliminating the need for an additional database.
+* The $productName$ team has invested extensive engineering effort and integration testing to ensure optimal performance and scale of Envoy and Kubernetes.
+
+## For more information
+
+[Deploy $productName$ today](../../tutorials/getting-started) and join the community [Slack Channel](http://a8r.io/slack).
+
+Interested in learning more?
+
+* [Why did we start building $productName$?](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844)
+* [$productName$ Architecture overview](../../topics/concepts/architecture)
diff --git a/docs/edge-stack/latest/aes-pages.yml b/docs/edge-stack/latest/aes-pages.yml
new file mode 100644
index 000000000..bd03e5c39
--- /dev/null
+++ b/docs/edge-stack/latest/aes-pages.yml
@@ -0,0 +1,24 @@
+# AES pages should be represented in a yaml array with the pathnames as the value
+# Ex:
+# - /docs
+# - /reference
+#
+- /topics/using/filters/
+- /topics/using/filters/oauth2
+- /topics/using/filters/jwt
+- /topics/using/filters/external
+- /topics/using/filters/plugin
+- /topics/using/edgectl/
+- /topics/using/edgectl/edge-control
+- /topics/using/edgectl/edge-control-in-ci
+- /topics/using/edgectl/service-preview-install
+- /topics/using/edgectl/service-preview-reference
+- /topics/using/edgectl/service-preview-tutorial
+- /topics/using/dev-portal
+- /topics/using/edge-policy-console
+- /topics/using/rate-limits/rate-limits/
+- /topics/running/aes-redis
+- /topics/running/aes-extensions/
+- /topics/running/aes-extensions/authentication
+- /topics/running/aes-extensions/ratelimit
diff --git a/docs/edge-stack/latest/api-gateway-pages.yml b/docs/edge-stack/latest/api-gateway-pages.yml
new file mode 100644
index 000000000..b9010fdbc
--- /dev/null
+++ b/docs/edge-stack/latest/api-gateway-pages.yml
@@ -0,0 +1,72 @@
+# API Gateway pages should be represented in a yaml array with the pathnames as the value
+# Ex:
+# - /docs
+# - /reference
+#
+- /docs/dev-guide/canary-release-concepts
+- /docs/dev-guide/test-in-prod
+- /docs/guides/
+- /reference/add_request_headers
+- /reference/add_response_headers
+- /reference/ambassador-with-aws
+- /reference/canary
+- /reference/circuit-breakers
+- /reference/configuration
+- /reference/core/ambassador
+- /reference/core/crds
+- /reference/core/ingress-controller
+- /reference/core/load-balancer
+- /reference/core/resolvers
+- /reference/core/tls
+- /reference/cors
+- /reference/diagnostics
+- /reference/gzip
+- /reference/headers
+- /reference/host
+- /reference/host-crd
+- /reference/ambassadormappings
+- /reference/modules
+- /reference/prefix_regex
+- /reference/rate-limits
+- /reference/redirects
+- /reference/remove_request_headers
+- /reference/remove_response_headers
+- /reference/retries
+- /reference/rewrites
+- /reference/running
+- /reference/services/auth-service
+- /reference/services/log-service
+- /reference/services/rate-limit-service
+- /reference/services/services
+- /reference/services/tracing-service
+- /reference/shadowing
+- /reference/statistics
+- /reference/tcpmappings
+- /reference/timeouts
+- /reference/tls/cleartext-redirection
+- /reference/tls/client-cert-validation
+- /reference/tls/mtls
+- /reference/tls/origination
+- /user-guide/auth-tutorial
+- /user-guide/bare-metal
+- /user-guide/cd-declarative-gitops
+- /user-guide/cert-manager
+- /user-guide/consul +- /user-guide/early-access +- /user-guide/gitops-ambassador +- /user-guide/grpc +- /user-guide/helm +- /user-guide/install-ambassador-oss +- /user-guide/knative +- /user-guide/linkerd2 +- /user-guide/monitoring +- /user-guide/rate-limiting +- /user-guide/rate-limiting-tutorial +- /user-guide/security +- /user-guide/sni +- /user-guide/tls-termination +- /user-guide/tracing-tutorial +- /user-guide/tracing-tutorial-datadog +- /user-guide/tracing-tutorial-zipkin +- /user-guide/websockets-ambassador +- /user-guide/with-istio diff --git a/docs/edge-stack/latest/doc-links.yml b/docs/edge-stack/latest/doc-links.yml new file mode 100644 index 000000000..50ebb708a --- /dev/null +++ b/docs/edge-stack/latest/doc-links.yml @@ -0,0 +1,287 @@ + - title: Quick start + link: /tutorials/getting-started + - title: Core concepts + items: + - title: Kubernetes network architecture + link: /topics/concepts/kubernetes-network-architecture + - title: 'The Ambassador operating model: GitOps and continuous delivery' + link: /topics/concepts/gitops-continuous-delivery + - title: Progressive delivery + link: /topics/concepts/progressive-delivery + - title: Microservices API gateways + link: /topics/concepts/microservices-api-gateways + - title: $productName$ architecture + link: /topics/concepts/architecture + - title: Rate limiting at the edge + link: /topics/concepts/rate-limiting-at-the-edge + - title: Installation and updates + link: /topics/install/ + items: + - title: Install with Helm + link: /topics/install/helm + - title: Install with Kubernetes YAML + link: /topics/install/yaml-install + - title: Try the demo with Docker + link: /topics/install/docker + - title: Upgrade or migrate to a newer version + link: /topics/install/migration-matrix + - title: Edge Stack user guide + items: + - title: Deployment + items: + - title: Deployment architecture + link: /topics/running/ambassador-deployment + - title: $productName$ environment variables and ports + link: /topics/running/environment + - title: $productName$ and Redis + link: /topics/running/aes-redis + - title: $productName$ with AWS + link: /topics/running/ambassador-with-aws + - title: $productName$ with GKE + link: /topics/running/ambassador-with-gke + - title: Advanced deployment configuration + link: /topics/running/running + - title: Performance and scaling $productName$ + link: /topics/running/scaling + - title: Active health checking configuration + link: /howtos/active-health-checking + - title: HTTP/3 configuration + items: + - title: HTTP3 setup in $productName$ + link: /topics/running/http3 + - title: HTTP/3 with AKS + link: /howtos/http3-aks + - title: HTTP/3 with EKS + link: /howtos/http3-eks + - title: HTTP/3 with GKE + link: /howtos/http3-gke + - title: Web Application Firewalls + items: + - title: $productName$'s Web Application Firewall + link: /howtos/web-application-firewalls + - title: Configuring Web Application Firewall rules + link: /howtos/web-application-firewalls-config + - title: Using Web Application Firewalls in Production + link: /howtos/web-application-firewalls-in-production + - title: Service routing and communication + items: + - title: Configuring $productName$ to communicate + link: /howtos/configure-communications + - title: Get traffic from the edge + link: /howtos/route + - title: TCP connections + link: /topics/using/tcpmappings + - title: gRPC connections + link: /howtos/grpc + - title: WebSocket connections + link: /howtos/websockets + - title: Authentication + items: + - title: Basic 
authentication + link: /howtos/ext-filters + - title: Using the OAuth2 filter for SSO + link: /howtos/oauth-oidc-auth + - title: Single Sign-On with Google + link: /howtos/sso/google + - title: Single Sign-On with Keycloak + link: /howtos/sso/keycloak + - title: Kubernetes SSO with OIDC and Keycloak + link: /howtos/auth-kubectl-keycloak + - title: Single Sign-On with Okta + link: /howtos/sso/okta + - title: Single Sign-On with Auth0 + link: /howtos/sso/auth0 + - title: Single Sign-On with Azure AD + link: /howtos/sso/azure + - title: Single Sign-On with OneLogin + link: /howtos/sso/onelogin + - title: Single Sign-On with Salesforce + link: /howtos/sso/salesforce + - title: Single Sign-On with UAA + link: /howtos/sso/uaa + - title: Authentication extension + link: /topics/running/aes-extensions/authentication + - title: Rate limiting + items: + - title: Rate limiting in $productName$ + link: /howtos/advanced-rate-limiting + - title: Basic rate limiting + link: /topics/using/rate-limits/ + - title: Rate limiting on token claims + link: /howtos/token-ratelimit + - title: Rate limiting reference + link: /topics/using/rate-limits/rate-limits + - title: Rate limiting extension + link: /topics/running/aes-extensions/ratelimit + - title: Service monitoring + items: + - title: Explore distributed tracing and Kubernetes monitoring + link: /howtos/dist-tracing + - title: Distributed tracing with Datadog + link: /howtos/tracing-datadog + - title: Distributed tracing with Zipkin + link: /howtos/tracing-zipkin + - title: Distributed tracing with LightStep + link: /howtos/tracing-lightstep + - title: Monitoring with Prometheus and Grafana + link: /howtos/prometheus + - title: Statistics + link: /topics/running/statistics + - title: Envoy statistics with StatsD + link: /topics/running/statistics/envoy-statsd + - title: The metrics endpoint + link: /topics/running/statistics/8877-metrics + - title: $productName$ integrations + items: + - title: Knative Serverless Framework + link: /howtos/knative + - title: ExternalDNS integration + link: /howtos/external-dns + - title: Consul integration + link: /howtos/consul + - title: Istio integration + link: /howtos/istio + - title: Linkerd 2 integration + link: /howtos/linkerd2 + - title: Technical reference + items: + - title: Custom resources + items: + - title: The Host resource + link: /topics/running/host-crd + - title: The Listener resource + link: /topics/running/listener + - title: The Module resource + link: /topics/running/ambassador + - title: The Mapping resource + link: /topics/using/intro-mappings + - title: Advanced Mapping configuration + link: /topics/using/mappings + - title: TLS configuration + items: + - title: TLS overview + link: /topics/running/tls/ + - title: Cleartext support + link: /topics/running/tls/cleartext-redirection + - title: Mutual TLS (mTLS) + link: /topics/running/tls/mtls + - title: Server Name Indication (SNI) + link: /topics/running/tls/sni + - title: TLS origination + link: /topics/running/tls/origination + - title: TLS termination and enabling HTTPS + link: /howtos/tls-termination + - title: Using cert-manager + link: /howtos/cert-manager + - title: Client certificate validation + link: /howtos/client-cert-validation + - title: Filters + items: + - title: Filters and Filter policies + link: /topics/using/filters/ + - title: OAuth2 Filter + link: /topics/using/filters/oauth2 + - title: JWT Filter + link: /topics/using/filters/jwt + - title: External Filter + link: /topics/using/filters/external + - title: Plugin Filter + 
link: /topics/using/filters/plugin + - title: API Keys Filter + link: /topics/using/filters/apikeys + - title: Ingress and load balancing + items: + - title: AuthService settings + link: /topics/using/authservice + - title: Automatic retries + link: /topics/using/retries + - title: Canary releases + link: /topics/using/canary + - title: Circuit Breakers + link: /topics/using/circuit-breakers + - title: Cross-Origin Resource Sharing (CORS) + link: /topics/using/cors + - title: Ingress controller + link: /topics/running/ingress-controller + - title: Load balancing + link: /topics/running/load-balancer + - title: Service discovery and resolvers + link: /topics/running/resolvers + - title: Headers + items: + - title: Headers overview + link: /topics/using/headers/headers + - title: Add request headers + link: /topics/using/headers/add_request_headers + - title: Remove request headers + link: /topics/using/headers/remove_request_headers + - title: Add response headers + link: /topics/using/headers/add_response_headers + - title: Remove response headers + link: /topics/using/headers/remove_response_headers + - title: Header-based routing + link: /topics/using/headers/headers + - title: Host header + link: /topics/using/headers/host + - title: Routing + items: + - title: Keepalive + link: /topics/using/keepalive + - title: Method-based routing + link: /topics/using/method + - title: Prefix regex + link: /topics/using/prefix_regex + - title: Query parameter-based routing + link: /topics/using/query_parameters/ + - title: Redirects + link: /topics/using/redirects + - title: Rewrites + link: /topics/using/rewrites + - title: Timeouts + link: /topics/using/timeouts + - title: Traffic shadowing + link: /topics/using/shadowing + - title: Plug-in services + items: + - title: Authentication service + link: /topics/running/services/auth-service + - title: ExtAuth protocol + link: /topics/running/services/ext_authz + - title: Log service + link: /topics/running/services/log-service + - title: Rate limit service + link: /topics/running/services/rate-limit-service + - title: Tracing service + link: /topics/running/services/tracing-service + - title: Traffic management + items: + - title: Custom error responses + link: /topics/running/custom-error-responses + - title: Gzip compression + link: /topics/running/gzip + - title: API + items: + - title: Gateway API + link: /topics/using/gateway-api + - title: Developer Portal + link: /topics/using/dev-portal + - title: FAQs + link: /about/faq + - title: Troubleshooting + link: /topics/running/debugging + - title: Known issues + link: /about/known-issues + - title: Changes in $productName$ 2.X + link: /about/changes-2.x + - title: Changes in $productName$ 3.X + link: /about/changes-3.y + - title: Release Notes + link: /release-notes + - title: Community + link: /community + - title: End of Life Policy + link: /about/aes-emissary-eol + - title: $productName$ Licenses + link: topics/using/licenses + - title: Open Source Dependency Licenses + link: licenses diff --git a/docs/edge-stack/latest/howtos/advanced-rate-limiting.md b/docs/edge-stack/latest/howtos/advanced-rate-limiting.md new file mode 100644 index 000000000..1c6f6ff51 --- /dev/null +++ b/docs/edge-stack/latest/howtos/advanced-rate-limiting.md @@ -0,0 +1,246 @@ +# Advanced rate limiting + +$productName$ features a built-in [Rate Limit Service (RLS)](../../topics/running/services/rate-limit-service/#external-rate-limit-service). 
The $productName$ RLS uses a decentralized configuration model that enables individual teams to manage their own [rate limits](https://www.getambassador.io/learn/kubernetes-glossary/rate-limiting) independently.
+
+All of the examples on this page use the backend service of the quote sample application to illustrate how to perform the rate limiting functions.
+
+## Rate Limiting in $productName$
+
+In $productName$, the `RateLimit` resource defines the policy for rate limiting. The rate limit policy is applied to individual requests according to the labels you add to the `Mapping` resource. This allows you to assign labels based on the particular needs of your rate limiting policies and apply the `RateLimit` policies to only the domains in the related `Mapping` resource.
+
+You can apply a `RateLimit` policy globally, to all requests with matching labels, from the `Module` resource. This can be used in conjunction with the `Mapping` resource to have a global rate limit with more granular rate limiting for specific requests that go through that specific `Mapping` resource.
+
+To enact rate limiting policies:
+
+* Each domain you target needs to have labels.
+* For individual requests, the service's `Mapping` resource needs to contain the labels related to the domains you want to apply the rate limiting policy to.
+* For global requests, the service's `Module` resource needs to contain the labels related to the policy you want to apply.
+* The `RateLimit` resource needs to set the rate limit policy for the labels in the `Mapping` resource.
+
+## Rate limiting for availability
+
+Global rate limiting applies to the entire Kubernetes service mesh. This example shows how to limit the `quote` service to 3 requests per minute.
+
+1. First, add a request label to the `request_label_group` of the `quote` service's `Mapping` resource. This example uses `backend` for the label:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: quote-backend
+   spec:
+     hostname: "*"
+     prefix: /backend/
+     service: quote
+     labels:
+       ambassador:
+       - request_label_group:
+         - generic_key:
+             value: backend
+   ```
+
+   Apply the mapping configuration changes with `kubectl apply -f quote-backend.yaml`.
+
+     You need to use v2 or later for the apiVersion in the Mapping resource. Previous versions do not support labels.
+
+2. Next, configure the `RateLimit` resource for the service. Create a new YAML file named `backend-ratelimit.yaml` and apply the rate limit details as follows:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: RateLimit
+   metadata:
+     name: backend-rate-limit
+   spec:
+     domain: ambassador
+     limits:
+     - pattern: [{generic_key: backend}]
+       rate: 3
+       unit: minute
+   ```
+
+   In the code above, the `generic_key` is a hard-coded value that is used when you add a single string label to a request.
+
+3. Deploy the rate limit with `kubectl apply -f backend-ratelimit.yaml`.
+
+## Per user rate limiting
+
+Per user rate limiting enables you to apply the defined rate limit to specific IP addresses. To allow per user rate limits, you need to make sure you've properly configured $productName$ to [propagate your original client IP address](../../topics/running/ambassador/#trust-downstream-client-ip).
+
+This example shows how to use the `remote_address` special value in the mapping to target specific IP addresses:
+
+1. Add a request label to the `request_label_group` of the `quote` service's `Mapping` resource.
This example uses `remote_address` for the label:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: quote-backend
+   spec:
+     hostname: "*"
+     prefix: /backend/
+     service: quote
+     labels:
+       ambassador:
+       - request_label_group:
+         - remote_address:
+             key: remote_address
+   ```
+
+2. Update the rate limit amounts for the `RateLimit` service and add `remote_address` to the pattern:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: RateLimit
+   metadata:
+     name: backend-rate-limit
+   spec:
+     domain: ambassador
+     limits:
+     - pattern: [{remote_address: "*"}]
+       rate: 3
+       unit: minute
+   ```
+
+## Load shedding
+
+Another technique for rate limiting involves load shedding. With load shedding, you can define which HTTP request method to allow or deny.
+
+This example shows how to implement per user rate limiting along with load shedding on `GET` requests.
+To allow per user rate limits, you need to make sure you've properly configured $productName$ to [propagate your original client IP address](../../topics/running/ambassador#trust-downstream-client-ip).
+
+1. Add request labels to the `request_label_group` of the `quote` service's `Mapping` resource. This example uses `remote_address` for the per user limit, and `backend_http_method` for load shedding. The load shedding uses `":method"` to identify that the `RateLimit` will use an HTTP request method in its pattern.
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: quote-backend
+   spec:
+     hostname: "*"
+     prefix: /backend/
+     service: quote
+     labels:
+       ambassador:
+       - request_label_group:
+         - remote_address:
+             key: remote_address
+         - request_headers:
+             key: backend_http_method
+             header_name: ":method"
+   ```
+
+2. Update the rate limit amounts for the `RateLimit` service.
+For the rate limit `pattern`, include the `remote_address` IP address and the `backend_http_method`.
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: RateLimit
+   metadata:
+     name: backend-rate-limit
+   spec:
+     domain: ambassador
+     limits:
+     - pattern: [{remote_address: "*"}, {backend_http_method: GET}]
+       rate: 3
+       unit: minute
+   ```
+
+   When a pattern has multiple criteria, the rate limit runs when any of the rules of the pattern match. For the example above, this means either a `remote_address` or `backend_http_method` pattern triggers the rate limiting.
+
+## Global rate limiting
+
+Similar to the per user rate limiting, you can use [global rate limiting](../../topics/using/rate-limits) to assign a rate limit to calls to your service from any unique IP address. Unlike the previous examples, you need to add your labels to the `Module` resource rather than the `Mapping` resource. This is because the `Module` resource applies the labels to all the requests in $productName$, whereas the labels in `Mapping` only apply to the requests that use that `Mapping` resource.
+
+1. Add a request label to the `request_label_group` of the `quote` service's `Module` resource. This example uses the `remote_address` special value.
+
+   ```yaml
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Module
+   metadata:
+     name: ambassador
+   spec:
+     config:
+       use_remote_address: true
+       default_label_domain: ambassador
+       default_labels:
+         ambassador:
+           defaults:
+           - remote_address:
+               key: remote_address
+   ```
+2. Update the rate limit amounts for the `RateLimit` service and add `remote_address` to the pattern:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: RateLimit
+   metadata:
+     name: global-rate-limit
+   spec:
+     domain: ambassador
+     limits:
+     - pattern: [{remote_address: "*"}]
+       rate: 10
+       unit: minute
+   ```
+
+### Bypassing a global rate limit
+
+Sometimes, you may have an API that cannot handle as much load as others in your cluster. In this case, a global rate limit may not be enough to ensure this API is not overloaded with requests from a user. To protect this API, you can create a label that tells $productName$ to apply a stricter limit on requests.
+In the example above, the global rate limit is defined in the `Module` resource. This applies the limit to all requests. In conjunction with the global limit defined in the `Module` resource, you can add more granular rate limiting to a `Mapping` resource, which will only apply to requests that use that `Mapping`.
+
+1. In addition to the configurations applied in the global rate limit example above, add an additional label to the `request_label_group` of the `Mapping` resource. This example uses `backend` for the label:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: quote-backend
+   spec:
+     hostname: "*"
+     prefix: /backend/
+     service: quote
+     labels:
+       ambassador:
+       - request_label_group:
+         - generic_key:
+             value: backend
+   ```
+
+2. Now, the `request_label_group` contains both the `generic_key: backend` and the `remote_address` key applied from the global rate limit. This creates a separate `RateLimit` object for this route:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: RateLimit
+   metadata:
+     name: backend-rate-limit
+   spec:
+     domain: ambassador
+     limits:
+     - pattern: [{remote_address: "*"}, {generic_key: backend}]
+       rate: 3
+       unit: minute
+   ```
+
+   Requests to `/backend/` are now limited to 3 requests per minute. All other requests use the global rate limit policy.
+
+## Rate limit matching rules
+
+The following rules apply to the rate limit patterns:
+
+* Patterns are order-sensitive and must be entered in the same order in which a request is labeled.
+* Every label in a label group must exist in the pattern in order for matching to occur.
+* By default, any type of failure lets the request pass through (fail open).
+* $productName$ sets a hard timeout of 20ms on the rate limiting service. If the rate limit service does not respond within the timeout period, the request passes through.
+* If a pattern does not match, the request passes through.
+
+## Troubleshooting rate limiting
+
+The most common source of failure of the rate limiting service occurs when the labels generated by $productName$ do not match the rate limiting pattern. By default, the rate limiting service logs all incoming labels from $productName$. Use a tool such as [Stern](https://github.com/stern/stern) to watch the rate limiting logs from $productName$ and ensure the labels match your descriptor.
+
+## More
+
+For more on rate limiting, see the [rate limit guide](../../topics/using/rate-limits/).
diff --git a/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md b/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md
new file mode 100644
index 000000000..04996fd35
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md
@@ -0,0 +1,294 @@
+# Kubernetes SSO with OIDC and Keycloak
+
+Developers use `kubectl` to access Kubernetes clusters.
## Troubleshooting rate limiting

The most common source of failure of the rate limiting service occurs when the labels generated by $productName$ do not match the rate limiting pattern. By default, the rate limiting service logs all incoming labels from $productName$. Use a tool such as [Stern](https://github.com/stern/stern) to watch the rate limiting logs from $productName$ and ensure the labels match your descriptor.

## More

For more on rate limiting, see the [rate limit guide](../../topics/using/rate-limits/).

diff --git a/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md b/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md
new file mode 100644
index 000000000..04996fd35
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/auth-kubectl-keycloak.md

# Kubernetes SSO with OIDC and Keycloak

Developers use `kubectl` to access Kubernetes clusters. By default, `kubectl` uses a certificate to authenticate to the Kubernetes API. This means that when multiple developers need to access a cluster, the certificate needs to be shared. Sharing the credentials to access a Kubernetes cluster presents a significant security problem: the certificate is easily compromised, and the consequences can be catastrophic.

In this tutorial, we walk through how to set up your Kubernetes cluster to add Single Sign-On support for `kubectl` using OpenID Connect (OIDC) and Keycloak. Instead of using a shared certificate, users will be able to use their own personal credentials to use `kubectl` with `kubelogin`.

## Prerequisites

This tutorial relies on $AESproductName$ to manage access to your Kubernetes cluster, and uses Keycloak as your identity provider. To get started:

**Note:** This guide was designed and validated using an Azure AKS cluster. It's possible that this procedure will work with other cloud providers, but there is a lot of variance in the authentication mechanisms for the Kubernetes API. See the troubleshooting note at the bottom for more info.

* Azure AKS Cluster [here](https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster)
* Install $AESproductName$ [here](../../topics/install/)
* Deploy Keycloak on Kubernetes [here](https://www.keycloak.org/getting-started/getting-started-kube)

## Cluster Setup

In this section, we'll configure your Kubernetes cluster for single sign-on.

### 1. Authenticate $AESproductName$ with Kubernetes API

1. Delete the openapi mapping from the Ambassador namespace with `kubectl delete -n ambassador ambassador-devportal-api`. (This mapping can conflict with `kubectl` commands.)

2. Create a new private key using `openssl genrsa -out aes-key.pem 4096`.

3. Create a file `aes-csr.cnf` and paste in the following config.

   ```cnf
   [ req ]
   default_bits = 2048
   prompt = no
   default_md = sha256
   distinguished_name = dn

   [ dn ]
   CN = ambassador-kubeapi # Required

   [ v3_ext ]
   authorityKeyIdentifier=keyid,issuer:always
   basicConstraints=CA:FALSE
   keyUsage=keyEncipherment,dataEncipherment
   extendedKeyUsage=serverAuth,clientAuth
   ```

4. Create a certificate signing request with the config file we just created: `openssl req -config ./aes-csr.cnf -new -key aes-key.pem -nodes -out aes-csr.csr`.

5. Create and apply the following YAML for a CertificateSigningRequest. Replace {{BASE64_CSR}} with the value from `cat aes-csr.csr | base64`. Note that this is `aes-csr.csr`, and not `aes-csr.cnf`.

   ```yaml
   apiVersion: certificates.k8s.io/v1beta1
   kind: CertificateSigningRequest
   metadata:
     name: aes-csr
   spec:
     groups:
     - system:authenticated
     request: {{BASE64_CSR}} # Base64 encoded aes-csr.csr
     usages:
     - digital signature
     - key encipherment
     - server auth
     - client auth
   ```

6. Check that the CSR was created: `kubectl get csr` (it will be in a pending state). After confirmation, run `kubectl certificate approve aes-csr`. You can check `kubectl get csr` again to see that it's in the `Approved, Issued` state.

7. Get the resulting certificate and put it into a pem file: `kubectl get csr aes-csr -o jsonpath='{.status.certificate}' | base64 -d > aes-cert.pem`.

8. Create a TLS `Secret` using our private key and public certificate: `kubectl create secret tls -n ambassador aes-kubeapi --cert ./aes-cert.pem --key ./aes-key.pem`

9. Create a `Mapping` and `TLSContext` for the Kube API.
   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: TLSContext
   metadata:
     name: aes-kubeapi-context
     namespace: ambassador
   spec:
     hosts:
     - "*"
     secret: aes-kubeapi
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Mapping
   metadata:
     name: aes-kubeapi-mapping
     namespace: ambassador
   spec:
     hostname: "*"
     prefix: /
     allow_upgrade:
     - spdy/3.1
     service: https://kubernetes.default.svc
     timeout_ms: 0
     tls: aes-kubeapi-context
   ```

10. Create RBAC for the "aes-kubeapi" user by applying the following YAML.

   ```yaml
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: aes-impersonator-role
   rules:
   - apiGroups: [""]
     resources: ["users", "groups", "serviceaccounts"]
     verbs: ["impersonate"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: aes-impersonator-rolebinding
   subjects:
   - apiGroup: rbac.authorization.k8s.io
     kind: User
     name: aes-kubeapi
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: aes-impersonator-role
   ```

As a quick check, you should be able to `curl https:///api` and get a response similar to the following:

   ```json
   {
     "kind": "APIVersions",
     "versions": [
       "v1"
     ],
     "serverAddressByClientCIDRs": [
       {
         "clientCIDR": "0.0.0.0/0",
         "serverAddress": "\"\":443"
       }
     ]
   }
   ```

### 2. Set up Keycloak config

1. Create a new Realm and Client (e.g. ambassador, ambassador)
2. Make sure that `http://localhost:8000` and `http://localhost:18000` are valid Redirect URIs
3. Set access type to confidential and Save
4. Go to the Credentials tab and note down the secret
5. Go to the Users tab and create a user with the first name "john"

### 3. Create a ClusterRole and ClusterRoleBinding for the OIDC user "john"

1. Add the following RBAC to create a user "john" that is only allowed to perform `kubectl get services` in the cluster.

   ```yaml
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: john-binding
   subjects:
   - kind: User
     name: john
     apiGroup: rbac.authorization.k8s.io
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: john-role
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: john-role
   rules:
   - apiGroups: [""]
     resources: ["services"]
     verbs: ["get", "list"]
   ```

2. Test the API again with the following two `curl` commands: `curl https:///api/v1/namespaces/default/services?limit=500 -H "Impersonate-User: john"` and `curl https:///api/v1/namespaces/default/pods?limit=500 -H "Impersonate-User: john"`. The first request should succeed, and the second should fail with the following response.

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"john\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
```

### 4. Create a JWT filter to authenticate the user

1. Create the following JWT `Filter` and `FilterPolicy` based on this template:

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: "kubeapi-jwt-filter"
     namespace: "ambassador"
   spec:
     JWT:
       jwksURI: https:///auth/realms//protocol/openid-connect/certs # If the keycloak instance is internal, you may want to use the internal k8s endpoint (e.g. http://keycloak.keycloak) instead of figuring out how to exclude JWKS requests from the FilterPolicy
       injectRequestHeaders:
       - name: "Impersonate-User" # Impersonate-User is mandatory; you can also add an Impersonate-Group header if you want to do group-based RBAC
         value: "{{ .token.Claims.given_name }}" # This uses the first name we specified in the Keycloak user account
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: "kubeapi-filter-policy"
     namespace: "ambassador"
   spec:
     rules:
     - host: "*"
       path: "*"
       filters:
       - name: kubeapi-jwt-filter
   ```
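As noted in the comment above, you can also impersonate groups. A minimal sketch, assuming your IdP adds a custom `groups` claim (a hypothetical claim name) to the token:

   ```yaml
   injectRequestHeaders:
   - name: "Impersonate-User"              # mandatory for Kubernetes user impersonation
     value: "{{ .token.Claims.given_name }}"
   - name: "Impersonate-Group"             # optional; enables group-based RBAC
     value: "{{ .token.Claims.groups }}"   # assumes a custom "groups" claim in the token
   ```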
## Client set up

Now, we need to set up the client. Each user who needs to access the Kubernetes cluster will need to follow these steps.

### 1. Install kubelogin

1. Install [kubelogin](https://github.com/int128/kubelogin#getting-started). Kubelogin is a `kubectl` plugin that enables OpenID Connect login with `kubectl`.

2. Edit your local Kubernetes config file (either `~/.kube/config`, or your `$KUBECONFIG` file) to include the following, making sure to replace the templated values.

   ```yaml
   apiVersion: v1
   kind: Config
   clusters:
   - name: azure-ambassador
     cluster:
       server: https://
   contexts:
   - name: azure-ambassador-kube-api
     context:
       cluster: azure-ambassador
       user: azure-ambassador
   users:
   - name: azure-ambassador
     user:
       exec:
         apiVersion: client.authentication.k8s.io/v1beta1
         command: kubectl
         args:
         - oidc-login
         - get-token
         - --oidc-issuer-url=https:///auth/realms/
         - --oidc-client-id=
         - --oidc-client-secret=
   ```

3. Switch to the context set above (in the example it's `azure-ambassador-kube-api`).

4. Run `kubectl get svc`. This should open a browser page to the Keycloak login. Type in the credentials for "john" and, on success, return to the terminal to see the kubectl response. Congratulations, you've set up Single Sign-On with Kubernetes!

5. Now try running `kubectl get pods`, and notice we get an `Error from server (Forbidden): pods is forbidden: User "john" cannot list resource "pods" in API group "" in the namespace "default"`. This is expected because we explicitly set up "john" to only have access to view `Service` resources, and not `Pods`.

### 2. Logging out

1. Delete the token cache with `rm -r ~/.kube/cache/oidc-login`
2. You may also have to remove session cookies in your browser or do a remote logout in the Keycloak admin page.

### Troubleshooting

1. Why isn't this process working in my `` cluster?
   Authentication to the Kubernetes API is highly cluster-specific. Many clusters use x509 certificates, but as a notable exception, Amazon's Elastic Kubernetes Service uses an authenticating webhook that connects to their IAM solution for authentication, and so is not compatible with this guide.
2. What if I want to use RBAC Groups?
   User impersonation allows you to specify a group using the `Impersonate-Group` header. As such, if you want to use any kind of custom claims from the ID token, they can be mapped to the `Impersonate-Group` header. Note that you always have to use an `Impersonate-User` header, even if you're relying solely on the group for authorization.
3. I keep getting a 401 `Failure`, `Unauthorized` message, even for `https:///api`.
   This likely means that there is either something wrong with the certificate that was issued, or something wrong with your `TLSContext` or `Mapping` config.
   $AESproductName$ must present the correct certificate to the Kubernetes API, and the RBAC usernames and the CN of the certificate have to be consistent with one another.
4. Do I have to use `kubelogin`?
   Technically, no. Any method of obtaining an ID or access token from an identity provider will work. You can then pass the token using `--token ` when running `kubectl`. `kubelogin` simply automates the process of getting the ID token and attaching it to a `kubectl` request.

## Under the Hood

In this tutorial, we set up $AESproductName$ to [impersonate a user](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) to access the Kubernetes API. Requests get sent to $AESproductName$, which functions as an authenticating proxy. $AESproductName$ uses its integrated authentication mechanism to authenticate the external request's identity, and sets the user and group based on claims received by the `Filter`.

The general flow of the `kubectl` command is as follows: on making an unauthenticated `kubectl` command, `kubelogin` does a browser open/redirect in order to do OIDC token negotiation. `kubelogin` obtains an OIDC identity token (notice this is not an access token) and sends it to $AESproductName$ in an Authorization header. $AESproductName$ validates the identity token and parses claims from it to put into `Impersonate-XXX` headers. $AESproductName$ then scrubs the Authorization header, replaces it with the credentials we set up in step 1, and forwards the request with the new Authorization and Impersonate headers to the Kubernetes API to first authenticate, and then authorize based on Kubernetes RBAC.

diff --git a/docs/edge-stack/latest/howtos/controlling-404.md b/docs/edge-stack/latest/howtos/controlling-404.md
new file mode 100644
index 000000000..5d8357fc5
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/controlling-404.md

# Controlling the Edge Stack 404 Page

Established users will want to better control 404 behavior, both for usability and security. You can leverage the `Mapping` resource to implement this functionality in your cluster. $productName$ users can use a 'catch-all' mapping using the `/` prefix in a `Mapping` configuration. The simplest `Mapping`, shown below, returns only 404 text. To use a custom 404 landing page, simply insert your service and remove the rewrite value.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: "404-fallback"
spec:
  hostname: "*"
  prefix: "/"
  rewrite: "/404/" # This must not map to any existing prefix!
  service: localhost:8500 # This needs to exist, but _not_ respond on /404/
```
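For example, a sketch of the custom-landing-page variant described above (the service name `custom-404-service` is hypothetical; substitute the service that renders your page):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: "404-fallback"
spec:
  hostname: "*"
  prefix: "/"
  service: custom-404-service # hypothetical service serving your custom 404 page
```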
For more information on the `Mapping` resource, see
[Advanced `Mapping` Configuration](../../topics/using/mappings).

diff --git a/docs/edge-stack/latest/howtos/ext-filters.md b/docs/edge-stack/latest/howtos/ext-filters.md
new file mode 100644
index 000000000..f7e5edc47
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/ext-filters.md

import Alert from '@material-ui/lab/Alert';

# Basic authentication

  This guide applies to $AESproductName$; use of this guide with $OSSproductName$ is not recommended. $OSSproductName$ does authentication using the `AuthService` resource instead of the `Filter` resource described below.

$AESproductName$ can authenticate incoming requests before routing them to a backing service. In this tutorial, we'll configure $AESproductName$ to use an external third party authentication service. We also assume that you are running the quote application in your cluster as described in the [$AESproductName$ tutorial](../../tutorials/quickstart-demo/).

## 1. Deploy the authentication service

$AESproductName$ delegates the actual authentication logic to a third party authentication service. We've written a [simple authentication service](https://github.com/datawire/ambassador-auth-service) that:

- listens for requests on port 3000;
- expects all URLs to begin with `/extauth/`;
- performs HTTP Basic Auth for all URLs starting with `/backend/get-quote/` (other URLs are always permitted);
- accepts only user `username`, password `password`; and
- makes sure that the `x-qotm-session` header is present, generating a new one if needed.

$AESproductName$ routes _all_ requests through the authentication service: it relies on the auth service to distinguish between requests that need authentication and those that do not. If $AESproductName$ cannot contact the auth service, it will return a 503 for the request; as such, **it is very important to have the auth service running before configuring $AESproductName$ to use it.**

Here's the YAML we'll start with:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: example-auth
spec:
  type: ClusterIP
  selector:
    app: example-auth
  ports:
  - port: 3000
    name: http-example-auth
    targetPort: http-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-auth
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: example-auth
  template:
    metadata:
      labels:
        app: example-auth
    spec:
      containers:
      - name: example-auth
        image: docker.io/datawire/ambassador-auth-service:2.0.0
        imagePullPolicy: Always
        ports:
        - name: http-api
          containerPort: 3000
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
```

Note that the cluster does not yet contain any $AESproductName$ `Filter` definition. This is intentional: we want the service running before we tell $AESproductName$ about it.

The YAML above is published at getambassador.io, so if you like, you can just do

```
kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$ossVersion$/demo/demo-auth.yaml
```

to spin everything up. (Of course, you can also use a local file, if you prefer.)

Wait for the pod to be running before continuing. The output of `kubectl get pods` should look something like

```
$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
example-auth-6c5855b98d-24clp   1/1       Running   0          4m
```

Note that the `READY` field says `1/1`, which means the pod is up and running.

## 2. Configure $AESproductName$ authentication

Once the auth service is running, we need to tell $AESproductName$ about it. The easiest way to do that is to map the `example-auth` service with the following `Filter`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: authentication
spec:
  External:
    auth_service: "example-auth:3000"
    path_prefix: "/extauth"
    allowed_request_headers:
    - "x-qotm-session"
    allowed_authorization_headers:
    - "x-qotm-session"
```

This configuration tells $AESproductName$ about the `Filter`, notably that it needs the `/extauth` prefix, and that it's OK for it to pass back the `x-qotm-session` header. Note that `path_prefix` and the `allowed_*_headers` fields are optional.

Next you must apply the `Filter` to your desired hosts and paths using a `FilterPolicy`.
The following would enable your `Filter` on requests to all hosts and paths (just remember that our authentication service is only configured to perform authentication on requests to `/backend/get-quote/`; see the [auth service's repo](https://github.com/datawire/ambassador-auth-service) for more information).

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: authentication
spec:
  rules:
  - host: "*"
    path: /*
    filters:
    - name: authentication
```

You can also apply the `Filter` only to specific hosts and/or paths, allowing you to require authentication only on certain routes. The following `FilterPolicy` would run your `Filter` only on requests to the `/backend/get-quote/` path:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: authentication
spec:
  rules:
  - host: "*"
    path: /backend/get-quote/
    filters:
    - name: authentication
```

If the auth service uses a framework like [Gorilla Toolkit](http://www.gorillatoolkit.org) which enforces strict slashes as HTTP path separators, it is possible to end up with an infinite redirect in which the filter's framework redirects any request with non-conformant slashing. This would arise if the above example had `path_prefix: "/extauth/"`: the filter would see a request for `/extauth//backend/get-quote/`, which would then be redirected to `/extauth/backend/get-quote/` rather than actually be handled by the authentication handler. For this reason, remember that the full path of the incoming request, including the leading slash, will be appended to `path_prefix` regardless of non-conformant slashing.

## 3. Test authentication

If we `curl` to a protected URL:

```
$ curl -Lv $AMBASSADORURL/backend/get-quote/
```

We get a 401 since we haven't authenticated.

```
* TCP_NODELAY set
* Connected to 54.165.128.189 (54.165.128.189) port 32281 (#0)
> GET /backend/get-quote/ HTTP/1.1
> Host: 54.165.128.189:32281
> User-Agent: curl/7.63.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< www-authenticate: Basic realm="Ambassador Realm"
< content-length: 0
< date: Thu, 23 May 2019 15:24:55 GMT
< server: envoy
<
* Connection #0 to host 54.165.128.189 left intact
```

If we authenticate to the service, we will get a quote successfully:

```
$ curl -Lv -u username:password $AMBASSADORURL/backend/get-quote/

* TCP_NODELAY set
* Connected to 54.165.128.189 (54.165.128.189) port 32281 (#0)
* Server auth using Basic with user 'username'
> GET /backend/get-quote/ HTTP/1.1
> Host: 54.165.128.189:32281
> Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
> User-Agent: curl/7.63.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< date: Thu, 23 May 2019 15:25:06 GMT
< content-length: 172
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
    "server": "humble-blueberry-o2v493st",
    "quote": "Nihilism gambles with lives, happiness, and even destiny itself!",
    "time": "2019-05-23T15:25:06.544417902Z"
* Connection #0 to host 54.165.128.189 left intact
}
```

## What's next?

* Get started with authentication by [installing $AESproductName$](../../tutorials/getting-started/).

* For more details see the [`External` filter](../../topics/using/filters) documentation.
diff --git a/docs/edge-stack/latest/howtos/external-dns.md b/docs/edge-stack/latest/howtos/external-dns.md
new file mode 100644
index 000000000..fd75f1b47
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/external-dns.md

import Alert from '@material-ui/lab/Alert';

# ExternalDNS with $productName$

[ExternalDNS](https://github.com/kubernetes-sigs/external-dns) configures your existing DNS provider to make Kubernetes resources discoverable via public DNS servers: it reads resources from the Kubernetes API and uses them to create DNS records.

## Getting started

### Prerequisites

Before you begin, review the [ExternalDNS repo's deployment instructions](https://github.com/kubernetes-sigs/external-dns#deploying-to-a-cluster) to get information about supported DNS providers and the steps to set up ExternalDNS for your provider. Each DNS provider has its own required steps, as well as annotations, arguments, and permissions needed for the following configuration.

### Installation

Configuration for a `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` is necessary for the ExternalDNS deployment to support compatibility with $productName$ and allow ExternalDNS to get hostnames from $productName$'s `Hosts`.

The following configuration is an example configuring $productName$ - ExternalDNS integration with [AWS Route53](https://aws.amazon.com/route53/) as the DNS provider. Refer to the [ExternalDNS documentation](https://github.com/kubernetes-sigs/external-dns#deploying-to-a-cluster) for annotations and arguments for your DNS provider.

1. Create a YAML file named `externaldns-config.yaml`, and copy the following configuration into it:

   Ensure that the apiGroups include "getambassador.io" following "networking.k8s.io", and that the resources include "hosts" after "ingresses".
   ```yaml
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: external-dns
     annotations:
       eks.amazonaws.com/role-arn: {ARN} # AWS ARN role
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: external-dns
   rules:
   - apiGroups: [""]
     resources: ["services","endpoints","pods"]
     verbs: ["get","watch","list"]
   - apiGroups: ["extensions","networking.k8s.io", "getambassador.io"]
     resources: ["ingresses", "hosts"]
     verbs: ["get","watch","list"]
   - apiGroups: [""]
     resources: ["nodes"]
     verbs: ["list","watch"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: external-dns-viewer
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: external-dns
   subjects:
   - kind: ServiceAccount
     name: external-dns
     namespace: default
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: external-dns
   spec:
     strategy:
       type: Recreate
     selector:
       matchLabels:
         app: external-dns
     template:
       metadata:
         labels:
           app: external-dns
         annotations:
           iam.amazonaws.com/role: {ARN} # AWS ARN role
       spec:
         serviceAccountName: external-dns
         containers:
         - name: external-dns
           image: registry.opensource.zalan.do/teapot/external-dns:latest
           args:
           - --source=ambassador-host
           - --domain-filter=example.net # will make ExternalDNS see only the hosted zones matching the provided domain; omit to process all available hosted zones
           - --provider=aws
           - --policy=upsert-only # prevents ExternalDNS from deleting any records; omit to enable full synchronization
           - --aws-zone-type=public # only look at public hosted zones (valid values are public, private, or no value for both)
           - --registry=txt
           - --txt-owner-id={Hosted Zone ID} # Insert Route53 Hosted Zone ID here
   ```

2. Review the arguments section from the ExternalDNS deployment.

   Configure or remove arguments to fit your needs. Additional arguments required for your DNS provider can be found by checking the [ExternalDNS repo's deployment instructions](https://github.com/kubernetes-sigs/external-dns#deploying-to-a-cluster).

   * `--source=ambassador-host` - required across all DNS providers to tell ExternalDNS to look for hostnames in the $productName$ `Host` configurations.

3. Apply the above config with the following command to deploy ExternalDNS to your cluster and configure support for $productName$:

   ```shell
   kubectl apply -f externaldns-config.yaml
   ```

## Usage

After you've applied the above configuration, ExternalDNS is ready to use. Configure a `Host` with the following annotation to allow ExternalDNS to get the IP address of your $productName$'s LoadBalancer and register it with your DNS provider:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: your-hostname
  annotations:
    external-dns.ambassador-service: edge-stack.ambassador
spec:
  acmeProvider:
    authority: none
  hostname: your-hostname.example.com
```

Victory! ExternalDNS is now running and configured to register $productName$'s IP and hostname with your DNS provider.
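Once ExternalDNS has synced, you can sanity-check the record (a sketch with a hypothetical hostname and answer; your values will differ):

```console
$ dig +short your-hostname.example.com
203.0.113.20
```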
diff --git a/docs/edge-stack/latest/howtos/index.md b/docs/edge-stack/latest/howtos/index.md
new file mode 100644
index 000000000..7270d09e9
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/index.md

# "How-to" guides

These guides are designed to help users quickly accomplish common tasks. The guides assume a certain level of understanding of $productName$. Many of these guides are contributed by third parties; we welcome contributions via Pull Request at https://github.com/emissary-ingress/emissary.

* Integrating with Service Mesh. $productName$ natively integrates with many service meshes.
  * [HashiCorp Consul](consul)
  * [Istio](istio)
  * [Linkerd](linkerd2)
* Distributed tracing. $productName$ natively supports a number of distributed tracing systems to enable developers to visualize request flow in microservice and service-oriented architectures.
  * [Datadog](tracing-datadog)
  * [Zipkin](tracing-zipkin)
* Identity providers. $AESproductName$ integrates with a number of OAuth Identity Providers via OpenID Connect.
  * [Auth0](sso/auth0)
  * [Azure Active Directory](sso/azure)
  * [Google Identity](sso/google)
  * [Keycloak](sso/keycloak)
  * [Okta](sso/okta)
  * [OneLogin](sso/onelogin)
  * [Salesforce](sso/salesforce)
  * [UAA](sso/uaa)
* Monitoring. $productName$ integrates with a number of different monitoring/metrics providers.
  * [Prometheus](prometheus)
* [Developing Custom Filters](filter-dev-guide)
* Frameworks and Protocols. $productName$ supports a wide range of protocols and cloud-native frameworks.
  * [gRPC](grpc)
  * [Knative Serverless Framework](knative)
  * [WebSockets](websockets)
* Security. $productName$ supports a number of strategies for securing Kubernetes services.
  * [Controlling the $productName$ 404 Page](controlling-404)
  * [Protecting the Diagnostics Interface](protecting-diag-access)
  * [HTTPS and TLS termination](tls-termination)
  * [Certificate Manager](cert-manager) can be used to automatically obtain and renew TLS certificates; $AESproductName$ natively integrates this functionality.
  * [Client Certificate Validation](client-cert-validation)
  * [Basic Authentication](basic-auth) is a tutorial on how to use the external authentication API to code your own authentication service.
  * [Rate Limiting in $productName$](advanced-rate-limiting)
  * [Single Sign-On with OAuth and OpenID Connect](oauth-oidc-auth)

diff --git a/docs/edge-stack/latest/howtos/istio.md b/docs/edge-stack/latest/howtos/istio.md
new file mode 100644
index 000000000..4c54bd1a4
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/istio.md

import Alert from '@material-ui/lab/Alert';

# Istio integration

$productName$ and Istio: Edge Proxy and Service Mesh together in one. $productName$ is deployed at the edge of your network and routes incoming traffic to your internal services (aka "north-south" traffic). [Istio](https://istio.io/) is a service mesh for microservices, designed to add application-level (L7) observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and $productName$ are built using [Envoy](https://www.envoyproxy.io).

$productName$ and Istio can be deployed together on Kubernetes. In this configuration, $productName$ manages
traditional edge functions such as authentication, TLS termination, and edge routing. Istio mediates communication
from $productName$ to services, and communication between services.

This allows the operator to have the best of both worlds: a high performance, modern edge service ($productName$) combined with a state-of-the-art service mesh (Istio). While Istio has introduced a [Gateway](https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/) abstraction, $productName$ still has a much broader feature set for edge routing than Istio.
For more on this topic, see our blog post on [API Gateway vs Service Mesh](https://blog.getambassador.io/api-gateway-vs-service-mesh-104c01fa4784).

This guide explains how to take advantage of both $productName$ and Istio to have complete control and observability over how requests are made in your cluster:

- [Install Istio](#install-istio) and configure auto-injection
- [Install $productName$ with Istio integration](#install-edge)
- [Configure an mTLS `TLSContext`](#configure-an-mtls-tlscontext)
- [Route to services using mTLS](#route-to-services-using-mtls)

If desired, you may also

- [Enable strict mTLS](#enable-strict-mtls)
- [Configure Prometheus metrics collection](#configure-prometheus-metrics-collection)
- [Configure Istio distributed tracing](#configure-istio-distributed-tracing)

To follow this guide, you need:

- A Kubernetes cluster running version 1.15 or later
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- Istio version 1.10 or higher

## Install Istio

Start by [installing Istio](https://istio.io/docs/setup/getting-started/). Any supported installation method for
Istio will work for use with $productName$.

### Configure Istio Auto-Injection

Istio functions by supplying a sidecar container running Envoy with every service in the mesh (including
$productName$). The sidecar is what enforces Istio policies for traffic to and from the service, notably
including mTLS encryption and certificate handling. As such, it is very important that the sidecar be
correctly supplied for every service in the mesh!

While it is possible to manage sidecars by hand, it is far easier to allow Istio to automatically inject
the sidecar as necessary. To do this, set the `istio-injection` label on each Kubernetes Namespace for
which you want auto-injection:

```bash
kubectl label namespace $namespace istio-injection=enabled --overwrite
```

  The following example uses the `istio-injection` label to arrange for auto-injection in the
  `$productNamespace$` Namespace below. You can manage sidecar injection by hand if you wish; what
  is critical is that every service that participates in the Istio mesh have the Istio
  sidecar.

## Install $productName$ with Istio Integration

Properly integrating $productName$ with Istio provides support for:

* [Mutual TLS (mTLS)](../../topics/running/tls/mtls), with certificates managed by Istio, to allow end-to-end encryption
for east-west traffic;
* Automatic generation of Prometheus metrics for services; and
* Istio distributed tracing for end-to-end observability.

The simplest way to enable everything is to install $productName$ using [Helm](https://helm.sh), though
you can use manual installation with YAML if you wish.

### Installation with Helm (Recommended)

To install with Helm, write the following YAML to a file called `istio-integration.yaml`:

```yaml
# All of the values we need to customize live under the emissary-ingress toplevel key.
emissary-ingress:
  # Listeners are required in $productName$ 2.0.
  # This will create the two default Listeners for HTTP on port 8080 and HTTPS on port 8443.
  createDefaultListeners: true

  # These are annotations that will be added to the $productName$ pods.
  podAnnotations:
    # These first two annotations tell Istio not to try to do port management for the
    # $productName$ pod itself. Though these annotations are placed on the $productName$
    # pods, they are interpreted by Istio.
    traffic.sidecar.istio.io/includeInboundPorts: ""      # do not intercept any inbound ports
    traffic.sidecar.istio.io/includeOutboundIPRanges: ""  # do not intercept any outbound traffic

    # We use proxy.istio.io/config to tell the Istio proxy to write newly-generated mTLS certificates
    # into /etc/istio-certs, which will be mounted below. Though this annotation is placed on the
    # $productName$ pods, it is interpreted by Istio.
    proxy.istio.io/config: |
      proxyMetadata:
        OUTPUT_CERTS: /etc/istio-certs

    # We use sidecar.istio.io/userVolumeMount to tell the Istio sidecars to mount the istio-certs
    # volume at /etc/istio-certs, allowing the sidecars to see the generated certificates. Though
    # this annotation is placed on the $productName$ pods, it is interpreted by Istio.
    sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-certs"}]'

  # We define a single storage volume called "istio-certs". It starts out empty, and Istio
  # uses it to communicate mTLS certs between the Istio proxy and the Istio sidecars (see the
  # annotations above).
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-certs

  # We also tell $productName$ to mount the "istio-certs" volume at /etc/istio-certs in the
  # $productName$ pod. This gives $productName$ access to the mTLS certificates, too.
  volumeMounts:
  - name: istio-certs
    mountPath: /etc/istio-certs/
    readOnly: true

  # Finally, we need to set some environment variables for $productName$.
  env:
    # AMBASSADOR_ISTIO_SECRET_DIR tells $productName$ to look for Istio mTLS certs, and to
    # make them available as a secret named "istio-certs".
    AMBASSADOR_ISTIO_SECRET_DIR: "/etc/istio-certs"

    # AMBASSADOR_ENVOY_BASE_ID is set to prevent collisions with the Istio sidecar's Envoy,
    # which runs with base-id 0.
    AMBASSADOR_ENVOY_BASE_ID: "1"
```

To install $productName$ with Helm, use these values to configure Istio integration:

1. Install $productName$ if you are not already running it by [following the quickstart](../../tutorials/getting-started).

2. Enable Istio auto-injection for $productName$'s namespace:

   ```bash
   kubectl label namespace $productNamespace$ istio-injection=enabled --overwrite
   ```

3. Use Helm to configure $productName$'s Istio integration and install $productName$ in $productNamespace$:

   ```bash
   helm upgrade $productHelmName$ --namespace $productNamespace$ -f istio-integration.yaml datawire/$productHelmName$ && \
   kubectl -n $productNamespace$ wait --for condition=available --timeout=90s deploy -lapp.kubernetes.io/instance=$productDeploymentName$
   ```

### Installation Using YAML

If you are not using Helm to manage your $productName$ installation, you need to manually incorporate the contents of the `istio-integration.yaml` file shown above into your deployment YAML:

- `podAnnotations` should be configured as Kubernetes `annotations` on the $productName$ Pods;
- `volumes`, `volumeMounts`, and `env` contents should be included in the $productDeploymentName$ Deployment; and
- you must also label the $productNamespace$ Namespace for auto-injection as described above.

### Configuring an Existing Installation

If you have already installed $productName$ and want to enable Istio:

1. Install Istio.
2. Label the $productNamespace$ namespace for Istio auto-injection, as above.
3. Edit the $productName$ Deployments to contain the `annotations`, `volumes`, `volumeMounts`, and `env` elements
   shown above.
   - If you installed with Helm, you can use `helm upgrade` with `-f istio-integration.yaml` to modify the
     installation for you.
4. Restart the $productName$ pods.

## Configure an mTLS `TLSContext`

After configuring $productName$ for Istio integration, the Istio mTLS certificates are available within
$productName$:

- Both the `istio-proxy` sidecar and $productName$ mount the `istio-certs` volume at `/etc/istio-certs`.
- The `istio-proxy` sidecar saves the mTLS certificates into `/etc/istio-certs` (per the `OUTPUT_CERTS`
  environment variable).
- $productName$ reads the mTLS certificates from `/etc/istio-certs` (per the `AMBASSADOR_ISTIO_SECRET_DIR`
  environment variable) and creates a Secret named `istio-certs`.

  At present, the Secret name istio-certs cannot be changed.

To make use of the `istio-certs` Secret, create a `TLSContext` referencing it:

   ```yaml
   kubectl apply -f - <<EOF
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: TLSContext
   metadata:
     name: istio-upstream
   spec:
     alpn_protocols: istio
     secret: istio-certs
     secret_namespacing: False
   EOF
   ```

## Route to Services Using mTLS

To route to a service in the mesh over mTLS, reference the `TLSContext` from the `Mapping` with its `tls` element, and be explicit about port 80 in the `service`:

   ```yaml
   kubectl apply -f - <<EOF
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Mapping
   metadata:
     name: quote-backend
   spec:
     hostname: "*"
     prefix: /backend/
     service: quote:80      # Be explicit about port 80.
     tls: istio-upstream    # Use the Istio mTLS certificates.
   EOF
   ```

  You must either explicitly specify port 80 in your Mapping's service
  element, or set up the Kubernetes Service resource for your upstream service to map port
  443. If you don't do one of these, connections to your upstream will hang — see the
  "Configure Service Ports" section below for more information.

The behavior of your service will not seem to change, even though mTLS is active:

   ```console
   $ curl -k https://{{AMBASSADOR_HOST}}/backend/

   {
    "server": "bewitched-acai-5jq7q81r",
    "quote": "A late night does not make any sense.",
    "time": "2020-06-02T10:48:45.211178139Z"
   }
   ```

This request first went to $productName$, which routed it over an mTLS connection to the quote service in the
default namespace. That connection was intercepted by the `istio-proxy`, which authenticated the request as
being from $productName$, exported various metrics, and finally forwarded it on to the actual quote service.

### Configure Service Ports

When mTLS is active, Istio makes TLS connections to your services. Since Istio handles the TLS protocol for
you, you don't need to modify your services — however, the TLS connection will still use port 443
if you don't configure your `Mapping`s to _explicitly_ use port 80.

If your upstream service was not written to use TLS, its `Service` resource may only map port 80. If Istio
attempts a TLS connection on port 443 when port 443 is not defined by the `Service` resource, the connection
will hang _even though the Istio sidecar is active_, because Kubernetes itself doesn't know how to handle
the connection to port 443.

As shown above, one simple way to deal with this situation is to explicitly specify port 80 in the `Mapping`'s
`service`:

   ```yaml
   service: quote:80 # Be explicit about port 80.
   ```

Another way is to set up your Kubernetes `Service` to map both port 80 and port 443. For example, the
Quote deployment (which listens on port 8080 in its pod) might use a `Service` like this:

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: quote
   spec:
     type: ClusterIP
     selector:
       app: quote
     ports:
     - name: http
       port: 80
       protocol: TCP
       targetPort: 8080
     - name: https
       port: 443
       protocol: TCP
       targetPort: 8080
   ```

Note that ports 80 and 443 are both mapped to `targetPort` 8080, where the service is actually listening.
This permits Istio routing to work whether mTLS is active or not.

## Enable Strict mTLS

Istio defaults to _permissive_ mTLS, where mTLS is allowed between services, but not required.
Configuring
[_strict_ mTLS](https://istio.io/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode) requires that all connections within the cluster be encrypted. To switch Istio to use strict mTLS,
apply a `PeerAuthentication` resource in each namespace that should operate in strict mode:

   ```yaml
   $ kubectl apply -f - <<EOF
   apiVersion: security.istio.io/v1beta1
   kind: PeerAuthentication
   metadata:
     name: default
     namespace: $namespace
   spec:
     mtls:
       mode: STRICT
   EOF
   ```

```yaml
    secret:
    protectedOrigins:
    - origin: http://domain1.example.com
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: domain2-tenant
spec:
  OAuth2:
    authorizationURL: https://example.auth0.com
    extraAuthorizationParameters:
      audience: https://example.auth0.com/api/v2/
    clientId:
    secret:
    protectedOrigins:
    - origin: http://domain2.example.com
```

Create a separate `FilterPolicy` that specifies which specific filters are applied to particular hosts or URLs.

## Further reading

The [filter reference](../../topics/using/filters/) covers the specifics of filters and filter policies in much more detail.

diff --git a/docs/edge-stack/latest/howtos/sso/auth0.md b/docs/edge-stack/latest/howtos/sso/auth0.md
new file mode 100644
index 000000000..2d5903f9e
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/auth0.md

# Single Sign-On with Auth0

With Auth0 as your IdP, you will need to create an `Application` to handle authentication requests from $AESproductName$.

1. Navigate to Applications and Select "CREATE APPLICATION"

   ![](../../images/create-application.png)

2. In the pop-up window, give the application a name and create a "Machine to Machine App"

   ![](../../images/machine-machine.png)

3. Select the Auth0 Management API. Grant any scope values you may
   require. (You may grant none.) The API is required so that an
   `audience` can be specified, which will result in a JWT being
   returned rather than an opaque token. A custom API can also be used.

   ![](../../images/scopes.png)

4. In your newly created application, click on the Settings tab, add the Domain and Callback URLs for your service, and ensure the "Token Endpoint Authentication Method" is set to `Post`. The default YAML installation of $AESproductName$ uses `/.ambassador/oauth2/redirection-endpoint` for the URL, so the values should be the domain name that points to $AESproductName$, e.g., `example.com/.ambassador/oauth2/redirection-endpoint` and `example.com`.

   ![](../../images/Auth0_none.png)

   Click Advanced Settings > Grant Types and check "Authorization Code"

## Configure Filter and FilterPolicy

Update the Auth0 `Filter` and `FilterPolicy`. You can get the `ClientID` and `secret` from your application settings:

   ![](../../images/Auth0_secret.png)

   The `audience` is the API Audience of your Auth0 Management API:

   ![](../../images/Auth0_audience.png)

   The `authorizationURL` is your Auth0 tenant URL.
   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: auth0-filter
     namespace: default
   spec:
     OAuth2:
       authorizationURL: https://datawire-ambassador.auth0.com
       extraAuthorizationParameters:
         audience: https://datawire-ambassador.auth0.com/api/v2/
       clientID: fCRAI7svzesD6p8Pv22wezyYXNg80Ho8
       secret: CLIENT_SECRET
       protectedOrigins:
       - origin: https://datawire-ambassador.com
   ```

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: httpbin-policy
     namespace: default
   spec:
     rules:
     - host: "*"
       path: /httpbin/ip
       filters:
       - name: auth0-filter ## Enter the Filter name from above
         arguments:
           scope:
           - "openid"
   ```

   **Note:** By default, Auth0 requires the `openid` scope.

diff --git a/docs/edge-stack/latest/howtos/sso/azure.md b/docs/edge-stack/latest/howtos/sso/azure.md
new file mode 100644
index 000000000..5e0fc071b
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/azure.md

# Single Sign-On with Azure Active Directory (AD)

## Set up Azure AD

To use Azure as your IdP, you will first need to register an OAuth application with your Azure tenant.

1. Follow the steps in the Azure documentation [here](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal) to register your application. Make sure to select "web application" (not native application) when creating your OAuth application.

2. After you have registered your application, click on `App Registrations` in the navigation panel on the left and select the application you just created.

3. Make a note of both the client and tenant IDs, as these will be used later when configuring $AESproductName$.

4. Click on `Authentication` in the left sidebar.

   - Under the `Platform configurations` section, click on `+ Add a platform`, then select `Web` and add this URL `https://{{AMBASSADOR_URL}}/.ambassador/oauth2/redirection-endpoint` into the `Redirect URIs` input field

     **Note:** Azure AD requires the redirect endpoint to handle TLS
   - Make sure your application is issuing `access tokens` by clicking on the `Access tokens (used for implicit flows)` checkbox under the `Implicit grant and hybrid flows` section
   - Finally, click on `Configure` to save your changes

5. Click on `Certificates & secrets` in the left sidebar. Click `+ New client secret` and set the expiration date you wish. Copy the value of this secret somewhere. You will need it when configuring $AESproductName$.

## Set Up $AESproductName$

After configuring an OAuth application in Azure AD, configuring $AESproductName$ to make use of it for authentication is simple.

1. Create an [OAuth Filter](../../../topics/using/filters/oauth2) with the credentials from above:

   ```yaml
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: azure-ad
   spec:
     OAuth2:
       # Azure AD openid-configuration endpoint can be found at https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
       authorizationURL: https://login.microsoftonline.com/{{TENANT_ID}}/v2.0
       # Client ID from step 3 above
       clientID: CLIENT_ID
       # Secret created in step 5 above
       secret: CLIENT_SECRET
       # The protectedOrigin is the scheme and Host of your $AESproductName$ endpoint
       protectedOrigins:
       - origin: https://{{AMBASSADOR_URL}}
   ```
2. Create a [FilterPolicy](../../../topics/using/filters/) to use the `Filter` created above

   ```yaml
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: azure-policy
   spec:
     rules:
     # Requires authentication on requests from any hostname
     - host: "*"
       # Tells $AESproductName$ to apply the Filter only on request to the quote /backend/get-quote/ endpoint
       path: /backend/get-quote/
       # Identifies which Filter to use for the path and host above
       filters:
       - name: azure-ad
   ```

3. Apply both the `Filter` and `FilterPolicy` above with `kubectl`

   ```
   kubectl apply -f azure-ad-filter.yaml
   kubectl apply -f azure-policy.yaml
   ```

Now any requests to `https://{{AMBASSADOR_URL}}/backend/get-quote/` will require authentication from Azure AD.

diff --git a/docs/edge-stack/latest/howtos/sso/google.md b/docs/edge-stack/latest/howtos/sso/google.md
new file mode 100644
index 000000000..d16f91517
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/google.md

# Single Sign-On with Google

## Create an OAuth client in the Google API Console

To use Google as an IdP for Single Sign-On, you will first need to create an OAuth web application in the Google API Console.

1. Open the [Credentials page](https://console.developers.google.com/apis/credentials) in the API Console
2. Click `Create credentials > OAuth client ID`.
3. Select `Web application` and give it a name
4. Under **Restrictions**, fill in the **Authorized redirect URIs** with

   ```
   http(s)://{{AMBASSADOR_URL}}/.ambassador/oauth2/redirection-endpoint
   ```
5. Click `Create`
6. Record the `client ID` and `client secret` in the pop-up window. You will need these when configuring $AESproductName$

## Set up $AESproductName$

After creating an OAuth client in Google, configuring $AESproductName$ to make use of it for authentication is simple.

1. Create an [OAuth Filter](../../../topics/using/filters/oauth2) with the credentials from above:

   ```yaml
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: google
   spec:
     OAuth2:
       # Google openid-configuration endpoint can be found at https://accounts.google.com/.well-known/openid-configuration
       authorizationURL: https://accounts.google.com
       # Client ID from step 6 above
       clientID: CLIENT_ID
       # Secret created in step 6 above
       secret: CLIENT_SECRET
       # The protectedOrigin is the scheme and Host of your $AESproductName$ endpoint
       protectedOrigins:
       - origin: http(s)://{{AMBASSADOR_URL}}
   ```
2. Create a [FilterPolicy](../../../topics/using/filters/) to use the `Filter` created above

   ```yaml
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: google-policy
   spec:
     rules:
     # Requires authentication on requests from any hostname
     - host: "*"
       # Tells $AESproductName$ to apply the Filter only on request to the quote /backend/get-quote/ endpoint
       path: /backend/get-quote/
       # Identifies which Filter to use for the path and host above
       filters:
       - name: google
   ```
3. Apply both the `Filter` and `FilterPolicy` above with `kubectl`

   ```
   kubectl apply -f google-filter.yaml
   kubectl apply -f google-policy.yaml
   ```

Now any requests to `https://{{AMBASSADOR_URL}}/backend/get-quote/` will require authentication from Google.
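As a quick check (a sketch; the exact redirect target and query parameters will vary), an unauthenticated request to the protected path should now be redirected to Google's login page:

```console
$ curl -v https://{{AMBASSADOR_URL}}/backend/get-quote/
...
< HTTP/1.1 302 Found
< location: https://accounts.google.com/o/oauth2/...
```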
diff --git a/docs/edge-stack/latest/howtos/sso/keycloak.md b/docs/edge-stack/latest/howtos/sso/keycloak.md
new file mode 100644
index 000000000..4e55cc8bc
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/keycloak.md

# Single Sign-On with Keycloak

With Keycloak as your IdP, you will need to create a `Client` to handle authentication requests from $AESproductName$. The instructions below are known to work for Keycloak 4.8.

1. Under "Realm Settings", record the "Name" of the realm your client is in. This will be needed to configure your `authorizationURL`.
2. Create a new client: navigate to Clients and select `Create`. Use the following settings:
   - Client ID: Any value (e.g. `ambassador`); this value will be used in the `clientID` field of the Keycloak filter
   - Client Protocol: "openid-connect"
   - Root URL: Leave Blank

3. Click Save.

4. On the next screen configure the following options:
   - Access Type: "confidential"
   - Valid Redirect URIs: `*`

5. Click Save.
6. Navigate to the `Mappers` tab in your Client and click `Create`.
7. Configure the following options:
   - Protocol: "openid-connect".
   - Name: Any string. This is just a name for the Mapper
   - Mapper Type: select "Audience"
   - Included Client Audience: select from the dropdown the name of your Client. This will be used as the `audience` in the Keycloak `Filter`.

8. Click Save.

9. Configure client scope as desired in "Client Scopes"
   (e.g. `offline_access`). It's possible to set up Keycloak to not
   use scope by removing all of them from "Assigned Default Client
   Scopes".

   **Note:** All "Assigned Default Client Scopes" must be included in
   the `FilterPolicy` `scope` argument.

## Configure Filter and FilterPolicy

Update the Keycloak `Filter` and `FilterPolicy` with the following:

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: keycloak-filter
     namespace: default
   spec:
     OAuth2:
       authorizationURL: https://{KEYCLOAK_URL}/auth/realms/{KEYCLOAK_REALM}
       audience: ambassador
       clientID: ambassador
       secret: {CLIENT_SECRET}
       protectedOrigins:
       - origin: https://{PROTECTED_URL}
   ```

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: httpbin-policy
     namespace: default
   spec:
     rules:
     - host: "*"
       path: /httpbin/ip
       filters:
       - name: keycloak-filter ## Enter the Filter name from above
         arguments:
           scope:
           - "offline_access"
   ```

diff --git a/docs/edge-stack/latest/howtos/sso/okta.md b/docs/edge-stack/latest/howtos/sso/okta.md
new file mode 100644
index 000000000..f0735012a
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/okta.md

# Single Sign-On with Okta

1. Create an OIDC application

   **Note:** If you have a [standard Okta account](https://www.okta.com) you must first navigate to your Okta Org's admin portal (step 1). [Developer accounts](https://developer.okta.com) can skip to Step 2.

   - Go to your org and click `Admin` in the top right corner to access the admin portal
   - Select `Applications`
   - Select `Add Application`
   - Choose `Web` and `OpenID Connect`. Then click `Create`.
   - Give it a name, enter the URL of your $AESproductName$ load balancer in `Base URIs` and the callback URL `{AMBASSADOR_URL}/.ambassador/oauth2/redirection-endpoint` as the `Login redirect URIs`

2. Copy the `Client ID` and `Client Secret` and use them to fill in the `ClientID` and `Secret` of your Okta OAuth `Filter`.
3. Get the `audience` configuration

   - Select `API` and `Authorization Servers`
   - You can use the default `Authorization Server` or create your own.
   - If you are using the default, the `audience` of your Okta OAuth `Filter` is `api://default`
   - The value of the `authorizationURL` is the `Issuer URI` of the `Authorization Server`

## Configure Filter and FilterPolicy

Configure your OAuth `Filter` and `FilterPolicy` with the following:

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Filter
   metadata:
     name: okta-filter
     namespace: default
   spec:
     OAuth2:
       authorizationURL: https://{OKTA_DOMAIN}.okta.com/oauth2/default
       audience: api://default
       clientID: CLIENT_ID
       secret: CLIENT_SECRET
       protectedOrigins:
       - origin: https://datawire-ambassador.com
   ```

   ```yaml
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: FilterPolicy
   metadata:
     name: httpbin-policy
     namespace: default
   spec:
     rules:
     - host: "*"
       path: /httpbin/ip
       filters:
       - name: okta-filter ## Enter the Filter name from above
         arguments:
           scope:
           - "openid"
           - "profile"
   ```

**Note:** Scope values `openid` and `profile` are required at a
minimum. Other scope values can be added to the `Authorization Server`.

diff --git a/docs/edge-stack/latest/howtos/sso/onelogin.md b/docs/edge-stack/latest/howtos/sso/onelogin.md
new file mode 100644
index 000000000..59d318803
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/onelogin.md

# Single Sign-On with OneLogin

OneLogin is an application that manages authentication for your users on your network, and can provide backend access to $AESproductName$.

To use OneLogin with $AESproductName$:

1. Create an App Connector
2. Gather OneLogin Credentials
3. Configure $AESproductName$

## Create an App Connector

To use OneLogin as your IdP, you will first need to create an OIDC custom connector and create an application from that connector.

**To do so**:

1. In your OneLogin portal, select **Administration** from the top right.
2. From the top left menu, select **Applications > Custom Connectors** and click the **New Connector** button.
3. Give your connector a name.
4. Select the `OpenID Connect` option as your "Sign on method."
5. Use `http(s)://{{AMBASSADOR_URL}}/.ambassador/oauth2/redirection-endpoint` as the value for "Redirect URI."
6. Optionally provide a login URL.
7. Click the **Save** button to create the connector. You will see a confirmation message.
8. In the "More Actions" tab, select **Add App to Connector**.
9. Select the connector you just created.
10. Click the **Save** button.

You will see a success banner, which also brings you back to the main portal page. OneLogin is now configured to function as an OIDC backend for authentication with $AESproductName$.

## Gather OneLogin Credentials

Next, to configure $AESproductName$ to require authentication with OneLogin, you must collect the client credentials from the application you just created.

**To do so:**

1. In your OneLogin portal, go to **Administration > Applications > Applications.**
2. Select the application you previously created.
3. On the left, select the **SSO** tab to see the client information.
4. Copy the value of Client ID for later use.
5. Click the **Show Client Secret** link and copy the value for later use.

## Configure $AESproductName$

Now you must configure your $AESproductName$ instance to use OneLogin.
1. First, create an [OAuth Filter](../../../topics/using/filters/oauth2) with the credentials you copied earlier.

Here is an example YAML:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: onelogin
spec:
  OAuth2:
    # onelogin openid-configuration endpoint can be found at https://{{subdomain}}.onelogin.com/oidc/.well-known/openid-configuration
    authorizationURL: https://{{subdomain}}.onelogin.com/oidc
    clientID: {{Client ID}}
    secret: {{Client Secret}}
    # The protectedOrigin is the scheme and Host of your $AESproductName$ endpoint
    protectedOrigins:
    - origin: http(s)://{{AMBASSADOR_URL}}
```

2. Next, create a [FilterPolicy](../../../topics/using/filters/) to use the `Filter` you just created.

Some example YAML:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: oauth-policy
spec:
  rules:
  # Requires authentication on requests from any hostname
  - host: "*"
    # Tells $AESproductName$ to apply the Filter only on request to the /backend/get-quote/ endpoint from the quote application
    path: /backend/get-quote/
    # Identifies which Filter to use for the path and host above
    filters:
    - name: onelogin
```

3. Lastly, apply both the `Filter` and `FilterPolicy` you created with a `kubectl` command in your terminal:

```
kubectl apply -f onelogin-filter.yaml
kubectl apply -f oauth-policy.yaml
```

Now any requests to `https://{{AMBASSADOR_URL}}/backend/get-quote/` will require authentication from OneLogin.

diff --git a/docs/edge-stack/latest/howtos/sso/salesforce.md b/docs/edge-stack/latest/howtos/sso/salesforce.md
new file mode 100644
index 000000000..1410a4a42
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/salesforce.md

# Single Sign-On with Salesforce

## Set up Salesforce

To use Salesforce as your IdP, you will first need to register an OAuth application with your Salesforce tenant. This guide will walk you through the most basic setup via the "Salesforce Classic Experience".

1. In the `Setup` page, under `Build` click the dropdown next to `Create` and select `Apps`.
2. Under `Connected Apps` at the bottom of the page, click on `New` at the top.
3. Fill in the following fields with whichever values you want:

   - Connected App Name
   - API Name
   - Contact Email

4. Under `API (Enable OAuth Settings)` check the box next to `Enable OAuth Settings`.
5. Fill in the `Callback URL` section with `https://{{AMBASSADOR_HOST}}/.ambassador/oauth2/redirection-endpoint`.
6. Under `Selected OAuth Scopes` you must select the `openid` scope
   value at the minimum. Select any other scope values you want to
   include in the response as well.
7. Click `Save` and `Continue` to create the application.
8. Record the `Consumer Key` and `Consumer Secret` values from the `API (Enable OAuth Settings)` section in the newly created application's description page.

After waiting for Salesforce to register the application with their servers, you should be ready to configure $AESproductName$ to use Salesforce as an IdP.

## Set up $AESproductName$

After configuring an OAuth application in Salesforce, configuring $AESproductName$ to make use of it for authentication is simple.
Create an [OAuth Filter](../../../topics/using/filters/oauth2) with the credentials from above:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Filter
+   metadata:
+     name: salesforce
+   spec:
+     OAuth2:
+       # Salesforce's generic OpenID configuration endpoint at https://login.salesforce.com/ will work, but you can also use your custom Salesforce domain, e.g. https://datawire.my.salesforce.com
+       authorizationURL: https://login.salesforce.com/
+       # Consumer Key from above
+       clientID: {{Consumer Key}}
+       # Consumer Secret from above
+       secret: {{Consumer Secret}}
+       # The protectedOrigin is the scheme and Host of your $AESproductName$ endpoint
+       protectedOrigins:
+       - origin: https://{{AMBASSADOR_HOST}}
+   ```
+
+2. Create a [FilterPolicy](../../../topics/using/filters/) to use the `Filter` created above
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: FilterPolicy
+   metadata:
+     name: oauth-policy
+   spec:
+     rules:
+       # Requires authentication on requests from any hostname
+       - host: "*"
+         # Tells $AESproductName$ to apply the Filter only on requests to the quote application's /backend/get-quote/ endpoint
+         path: /backend/get-quote/
+         # Identifies which Filter to use for the path and host above
+         filters:
+           - name: salesforce
+           # Any additional scope values granted in step 6 above can be requested with the arguments field
+           # arguments:
+           #   scope:
+           #     - refresh_token
+   ```
+
+3. Apply both the `Filter` and `FilterPolicy` above with `kubectl`
+
+   ```
+   kubectl apply -f salesforce-filter.yaml
+   kubectl apply -f oauth-policy.yaml
+   ```
+
+Now any requests to `https://{{AMBASSADOR_HOST}}/backend/get-quote/` will require authentication from Salesforce.
diff --git a/docs/edge-stack/latest/howtos/sso/uaa.md b/docs/edge-stack/latest/howtos/sso/uaa.md
new file mode 100644
index 000000000..4e0ebc9ba
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/sso/uaa.md
@@ -0,0 +1,72 @@
+# SSO with User Account and Authentication Service (UAA)
+
+**IMPORTANT:** $AESproductName$ requires the IdP to return a JWT signed with the RS256 algorithm (asymmetric key). Cloud Foundry's UAA defaults to symmetric-key signing, which $AESproductName$ cannot read.
+
+1. When configuring UAA, you will need to provide your own asymmetric key in a file called `uaa.yml`. For example:
+
+   ```yaml
+   jwt:
+     token:
+       signing-key: |
+         -----BEGIN RSA PRIVATE KEY-----
+         MIIEpAIBAAKCAQEA7Z1HBM6QFqnIJ1UA3NWnYMuubt4XlfbP1/GopTWUmchKataM
+         ...
+         ...
+         QSbJdIbUBwL8BcrfNw4ebp1DgTI9F45Re+evky0A82aL0/BvBHu8og==
+         -----END RSA PRIVATE KEY-----
+   ```
+
+2. Create an OIDC Client:
+
+   ```
+   uaac client add ambassador --name ambassador-client --scope openid --authorized_grant_types authorization_code,refresh_token --redirect_uri {AMBASSADOR_URL}/.ambassador/oauth2/redirection-endpoint --secret CLIENT_SECRET
+   ```
+
+   **Note:** Replace `{AMBASSADOR_URL}` with the IP address or DNS name of your $AESproductName$ load balancer.
+
+## Configure Filter and FilterPolicy
+
+Configure your OAuth `Filter` and `FilterPolicy` with the following:
+
+Use the clientID (`ambassador`) and secret (`CLIENT_SECRET`) from Step 2 to configure the OAuth `Filter`.
+
+   ```yaml
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Filter
+   metadata:
+     name: uaa-filter
+     namespace: default
+   spec:
+     OAuth2:
+       authorizationURL: {UAA_DOMAIN}
+       audience: {UAA_DOMAIN}
+       clientID: ambassador
+       secret: CLIENT_SECRET
+       protectedOrigins:
+       - origin: https://datawire-ambassador.com
+   ```
+
+   **Note:** The `authorizationURL` and `audience` are the same for UAA configuration.
+
+   ```yaml
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: FilterPolicy
+   metadata:
+     name: httpbin-policy
+     namespace: default
+   spec:
+     rules:
+       - host: "*"
+         path: /httpbin/ip
+         filters:
+           - name: uaa-filter ## Enter the Filter name from above
+             arguments:
+               scope:
+                 - "openid"
+   ```
+
+**Note:** The `scope` field was set when creating the client in Step 2. You can add any scope values you would like when creating the client.
diff --git a/docs/edge-stack/latest/howtos/token-ratelimit.md b/docs/edge-stack/latest/howtos/token-ratelimit.md
new file mode 100644
index 000000000..42a6b454e
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/token-ratelimit.md
@@ -0,0 +1,154 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Rate limiting on token claims
+
+This guide applies to $AESproductName$; it is not recommended for use with $OSSproductName$.
+
+$AESproductName$ is able to perform rate limiting based on JWT token claims from either a JWT or OAuth2 Filter implementation. This is because $AESproductName$ deliberately calls the `ext_authz` filter in Envoy as the first step when processing incoming requests. In $AESproductName$, the `ext_authz` filter is implemented as a [Filter resource](../../topics/using/filters/). This explicitly means that $AESproductName$ Filters are ALWAYS processed prior to RateLimit implementations. As a result, you can use the `injectRequestHeader` field in either a JWT Filter or an OAuth Filter and pass that header along to be used for rate limiting purposes.
+
+## Prerequisites
+
+- $AESproductName$
+- A working Keycloak instance and Keycloak Filter
+- A service exposed with a Mapping and protected by a FilterPolicy
+
+We'll use Keycloak to generate tokens with unique claims. It will work in a similar manner for any claims present on a JWT token issued by any other provider. See our guide here on using Keycloak with $AESproductName$.
+
+Here is a YAML example that describes the setup:
+
+```yaml
+---
+# Mapping to expose the Quote service
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: "*"
+  prefix: /backend/
+  service: quote
+---
+# Basic OAuth filter for Keycloak
+apiVersion: getambassador.io/v3alpha1
+kind: Filter
+metadata:
+  name: keycloak-filter-ambassador
+spec:
+  OAuth2:
+    authorizationURL: https:///auth/realms/
+    audience:
+    clientID:
+    secret:
+    protectedOrigins:
+    - origin: https://host.example.com
+---
+# Basic FilterPolicy that covers everything
+apiVersion: getambassador.io/v3alpha1
+kind: FilterPolicy
+metadata:
+  name: ambassador-policy
+spec:
+  rules:
+    - host: "*"
+      path: "*"
+      filters:
+        - name: keycloak-filter-ambassador
+```
+
+## 1. Configure the Filter to extract the claim
+
+In order to extract the claim, we need to have the Filter use the `injectRequestHeader` config and use a Go template to pull out the exact value of the `name` claim in our access token JWT and put it in a header for our RateLimit to catch. Configuration is similar for both [OAuth2](../../topics/using/filters/oauth2/#oauth-resource-server-settings) and [JWT](../../topics/using/filters/jwt/).
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Filter
+metadata:
+  name: keycloak-filter-ambassador
+spec:
+  OAuth2:
+    authorizationURL: https:///auth/realms/
+    audience:
+    clientID:
+    secret:
+    protectedOrigins:
+    - origin: https://host.example.com
+    injectRequestHeaders:
+    - name: "x-token-name"
+      value: "{{ .token.Claims.name }}" # This extracts the "name" claim and puts it in the "x-token-name" header.
+```
+
+## 2. Add Labels to our Mapping
+
+Now that the header is properly added, we need to add a label to the Mapping of the service that we want to rate limit. This determines whether the route established by the Mapping will use a label when $AESproductName$ is processing where to send the request. If so, the labels are added as metadata and sent to the `RateLimitService` to determine whether or not the request should be rate-limited.
+
+Use `ambassador` as the label domain, unless you have already set up $AESproductName$ to use something else.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: "*"
+  prefix: /backend/
+  service: quote
+  labels:
+    ambassador:
+      - header_request_label:
+          - request_headers:
+              key: headerkey              # In pattern matching, the key queried will be "headerkey" and the value
+              header_name: "x-token-name" # queried will be the value of the "x-token-name" header
+```
+
+## 3. Create our RateLimit
+
+We now have appropriate labels added to the request when we send it to the rate limit service, but how do we know what rate limit to apply and how many requests to allow before returning an error? This is where the RateLimit comes in. The RateLimit allows us to create specific rules based on the labels associated with a particular request. If a value is not specified, then each unique value of the `x-token-name` header that comes in will be associated with its own counter. So, someone with a `name` JWT claim of "Julian" will be tracked separately from "Jane".
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: RateLimit
+metadata:
+  name: token-name-rate-limit
+spec:
+  domain: ambassador
+  limits:
+    - name: token-name-per-minute
+      action: Enforce
+      pattern:
+        - headerkey: "" # Each unique header value of "x-token-name" will be tracked individually
+      rate: 10
+      unit: "minute" # Per-minute tracking is useful for debugging
+```
+
+## 4. Test
+
+Now we can navigate to our backend in a browser at `https://host.example.com/backend/`. After logging in, if we keep refreshing, we will find that our 11th attempt within a minute will respond with a blank page. Success!
+
+## 5. Enforce a different rate limit for a specific user
+
+We've noticed that the user "Julian" uses bad code that abuses the API and consumes way too much bandwidth with his retries. As such, we want a user with the exact `name` claim of "Julian" to only get 2 requests per minute before getting an error.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: RateLimit
+metadata:
+  name: token-name-rate-limit
+spec:
+  domain: ambassador
+  limits:
+    - name: julians-rule-enforcement
+      action: Enforce
+      pattern:
+        - headerkey: "Julian" # Only matches for x-token-name = "Julian"
+      rate: 2
+      unit: "minute"
+    - name: token-name-per-minute
+      action: Enforce
+      pattern:
+        - headerkey: "" # Each unique header value of "x-token-name" will be tracked individually
+      rate: 10
+      unit: "minute" # Per-minute tracking is useful for debugging
+```
+
+This tutorial only scratches the surface of the rate limiting capabilities of $AESproductName$.
Please see our documentation [here](../../topics/using/rate-limits/) and [here](../../topics/using/rate-limits/rate-limits/) to learn more about how you can use rate limiting.
diff --git a/docs/edge-stack/latest/howtos/web-application-firewalls-config.md b/docs/edge-stack/latest/howtos/web-application-firewalls-config.md
new file mode 100644
index 000000000..323e3da78
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/web-application-firewalls-config.md
@@ -0,0 +1,59 @@
+---
+ Title: Configuring Web Application Firewall rules in Edge Stack
+ description: Get Web Application Firewalls quickly set up with Edge Stack and create custom firewall rules.
+---
+
+# Configuring Web Application Firewall rules in $productName$
+
+When writing your own firewall rules, it's important to first take note of a few ways that $productName$'s `WebApplicationFirewalls` work.
+
+1. Requests are either denied or allowed; redirects and dropped requests are not supported.
+2. If you have a rule in your firewall configuration that specifies the `deny` action and you do not specify a `status`, then we will default to
+using status code `403`.
+3. State is not preserved across the different phases of processing a request. For this reason, it is advised to use early blocking mode
+rather than anomaly scoring mode, and to avoid creating any firewall rules that require state or information created by rules in a different phase. For more information about WAF phases, refer to the [Coraza Seclang Execution Flow docs][].
+
+## Ambassador Labs Firewall Ruleset
+
+Ambassador Labs publishes and maintains a set of firewall rules that are ready to use.
+The latest version of the Ambassador Labs Web Application Firewall ruleset can be downloaded with these commands:
+
+```bash
+wget https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf
+wget https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf
+wget https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf
+```
+
+Each file must be imported into $productName$'s Web Application Firewall in the following order:
+
+1. aes-waf.conf
+2. crs-setup.conf
+3. waf-rules.conf
+
+The Ambassador Labs ruleset largely focuses on incoming requests; by default, it does not process response bodies from upstream services, in order to minimize request round-trip latency.
+
+If processing of responses is desired, you can create your own custom ruleset, or add additional rules loaded after the Ambassador Labs ruleset to perform custom validation of responses from upstream services.
+
+If you are adding rules to process response bodies after the Ambassador Labs ruleset, then you will need to set `SecResponseBodyAccess On` in your rules to enable access to the response body.
+
+If you'd like to customize the Ambassador Labs default ruleset, you can load your own files before or after waf-rules.conf. Keep in mind that the `WebApplicationFirewall` resource loads firewall configurations via a list of rules sources, and sources lower in the list can overwrite rules and settings from sources higher in the list. See files [REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example][] and [RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example][] for more information.
+
+## Web Application Firewall Rules Release Notes
+
+
+To install any of the rules below, import all the files for the desired version in the order they are listed.
+
+
+### Version v1-20230825
+
+Initial version of $productName$'s Web Application Firewall rules.
+
+Files:
+
+- [aes-waf.conf](https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf)
+- [crs-setup.conf](https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf)
+- [waf-rules.conf](https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf)
+
+[REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example]: https://github.com/coreruleset/coreruleset/blob/v4.0/dev/rules/REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example
+[RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example]: https://github.com/coreruleset/coreruleset/blob/v4.0/dev/rules/RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example
+[Coraza Seclang Execution Flow docs]: https://coraza.io/docs/seclang/execution-flow/
diff --git a/docs/edge-stack/latest/howtos/web-application-firewalls-in-production.md b/docs/edge-stack/latest/howtos/web-application-firewalls-in-production.md
new file mode 100644
index 000000000..4b8d61db3
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/web-application-firewalls-in-production.md
@@ -0,0 +1,223 @@
+---
+ Title: Using Web Application Firewalls in production
+ description: Learn about best practices for enabling Edge Stack's Web Application Firewalls in production environments
+---
+
+# Using Web Application Firewalls in production
+
+By default, Ambassador Labs rules are configured to block malicious requests. However, when a Web Application Firewall is
+first deployed in a production environment, it is recommended to run it in a non-blocking mode and monitor its behavior
+to identify potential issues.
+
+Follow this procedure to deploy $productName$'s Web Application Firewall in detection-only mode and
+customize the rules.
+
+1. Enable Detection Only mode. Detection Only mode will run all rules, but won't execute any disruptive actions.
+   This is configured using the directive [SecRuleEngine][].
+
+   You also want to enable debug logs, which are necessary to identify false positives. You can enable them in the
+   `WebApplicationFirewall` resource as described in the [documentation][].
+
+   Optionally, Coraza debug logs can be enabled by setting the directive [SecDebugLogLevel][]. These logs are very verbose
+   but can help identify issues when the `WebApplicationFirewall` logs don't show enough information.
+
+   The following example illustrates this:
+
+   ```yaml
+   apiVersion: v1
+   kind: ConfigMap
+   metadata:
+     name: "waf-configuration"
+   data:
+     waf-overrides.conf: |
+       SecRuleEngine DetectionOnly
+       SecDebugLogLevel 4
+
+   ---
+
+   apiVersion: gateway.getambassador.io/v1alpha1
+   kind: WebApplicationFirewall
+   metadata:
+     name: "waf-rules"
+   spec:
+     firewallRules:
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf"
+       - configMapRef:
+           key: waf-overrides.conf
+           name: waf-configuration
+         sourceType: configmap
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf"
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf"
+     logging:
+       onInterrupt:
+         enabled: true
+   ```
+
+2. Identify false positives. $productName$'s container logs will have one or more entries indicating which rules
+   were applied to a request and why.
+
+   For example, the following log entry (formatted for readability) shows that a request to `https://34.123.92.3/backend/` was
+   blocked by rule 920350 because the Host header contains an IP address.
+
+   ```text
+   2023-06-14T17:37:29.145Z INFO waf/manager.go:73 request interrupted by waf: default/example-waf
+   {
+     "message": "Host header is a numeric IP address",
+     "data": "34.123.92.3",
+     "uri": "https://34.123.92.3/backend/",
+     "disruptive": true,
+     "matchedDatas": [
+       {
+         "Variable_": 54,
+         "Key_": "Host",
+         "Value_": "34.123.92.3",
+         "Message_": "Host header is a numeric IP address",
+         "Data_": "34.123.92.3",
+         "ChainLevel_": 0
+       }
+     ],
+     "rule": {
+       "ID_": 920350,
+       "File_": "",
+       "Line_": 9892,
+       "Rev_": "",
+       "Severity_": 4,
+       "Version_": "OWASP_CRS/4.0.0-rc1",
+       "Tags_": [
+         "application-multi",
+         "language-multi",
+         "platform-multi",
+         "attack-protocol",
+         "paranoia-level/1",
+         "OWASP_CRS",
+         "capec/1000/210/272",
+         "PCI/6.5.10"
+       ],
+       "Maturity_": 0,
+       "Accuracy_": 0,
+       "Operator_": "",
+       "Phase_": 1,
+       "Raw_": "SecRule REQUEST_HEADERS:Host \"@rx (?:^([\\d.]+|\\[[\\da-f:]+\\]|[\\da-f:]+)(:[\\d]+)?$)\" \"id:920350,phase:1,block,t:none,msg:'Host header is a numeric IP address',logdata:'%{MATCHED_VAR}',tag:'application-multi',tag:'language-multi',tag:'platform-multi',tag:'attack-protocol',tag:'paranoia-level/1',tag:'OWASP_CRS',tag:'capec/1000/210/272',tag:'PCI/6.5.10',ver:'OWASP_CRS/4.0.0-rc1',severity:'WARNING',setvar:'tx.inbound_anomaly_score_pl1=+%{tx.warning_anomaly_score}'\"",
+       "SecMark_": ""
+     }
+   }
+   ```
+
+   If you enabled Coraza debug logs, use the rule ID to identify entries that are not important as follows:
+
+   - Rules in the range 900000 to 901999 define some Coraza behaviors and can be ignored.
+
+   - Rules like the one below are used to skip other rules and can be ignored as well.
+
+     ```text
+     SecRule TX:DETECTION_PARANOIA_LEVEL "@lt 1" "id:911012,phase:2,pass,nolog,skipAfter:END-REQUEST-911-METHOD-ENFORCEMENT"
+     ```
+
+
+   Each Web Application Firewall configuration file has rules in predefined ranges as follows: rules in the range
+   900000 to 900999 are in crs-setup.conf, rule IDs 901000 to 999999 are in waf-rules.conf, and all other rules are in aes-waf.conf.
+
+
+
+## Customizing Ambassador Labs rules
+
+There are two options to configure if and when a rule runs:
+1. Disable a rule completely.
+2. Apply a rule only to some requests.
+
+### Disabling a rule completely
+
+To disable a rule, follow the instructions in [RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example][], save the
+configuration as a ConfigMap, and load it after `waf-rules.conf`.
+
+For example, let's say that we want to disable the rule with ID `913110`. The first step is to create the configuration:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: "waf-configuration"
+data:
+  disabled-rules.conf: |
+    SecRuleRemoveById 913110
+```
+
+And then load it after `waf-rules.conf`:
+
+```yaml
+apiVersion: gateway.getambassador.io/v1alpha1
+kind: WebApplicationFirewall
+metadata:
+  name: "waf-rules"
+spec:
+  firewallRules:
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf"
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf"
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf"
+    - configMapRef:
+        key: disabled-rules.conf
+        name: waf-configuration
+      sourceType: configmap
+```
+
+### Applying a rule to some requests
+
+To apply a rule only to some requests, update it as described in [REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example][] and
+load the new settings before `waf-rules.conf`.
+
+The following example shows how to disable all rules tagged `attack-sqli` when the URI does not start with `/api/`:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: "waf-configuration"
+data:
+  website-rules.conf: |
+    SecRule REQUEST_URI "!@beginsWith /api/" \
+      "id:1000,\
+      phase:2,\
+      pass,\
+      nolog,\
+      ctl:ruleRemoveByTag=attack-sqli"
+
+---
+
+apiVersion: gateway.getambassador.io/v1alpha1
+kind: WebApplicationFirewall
+metadata:
+  name: "waf-rules"
+spec:
+  firewallRules:
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf"
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf"
+    - configMapRef:
+        key: website-rules.conf
+        name: waf-configuration
+      sourceType: configmap
+    - sourceType: "http"
+      http:
+        url: "https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf"
+```
+
+[SecRuleEngine]: https://coraza.io/docs/seclang/directives/#secruleengine
+[SecDebugLogLevel]: https://coraza.io/docs/seclang/directives/#secdebugloglevel
+[REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example]: https://github.com/coreruleset/coreruleset/blob/v4.0/dev/rules/REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example
+[RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example]: https://github.com/coreruleset/coreruleset/blob/v4.0/dev/rules/RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf.example
+[documentation]: ../web-application-firewalls
diff --git a/docs/edge-stack/latest/howtos/web-application-firewalls.md b/docs/edge-stack/latest/howtos/web-application-firewalls.md
new file mode 100644
index 000000000..2872d02f0
--- /dev/null
+++ b/docs/edge-stack/latest/howtos/web-application-firewalls.md
@@ -0,0 +1,250 @@
+---
+ Title: Protect your services with Edge Stack's Web Application Firewalls
+ description: Quickly block common attacks in the OWASP Top 10 vulnerabilities like cross-site-scripting (XSS) and SQL injection with Edge Stack's self-service Web Application Firewalls (WAF)
+---
+
+# Web Application Firewalls in $productName$
+
+[$productName$][] comes fully equipped with a Web Application Firewall solution (commonly referred to as WAF) that is easy to set up and can be configured to help protect your web applications by preventing and mitigating many common attacks. To accomplish this, the [Coraza Web Application Firewall library][] is used to check incoming requests against a user-defined configuration file containing rules and settings for the firewall, which determine whether to allow or deny each request.
+
+$productName$ also has additional security features such as [Filters][] and [Rate Limiting][]. When `Filters`, `Ratelimits`, and `WebApplicationFirewalls` are all used at the same time, the order of operations is as follows and is not currently configurable.
+
+1. `WebApplicationFirewalls` are always executed first.
+2. `Filters` are executed next (so long as no configured `WebApplicationFirewalls` already rejected the request).
+3. Lastly, `Ratelimits` are executed (so long as no configured `WebApplicationFirewalls` or `Filters` already rejected the request).
+
+## The Web Application Firewalls Resource
+
+In $productName$, the `WebApplicationFirewall` resource defines the configuration for an instance of the firewall.
+
+```yaml
+---
+apiVersion: gateway.getambassador.io/v1alpha1
+kind: WebApplicationFirewall
+metadata:
+  name: "example-waf"
+  namespace: "example-namespace"
+spec:
+  ambassadorSelector: # optional; default = {ambassadorIds: ["default"]}
+    ambassadorIds: []string # optional; default = ["default"]
+  firewallRules: # required; One of configMapRef;file;http must be set below
+    - sourceType: "enum" # required; allowed values are file;configmap;http
+      configMapRef: # optional
+        name: "string" # required
+        namespace: "string" # required
+        key: "string" # required
+      file: "string" # optional
+      http: # optional
+        url: "string" # required; must be a valid URL.
+  logging: # optional
+    onInterrupt: # required
+      enabled: bool # required
+status: # set and updated by application
+  conditions: []metav1.Condition
+```
+
+- `ambassadorSelector`: Configures how this resource is allowed to be watched/used by instances of Edge Stack.
+  - `ambassadorIds`: This optional field allows you to limit which instances of $productName$ can watch and use this resource. This allows for the separation of resources when running multiple instances of $productName$ in the same Kubernetes cluster. Additional documentation on [configuring Ambassador IDs can be found here][]. By default, all instances of $productName$ will be able to watch and use this resource.
+- `firewallRules`: Defines the rules to be used for the Web Application Firewall.
+  - `sourceType`: Identifies which method is being used to load the firewall rules. Value must be one of `configMapRef`;`file`;`http`. The value corresponds to the following fields for configuring the selected method.
+  - `configMapRef`: Defines a reference to a `ConfigMap` in the Kubernetes cluster to load firewall rules from.
+    - `name`: Name of the `ConfigMap`.
+    - `namespace`: Namespace of the `ConfigMap`. It must be an RFC 1123 label. Valid values include: `"example"`. Invalid values include: `"example.com"` - `"."` is an invalid character. The maximum allowed length is `63` characters and the regex pattern `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$` is used for validation.
+    - `key`: The key in the `ConfigMap` to pull the rules data from.
+  - `file`: Location of a file on disk to load the firewall rules from, for example `"/ambassador/firewall/waf.conf"` (see the sketch after this list).
+  - `http`: Configuration for downloading firewall rules from the internet. The rules will only be downloaded once when the `WebApplicationFirewall` is loaded. The rules will then be cached in-memory until a restart of $productName$ occurs or the `WebApplicationFirewall` is modified.
+    - `url`: URL to fetch firewall rules from. If the rules cannot be downloaded or parsed from the provided URL for whatever reason, requests matched to this `WebApplicationFirewall` will be allowed/denied based on the `onError` configuration of the matching `WebApplicationFirewallPolicy`.
+- `logging`: Provides a way to configure additional logging in the $productName$ pods for the `WebApplicationFirewall`. This is in addition to the logging config that is available via the firewall configuration files. The following logs will always be output to the container logs when enabled.
+  - `onInterrupt`: Controls logging behavior when the WebApplicationFirewall interrupts a request.
+    - `enabled`: Configures whether the container should output logs.
+
+`status`: This field is automatically set to reflect the status of the `WebApplicationFirewall`.
+
+- `conditions`: Conditions describe the current conditions of the `WebApplicationFirewall`; known conditions are `Accepted`;`Ready`;`Rejected`.
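+
+As a concrete illustration of the `file` source described above, here is a minimal sketch of a `WebApplicationFirewall` that loads rules from a file on disk. The resource name and path are hypothetical, and the file must already be present inside the $productName$ container (for example, baked into the image or mounted from a volume):
+
+```yaml
+---
+apiVersion: gateway.getambassador.io/v1alpha1
+kind: WebApplicationFirewall
+metadata:
+  name: "file-based-waf"          # hypothetical name
+  namespace: "example-namespace"
+spec:
+  firewallRules:
+    - sourceType: "file"
+      # Hypothetical path; the file must already exist in the $productName$ pod.
+      file: "/ambassador/firewall/waf.conf"
+  logging:
+    onInterrupt:
+      enabled: true
+```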
+
+## The Web Application Firewalls Policy Resource
+
+The `WebApplicationFirewallPolicy` resource controls which requests to match on and which `WebApplicationFirewall` configuration to use. This gives users total control over the firewall configuration and when it is executed on requests. It is possible to set up multiple different firewall configurations for specific requests or a single firewall configuration that is applied to all requests.
+
+```yaml
+---
+apiVersion: gateway.getambassador.io/v1alpha1
+kind: WebApplicationFirewallPolicy
+metadata:
+  name: "example-waf-policy"
+  namespace: "example-namespace"
+spec:
+  ambassadorSelector: # optional; default = {ambassadorIds: ["default"]}
+    ambassadorIds: []string # optional; default = ["default"]
+  rules: # required
+    - host: "string" # optional; default = * (runs on all hosts)
+      path: "string" # optional; default = * (runs on all paths)
+      ifRequestHeader: # optional
+        type: "enum" # optional; allowed values are Exact;RegularExpression
+        name: "string" # required
+        value: "string" # optional
+        negate: bool # optional; default: false
+      wafRef: # required
+        name: "string" # required
+        namespace: "string" # required
+      onError: # optional; min=400, max=599
+        statusCode: int # required
+      precedence: int # optional
+status: # set and updated by application
+  conditions: []metav1.Condition
+  ruleStatuses:
+    - index: int
+      host: "string"
+      path: "string"
+      conditions: []metav1.Condition
+```
+
+`spec`: Defines which requests to match on and which `WebApplicationFirewall` to be used against those requests.
+
+- `ambassadorSelector`: Configures how this resource is allowed to be watched/used by instances of Edge Stack.
+  - `ambassadorIds`: This optional field allows you to limit which instances of $productName$ can watch and use this resource. This allows for the separation of resources when running multiple instances of $productName$ in the same Kubernetes cluster. Additional documentation on [configuring Ambassador IDs can be found here][]. By default, all instances of $productName$ will be able to watch and use this resource.
+- `rules`: This object configures matching requests and executes `WebApplicationFirewall`s on them.
+  - `host`: Host is a "glob-string" that matches on the `:authority` header of the incoming request. If not set, it will match on all incoming requests.
+  - `path`: Path is a "glob-string" that matches on the request path. If not provided, then it will match on all incoming requests.
+  - `ifRequestHeader`: Checks whether an exact string or regular expression matches the value of a request header, to determine whether the `WebApplicationFirewall` is executed or not.
+    - `type`: Specifies how to match against the value of the header. Allowed values are `Exact`;`RegularExpression`.
+    - `name`: Name of the HTTP Header to be matched. Name matching MUST be case-insensitive.
+    - `value`: Value of HTTP Header to be matched. If type is `RegularExpression`, then this must be a valid regex with a length of at least 1.
+    - `negate`: Allows the match criteria to be negated or flipped.
+  - `wafRef`: A reference to a `WebApplicationFirewall` to be applied against the request.
+    - `name`: Identifies the `WebApplicationFirewall`.
+    - `namespace`: Namespace of the `WebApplicationFirewall`. This field is required. It must be an RFC 1123 label. Valid values include: `"example"`. Invalid values include: `"example.com"` - `"."` is an invalid character. 
The maximum allowed length is `63` characters, and the regex pattern `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$` is used for validation.
+  - `onError`: Provides a way to configure how requests are handled when a request matches the rule but there is a configuration or runtime error. By default, requests are allowed on error if this field is not configured. This covers runtime errors such as those caused by networking/request parsing as well as configuration errors such as when the referenced `WebApplicationFirewall` is misconfigured, cannot be found, or its configuration cannot be loaded properly. Details about the errors can be found either in the `WebApplicationFirewall` status or container logs.
+    - `statusCode`: The status code to return to downstream clients when an error occurs.
+  - `precedence`: Allows forcing a precedence ordering on the rules. By default, the rules are evaluated in the order they appear in the `WebApplicationFirewallPolicy.spec.rules` field. However, multiple `WebApplicationFirewallPolicy` resources can be applied to a cluster. `precedence` can optionally be used to ensure that a specific ordering is enforced.
+
+`status`: This field is automatically set to reflect the status of the `WebApplicationFirewallPolicy`.
+
+- `conditions`: Conditions describe the current conditions of the `WebApplicationFirewallPolicy`; known conditions are `Accepted`;`Ready`;`Rejected`. If any rules have an error then the whole `WebApplicationFirewallPolicy` will be rejected.
+- `ruleStatuses`:
+  - `index`: Provides the zero-based index in the list of Rules to help identify the rule with an error.
+  - `host`: Host of the rule with the error.
+  - `path`: Path of the rule with the error.
+  - `conditions`: Describe the current condition of this Rule. Known values are `Accepted`;`Ready`;`Rejected`. If any rules have an error then the whole `WebApplicationFirewallPolicy` will be rejected.
+
+## Quickstart
+
+1. First, start by creating your firewall configuration. The example below downloads [the firewall rules][] published by [Ambassador Labs][] and applies them to all requests, but you are free to write your own or use the published rules as a reference.
+
+   ```yaml
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: gateway.getambassador.io/v1alpha1
+   kind: WebApplicationFirewall
+   metadata:
+     name: "example-waf"
+     namespace: "default"
+   spec:
+     firewallRules:
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/aes-waf.conf"
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/crs-setup.conf"
+       - sourceType: "http"
+         http:
+           url: "https://app.getambassador.io/download/waf/v1-20230825/waf-rules.conf"
+   ---
+   apiVersion: gateway.getambassador.io/v1alpha1
+   kind: WebApplicationFirewallPolicy
+   metadata:
+     name: "example-waf-policy"
+     namespace: "default"
+   spec:
+     rules:
+       - wafRef:
+           name: "example-waf"
+           namespace: "default"
+   EOF
+   ```
+
+2. Then verify that the firewall is active by sending a request that the ruleset considers malicious. For example, the published rules deny requests that advertise a security scanner, so a request like the following should be rejected with a `403` status code:
+
+   ```
+   curl https://{{AMBASSADOR_URL}}/test -H 'User-Agent: Arachni/0.2.1'
+   ```
+
+Congratulations, you've successfully set up a Web Application Firewall to secure all requests coming into $productName$.
+
+
+  After applying your WebApplicationFirewall and WebApplicationFirewallPolicy resources, check their statuses to make sure that they were not rejected due to any configuration errors.
+
+
+## Rules for Web Application Firewalls
+
+Since the [Coraza Web Application Firewall library][] powers $productName$'s Web Application Firewall implementation, the firewall rules configuration uses [Coraza's Seclang syntax][], which is compatible with the OWASP Core Rule Set.
+
+Ambassador Labs publishes and maintains a set of rules to be used with the Web Application Firewall that should be a good solution for most users; [Coraza also provides their own ruleset][] based on the [OWASP][] core rule set. The Ambassador Labs ruleset also satisfies [PCI 6.6][] compliance requirements.
+
+Ambassador Labs rules differ from the OWASP Core ruleset in the following areas:
+
+- The WAF engine is enabled by default (see the sketch below).
+- A more comprehensive set of rules is enabled, including rules related to compliance with PCI DSS 6.5 and 12.1 requirements.
+
+See [Configuring $productName$'s Web Application Firewall rules][] for more information about installing Ambassador Labs rules.
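+
+To illustrate the first difference above: the Ambassador Labs ruleset already turns the engine on, so if you instead build on the upstream OWASP CRS you would need to enable it yourself. A minimal sketch of such an override (the ConfigMap and key names here are arbitrary assumptions) would be loaded ahead of the CRS rules via a `configMapRef` entry in `firewallRules`, just as in the examples above:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: "crs-overrides" # hypothetical name
+data:
+  engine-on.conf: |
+    # The upstream CRS leaves engine activation to the user;
+    # this directive turns the WAF engine on in blocking mode.
+    SecRuleEngine On
+```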
+
+For specific information about rule configuration, please refer to [Coraza's Seclang documentation][].
+
+## Observability
+
+To make using $productName$'s Web Application Firewall system easier and to enable automated workflows and alerts, there are three main methods of observability for Web Application Firewall behavior.
+
+### Logging
+
+$productName$ will log information about requests approved and denied by any `WebApplicationFirewalls`, along with the reason why the request was denied.
+You can configure where logs are sent and how much information is logged via the logging policies in the [coraza rules configuration][].
+Ambassador Labs' default ruleset sends the WAF logs to stdout so they show up in the container logs.
+
+### Metrics
+
+$productName$ also outputs metrics about the Web Application Firewall, including the number of requests approved and denied, and performance information.
+
+| Metric                              | Type                  | Description                                                                                   |
+|-------------------------------------|-----------------------|-----------------------------------------------------------------------------------------------|
+| `waf_created_wafs`                  | Gauge                 | Number of created web application firewalls                                                   |
+| `waf_managed_wafs_total`            | Counter               | Number of managed web application firewalls                                                   |
+| `waf_added_latency_ms`              | Histogram             | Added latency in milliseconds                                                                 |
+| `waf_total_denied_requests_total`   | Counter (with labels) | Number of requests denied by any web application firewall                                     |
+| `waf_total_denied_responses_total`  | Counter (with labels) | Number of responses denied by any web application firewall                                    |
+| `waf_denied_breakdown_total`        | Counter (with labels) | Breakdown of requests/responses denied and the web application firewall that denied them      |
+| `waf_total_allowed_requests_total`  | Counter (with labels) | Number of requests allowed by any web application firewall                                    |
+| `waf_total_allowed_responses_total` | Counter (with labels) | Number of responses allowed by any web application firewall                                   |
+| `waf_allowed_breakdown_total`       | Counter (with labels) | Breakdown of requests/responses allowed and the web application firewall that allowed them    |
+| `waf_errors`                        | Counter (with labels) | Tracker for any errors encountered by web application firewalls and the reason for the error  |
+
+### Grafana Dashboard
+
+$productName$ provides a [Grafana dashboard][] that can be imported into [Grafana][]. The dashboard has pre-built panels that help visualize the metrics that are collected about Web Application Firewall activity. For more information about getting [Prometheus][] and Grafana set up for gathering and visualizing metrics from $productName$, please refer to the [Prometheus and Grafana documentation][].
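+
+As one sketch of how the metrics above might be consumed, a Prometheus alerting rule could fire whenever the firewall reports errors. The group name, alert name, and threshold below are illustrative assumptions, not part of $productName$:
+
+```yaml
+# Sketch of a Prometheus alerting rule built on the WAF metrics above.
+# The group name, alert name, and threshold are illustrative assumptions.
+groups:
+  - name: edge-stack-waf
+    rules:
+      - alert: WafErrorsObserved
+        # Fires when any WAF error counter increased over the last 5 minutes.
+        expr: sum(increase(waf_errors[5m])) > 0
+        for: 5m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Web Application Firewall reported errors in the last 5 minutes"
+```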
+ +[Coraza Web Application Firewall library]: https://coraza.io/docs/tutorials/introduction/ +[Filters]: ../../topics/using/filters +[Rate limiting]: ../../topics/using/rate-limits/rate-limits#rate-limiting-reference +[Coraza's Seclang syntax]: https://coraza.io/docs/seclang/directives/ +[Coraza also provides their own ruleset]: https://coraza.io/docs/tutorials/coreruleset/ +[Coraza's Seclang documentation]: https://coraza.io/docs/seclang/ +[OWASP]: https://owasp.org/ +[PCI 6.6]: https://listings.pcisecuritystandards.org/documents/information_supplement_6.6.pdf +[Grafana dashboard]: https://grafana.com/grafana/dashboards/4698-ambassador-edge-stack/ +[Grafana]: https://grafana.com/ +[Prometheus]: https://prometheus.io/docs/introduction/overview/ +[Prometheus and Grafana documentation]:../prometheus +[configuring Ambassador IDs can be found here]: ../../topics/running/running#ambassador_id +[$productName$]: https://www.getambassador.io/products/edge-stack/api-gateway +[Ambassador Labs]: https://www.getambassador.io/ +[Configuring $productName$'s Web Application Firewall rules]: ../web-application-firewalls-config +[coraza rules configuration]: https://coraza.io/docs/seclang/directives/#secauditlog +[the firewall rules]: ../web-application-firewalls-config diff --git a/docs/edge-stack/latest/images/Auth0_audience.png b/docs/edge-stack/latest/images/Auth0_audience.png new file mode 100644 index 000000000..6cb706817 Binary files /dev/null and b/docs/edge-stack/latest/images/Auth0_audience.png differ diff --git a/docs/edge-stack/latest/images/Auth0_none.png b/docs/edge-stack/latest/images/Auth0_none.png new file mode 100644 index 000000000..1e87f6c0e Binary files /dev/null and b/docs/edge-stack/latest/images/Auth0_none.png differ diff --git a/docs/edge-stack/latest/images/Auth0_secret.png b/docs/edge-stack/latest/images/Auth0_secret.png new file mode 100644 index 000000000..d0636a50d Binary files /dev/null and b/docs/edge-stack/latest/images/Auth0_secret.png differ diff --git a/docs/edge-stack/latest/images/ambassador_oidc_flow.jpg b/docs/edge-stack/latest/images/ambassador_oidc_flow.jpg new file mode 100644 index 000000000..4f1c0c7e6 Binary files /dev/null and b/docs/edge-stack/latest/images/ambassador_oidc_flow.jpg differ diff --git a/docs/edge-stack/latest/images/create-application.png b/docs/edge-stack/latest/images/create-application.png new file mode 100644 index 000000000..d181be2ed Binary files /dev/null and b/docs/edge-stack/latest/images/create-application.png differ diff --git a/docs/edge-stack/latest/images/docker.png b/docs/edge-stack/latest/images/docker.png new file mode 100644 index 000000000..1f35e5ea4 Binary files /dev/null and b/docs/edge-stack/latest/images/docker.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.10-consul-cert-log.png b/docs/edge-stack/latest/images/edge-stack-1.13.10-consul-cert-log.png new file mode 100644 index 000000000..1e045bf42 Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.10-consul-cert-log.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.10-docs-timeout.png b/docs/edge-stack/latest/images/edge-stack-1.13.10-docs-timeout.png new file mode 100644 index 000000000..1dc9087be Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.10-docs-timeout.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.4.png b/docs/edge-stack/latest/images/edge-stack-1.13.4.png new file mode 100644 index 000000000..954ac1a9c Binary files /dev/null and 
b/docs/edge-stack/latest/images/edge-stack-1.13.4.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.7-json-logging.png b/docs/edge-stack/latest/images/edge-stack-1.13.7-json-logging.png new file mode 100644 index 000000000..4a47cbdfc Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.7-json-logging.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.7-memory.png b/docs/edge-stack/latest/images/edge-stack-1.13.7-memory.png new file mode 100644 index 000000000..9c415ba36 Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.7-memory.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.7-tcpmapping-consul.png b/docs/edge-stack/latest/images/edge-stack-1.13.7-tcpmapping-consul.png new file mode 100644 index 000000000..c455a47f1 Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.7-tcpmapping-consul.png differ diff --git a/docs/edge-stack/latest/images/edge-stack-1.13.8-cloud-bugfix.png b/docs/edge-stack/latest/images/edge-stack-1.13.8-cloud-bugfix.png new file mode 100644 index 000000000..6beaf653b Binary files /dev/null and b/docs/edge-stack/latest/images/edge-stack-1.13.8-cloud-bugfix.png differ diff --git a/docs/edge-stack/latest/images/emissary-1.13.10-cors-origin.png b/docs/edge-stack/latest/images/emissary-1.13.10-cors-origin.png new file mode 100644 index 000000000..b7538e5f4 Binary files /dev/null and b/docs/edge-stack/latest/images/emissary-1.13.10-cors-origin.png differ diff --git a/docs/edge-stack/latest/images/helm-navy.png b/docs/edge-stack/latest/images/helm-navy.png new file mode 100644 index 000000000..a97101435 Binary files /dev/null and b/docs/edge-stack/latest/images/helm-navy.png differ diff --git a/docs/edge-stack/latest/images/jaeger.png b/docs/edge-stack/latest/images/jaeger.png new file mode 100644 index 000000000..3b821c09e Binary files /dev/null and b/docs/edge-stack/latest/images/jaeger.png differ diff --git a/docs/edge-stack/latest/images/kubernetes.png b/docs/edge-stack/latest/images/kubernetes.png new file mode 100644 index 000000000..a392a886b Binary files /dev/null and b/docs/edge-stack/latest/images/kubernetes.png differ diff --git a/docs/edge-stack/latest/images/machine-machine.png b/docs/edge-stack/latest/images/machine-machine.png new file mode 100644 index 000000000..32a112f9c Binary files /dev/null and b/docs/edge-stack/latest/images/machine-machine.png differ diff --git a/docs/edge-stack/latest/images/mapping-editor.png b/docs/edge-stack/latest/images/mapping-editor.png new file mode 100644 index 000000000..f8b751a19 Binary files /dev/null and b/docs/edge-stack/latest/images/mapping-editor.png differ diff --git a/docs/edge-stack/latest/images/scopes.png b/docs/edge-stack/latest/images/scopes.png new file mode 100644 index 000000000..f78d22a0c Binary files /dev/null and b/docs/edge-stack/latest/images/scopes.png differ diff --git a/docs/edge-stack/latest/images/xkcd.png b/docs/edge-stack/latest/images/xkcd.png new file mode 100644 index 000000000..ed0d5c33b Binary files /dev/null and b/docs/edge-stack/latest/images/xkcd.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-1.13.4.png b/docs/edge-stack/latest/release-notes/edge-stack-1.13.4.png new file mode 100644 index 000000000..954ac1a9c Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-1.13.4.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-json-logging.png 
b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-json-logging.png new file mode 100644 index 000000000..4a47cbdfc Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-json-logging.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-memory.png b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-memory.png new file mode 100644 index 000000000..9c415ba36 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-memory.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png new file mode 100644 index 000000000..c455a47f1 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png b/docs/edge-stack/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png new file mode 100644 index 000000000..6beaf653b Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-host_crd.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-host_crd.png new file mode 100644 index 000000000..c77ef5287 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-host_crd.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-ingressstatus.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-ingressstatus.png new file mode 100644 index 000000000..6856d308d Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-ingressstatus.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png new file mode 100644 index 000000000..79c20bad1 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-listener.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-listener.png new file mode 100644 index 000000000..ea45a02ba Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-listener.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-prune_routes.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-prune_routes.png new file mode 100644 index 000000000..bc43229fc Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-prune_routes.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-tlscontext.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-tlscontext.png new file mode 100644 index 000000000..68dbad807 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-tlscontext.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-v3alpha1.png b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-v3alpha1.png new file mode 100644 index 000000000..c0ac35962 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/edge-stack-2.0.0-v3alpha1.png differ diff --git a/docs/edge-stack/latest/release-notes/edge-stack-GA.png b/docs/edge-stack/latest/release-notes/edge-stack-GA.png new file mode 100644 index 000000000..2e6341881 Binary files /dev/null and 
b/docs/edge-stack/latest/release-notes/edge-stack-GA.png differ diff --git a/docs/edge-stack/latest/release-notes/emissary-1.13.10-cors-origin.png b/docs/edge-stack/latest/release-notes/emissary-1.13.10-cors-origin.png new file mode 100644 index 000000000..b7538e5f4 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/emissary-1.13.10-cors-origin.png differ diff --git a/docs/edge-stack/latest/release-notes/tada.png b/docs/edge-stack/latest/release-notes/tada.png new file mode 100644 index 000000000..c8832e8e3 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/tada.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.4-k8s-1.22.png b/docs/edge-stack/latest/release-notes/v2.0.4-k8s-1.22.png new file mode 100644 index 000000000..ed9b04158 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.4-k8s-1.22.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.4-l7depth.png b/docs/edge-stack/latest/release-notes/v2.0.4-l7depth.png new file mode 100644 index 000000000..9314324cb Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.4-l7depth.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.4-mapping-dns-type.png b/docs/edge-stack/latest/release-notes/v2.0.4-mapping-dns-type.png new file mode 100644 index 000000000..7770c77d2 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.4-mapping-dns-type.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.4-v3alpha1.png b/docs/edge-stack/latest/release-notes/v2.0.4-v3alpha1.png new file mode 100644 index 000000000..9c50b8fb8 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.4-v3alpha1.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.4-version.png b/docs/edge-stack/latest/release-notes/v2.0.4-version.png new file mode 100644 index 000000000..9481b7dbd Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.4-version.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.5-auth-circuit-breaker.png b/docs/edge-stack/latest/release-notes/v2.0.5-auth-circuit-breaker.png new file mode 100644 index 000000000..cac8cf7b2 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.5-auth-circuit-breaker.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.5-cache-change.png b/docs/edge-stack/latest/release-notes/v2.0.5-cache-change.png new file mode 100644 index 000000000..8471ab3fa Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.5-cache-change.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.0.5-mappingselector.png b/docs/edge-stack/latest/release-notes/v2.0.5-mappingselector.png new file mode 100644 index 000000000..31942ede6 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.0.5-mappingselector.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.0-canary.png b/docs/edge-stack/latest/release-notes/v2.1.0-canary.png new file mode 100644 index 000000000..39d3bbbfb Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.0-canary.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.0-edge-stack-validation.png b/docs/edge-stack/latest/release-notes/v2.1.0-edge-stack-validation.png new file mode 100644 index 000000000..dc82e2821 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.0-edge-stack-validation.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.0-gzip-enabled.png 
b/docs/edge-stack/latest/release-notes/v2.1.0-gzip-enabled.png new file mode 100644 index 000000000..061fcbc97 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.0-gzip-enabled.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.0-smoother-migration.png b/docs/edge-stack/latest/release-notes/v2.1.0-smoother-migration.png new file mode 100644 index 000000000..ebd77497d Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.0-smoother-migration.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-annotations.png b/docs/edge-stack/latest/release-notes/v2.1.2-annotations.png new file mode 100644 index 000000000..b5498c3c1 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-annotations.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-filter-jwtassertion.png b/docs/edge-stack/latest/release-notes/v2.1.2-filter-jwtassertion.png new file mode 100644 index 000000000..da58bdd91 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-filter-jwtassertion.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-host-mapping-matching.png b/docs/edge-stack/latest/release-notes/v2.1.2-host-mapping-matching.png new file mode 100644 index 000000000..1cfba5ede Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-host-mapping-matching.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-mapping-cors.png b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-cors.png new file mode 100644 index 000000000..f76ea01ca Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-cors.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-mapping-less-weighted.png b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-less-weighted.png new file mode 100644 index 000000000..7e299062e Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-less-weighted.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.1.2-mapping-no-rewrite.png b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-no-rewrite.png new file mode 100644 index 000000000..5d3d5a29f Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.1.2-mapping-no-rewrite.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.2.0-cloud.png b/docs/edge-stack/latest/release-notes/v2.2.0-cloud.png new file mode 100644 index 000000000..5923fcb44 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.2.0-cloud.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.2.0-percent-escape.png b/docs/edge-stack/latest/release-notes/v2.2.0-percent-escape.png new file mode 100644 index 000000000..df4d81b94 Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.2.0-percent-escape.png differ diff --git a/docs/edge-stack/latest/release-notes/v2.2.0-tls-cert-validation.png b/docs/edge-stack/latest/release-notes/v2.2.0-tls-cert-validation.png new file mode 100644 index 000000000..f8635b5af Binary files /dev/null and b/docs/edge-stack/latest/release-notes/v2.2.0-tls-cert-validation.png differ diff --git a/docs/edge-stack/latest/releaseNotes.yml b/docs/edge-stack/latest/releaseNotes.yml new file mode 100644 index 000000000..b11e72c9a --- /dev/null +++ b/docs/edge-stack/latest/releaseNotes.yml @@ -0,0 +1,1944 @@ +# -*- fill-column: 100 -*- + +# This file should be placed in the folder for the version of the +# product that's meant to be documented. 
A `/release-notes` page will
+# be automatically generated and populated at build time.
+#
+# Note that an entry needs to be added to the `doc-links.yml` file in
+# order to surface the release notes in the table of contents.
+#
+# The YAML in this file should contain:
+#
+# changelog: An (optional) URL to the CHANGELOG for the product.
+# items: An array of releases with the following attributes:
+#     - version: The (optional) version number of the release, if applicable.
+#     - date: The date of the release in the format YYYY-MM-DD.
+#     - notes: An array of noteworthy changes included in the release, each having the following attributes:
+#         - type: The type of change, one of `bugfix`, `feature`, `security` or `change`.
+#         - title: A short title of the noteworthy change.
+#         - body: >-
+#             Two or three sentences describing the change and why it
+#             is noteworthy. This is HTML, not plain text or
+#             markdown. It is handy to use YAML's ">-" feature to
+#             allow line-wrapping.
+#         - image: >-
+#             The URL of an image that visually represents the
+#             noteworthy change. This path is relative to the
+#             `release-notes` directory; if this file is
+#             `FOO/releaseNotes.yml`, then the image paths are
+#             relative to `FOO/release-notes/`.
+#         - docs: The path to the documentation page where additional information can be found.
+#         - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link.
+
+changelog: https://github.com/datawire/edge-stack/blob/$branch$/CHANGELOG.md
+items:
+  - version: 3.8.1
+    date: '2023-09-18'
+    notes:
+      - title: Upgrade Golang to 1.20.8
+        type: security
+        body: >-
+          Upgrading to the latest release of Golang as part of our general dependency upgrade process. This includes security fixes for CVE-2023-39318, CVE-2023-39319.
+        docs: https://go.dev/doc/devel/release#go1.20.minor
+
+  - version: 3.8.0
+    date: '2023-08-29'
+    notes:
+      - title: Ambassador Edge Stack will fail to run if a valid license is not present
+        type: change
+        body: >-
+          $productName$ will now require a valid, non-expired license to run the product. If a valid license is not present, or your clusters are not connected to and showing licensed in Ambassador Cloud, then $productName$ will refuse to start up. If you already have an enterprise license then you do not need to do anything, so long as it is properly applied and not expired. Please view the license documentation page for more information on your license.
+          If you do not have an enterprise license for $productName$ then you can visit the quickstart guide to get set up with a free community license by signing into Ambassador Cloud and connecting your installation.
+        docs: tutorials/getting-started/
+      - title: Account for matchLabels when associating mappings with the same prefix to different Hosts
+        type: bugfix
+        body: >-
+          As of v2.2.2, if two mappings were associated with different Hosts through host
+          mappingSelector labels but shared the same prefix, the labels were not taken into
+          account, which would cause one Mapping to be correctly routed but the other not.
+
+          This change fixes this issue so that Mappings sharing the same prefix but associated
+          with different Hosts will be correctly routed.
+ docs: https://github.com/emissary-ingress/emissary/issues/4170 + - title: Duplication of values when using multiple Headers/QueryParameters in Mappings + type: bugfix + body: >- + In previous versions, if multiple Headers/QueryParameters were used in a v3alpha1 Mapping, the values would be duplicated, causing all the Headers/QueryParameters to have the same value. This is no longer the case, and the expected values for unique Headers/QueryParameters now apply. + + This issue was only present in v3alpha1 Mappings. For users who may have this issue, please be sure to re-apply any v3alpha1 Mappings in order to update the stored v2 Mapping and resolve the issue. + docs: topics/using/headers/headers + - title: Ambassador Agent no longer collects Envoy metrics + type: change + body: >- + When the Ambassador Agent is being used, it will no longer attempt to collect and report Envoy metrics. In previous versions, $productName$ would always create an Envoy stats sink for the agent as long as the AMBASSADOR_GRPC_METRICS_SINK environment variable was provided. This environment variable was hardcoded in the release manifests; it has now been removed, and an Envoy stats sink for the agent is no longer created. + docs: topics/running/environment#ambassador_grpc_metrics_sink + - version: 3.7.2 + date: '2023-07-25' + notes: + - title: Upgrade to Envoy 1.26.4 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.26.4, which includes fixes for CVE-2023-35942, CVE-2023-35943, and CVE-2023-35944. + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - title: Shipped Helm chart v8.7.2 + type: change + body: >- + - Update default image to $productName$ v3.7.2.
+ docs: https://github.com/datawire/edge-stack/blob/rel/v3.7.2/charts/edge-stack/CHANGELOG.md + + - version: 3.7.1 + date: '2023-07-13' + notes: + - title: Upgrade to Envoy 1.26.3 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.26.3, which includes a fix for CVE-2023-35945. + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - version: 3.7.0 + date: '2023-06-20' + notes: + - title: Configurable Web Application Firewalls + type: feature + body: >- + $productName$ now provides configurable Web Application Firewalls (WAFs) that can be used to add additional security to your services by blocking dangerous requests. They can be configured globally or route by route. We have also published a ready-to-use rule set to get you started; it protects against the OWASP Top 10 vulnerabilities and adheres to PCI 6.6 requirements. The published rule set will be updated and maintained regularly. + docs: howtos/web-application-firewalls/ + + - title: Upgrade to Envoy 1.26.1 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.26.1, which provides security, performance, and feature enhancements. You can read more about them here: Envoy Proxy 1.26.1 Release Notes + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - title: ExternalFilter - Add support for configuring TLS Settings + type: feature + body: >- + The ExternalFilter now supports configuring a CA certificate and/or client certificate via the new tlsConfig attribute. This allows $productName$ to communicate with the configured AuthService using custom TLS certificates signed by a different CA. It also allows the ExternalFilter to originate mTLS and have $productName$ present mTLS client certificates to the AuthService. Custom TLS certificates are provided as Kubernetes Secrets. + docs: topics/using/filters/external/#configuring-tls-settings + + - version: 3.6.0 + date: '2023-04-17' + notes: + - title: Deprecation of insteadOfRedirect.filters argument in FilterPolicy + type: change + body: >- + The insteadOfRedirect.filters field within the OAuth2 path-specific arguments has been deprecated and will be fully removed in a future version of $productName$. Similar behavior can + be accomplished using onDeny=continue and chaining a + fallback Filter to run. + docs: topics/using/filters/oauth2#oauth2-path-specific-arguments + + - title: Upgrade to Envoy 1.25.4 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.25.4, which provides security, performance, and feature enhancements. You can read more about them here: Envoy Proxy 1.25.4 Release Notes + docs: https://www.envoyproxy.io/docs/envoy/v1.25.4/version_history/v1.25/v1.25 + + - title: Shipped Helm chart v8.6.0 + type: change + body: >- + - Update default image to $productName$ v3.6.0.
+ + - Add support for setting nodeSelector, tolerations and affinity on the Ambassador Agent. Thanks to Philip Panyukov.
+ + - Use autoscaling API version based on Kubernetes version. Thanks to Elvind Valderhaug.
+ + - Upgrade KubernetesEndpointResolver & ConsulResolver apiVersions to getambassador.io/v3alpha1 + + docs: https://github.com/emissary-ingress/emissary/blob/master/charts/emissary-ingress/CHANGELOG.md + + - version: 3.5.2 + date: '2023-04-05' + notes: + - title: Upgrade to Envoy 1.24.5 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.24.5. This update includes various security patches including CVE-2023-27487, CVE-2023-27491, CVE-2023-27492, CVE-2023-27493, CVE-2023-27488, and CVE-2023-27496. It also contains the dependency update for c-ares which was patched on top.

+ + One notable item is that upstream header names and values are now validated according to RFC 7230, section 3.2. Users utilizing external filters should check whether their external service is forwarding headers containing forbidden characters. + - title: Upgrade to Golang 1.20.3 + type: security + body: >- + Upgrading to the latest release of Golang as part of our general dependency upgrade process. This includes security fixes for CVE-2023-24537, CVE-2023-24538, CVE-2023-24534, CVE-2023-24536. + + - version: 3.5.1 + date: '2023-02-24' + notes: + - title: Fix regression with ExternalFilter parsing port incorrectly + type: bugfix + body: >- + A regression with parsing the authService field of the ExternalFilter has been fixed. The regression caused the ExternalFilter to fail without sending a request to the service, resulting in a 403 response. + + - title: Shipped Helm chart v8.5.1 + type: bugfix + body: >- + Fix regression where the Module resource fails validation when setting the ambassador_id after upgrading to getambassador.io/v3alpha1.

+ + Thanks to Pier. + + docs: https://github.com/datawire/edge-stack + + - version: 3.5.0 + date: '2023-02-15' + notes: + - title: Upgraded to Golang 1.20.1 + type: security + body: >- + Upgraded to the latest release of Golang as part of our general dependency upgrade process. This includes + security fixes for CVE-2022-41725, CVE-2022-41723. + + - title: TracingService support for native OpenTelemetry driver + type: feature + body: >- + In Envoy 1.24, experimental support was introduced for a native OpenTelemetry tracing driver that allows exporting spans in the otlp format. Many observability platforms accept that format, and it is the recommended replacement for the LightStep driver. $productName$ now supports setting TracingService.spec.driver=opentelemetry to export traces in the otlp format.
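A minimal sketch of opting into the new driver (the collector address is a hypothetical in-cluster service):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
  namespace: ambassador
spec:
  service: otel-collector.monitoring:4317  # hypothetical OTLP gRPC endpoint
  driver: opentelemetry
```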

+ + Thanks to Paul for helping us get this tested and over the finish line! + + - title: Switch to a non-blocking readiness check + type: feature + body: >- + The /ready endpoint used by $productName$ was served by the Envoy admin port (8001 by default). This causes a problem during config reloads with large configs: because the admin thread is blocking, the /ready endpoint could be very slow to answer (on the order of several seconds, or even more).

+ + $productName$ now uses a dedicated Envoy listener that can answer /ready calls from an Envoy worker thread, so the endpoint is always fast and does not suffer from the slowness of the single-threaded admin thread during config reloads and other slow endpoints handled by the admin thread.
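The two environment variables described next control this listener; as a hedged sketch, they might be set on the $productName$ container like so (the values shown are illustrative, not asserted defaults):

```yaml
# Fragment of the $productName$ Deployment's container spec.
env:
  - name: AMBASSADOR_READY_PORT
    value: "8006"   # illustrative listener port for /ready
  - name: AMBASSADOR_READY_LOG
    value: "true"   # enable access logging for /ready calls
```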

+ + Configure the listener port using the AMBASSADOR_READY_PORT environment variable, and enable access logging using the AMBASSADOR_READY_LOG environment variable. + + docs: https://www.getambassador.io/docs/edge-stack/latest/topics/running/environment/ + + - title: Fix envoy config generated when including port in Host.hostname + type: bugfix + body: >- + When wanting to expose traffic to clients on ports other than 80/443, users will set a port in the Host.hostname (e.g. Host.hostname=example.com:8500). The generated config allowed matching on the :authority header. This worked in the v1.Y series due to the way $productName$ generated Envoy configuration under a single wild-card virtual_host and matched on :authority.

+ + In v2.Y/v3.Y+, the way $productName$ generates Envoy configuration changed to address memory pressure and improve route lookup speed in Envoy. However, when including a port in the hostname, an incorrect configuration was generated with an SNI match that included the port. This caused incoming requests to never match, resulting in a 404 Not Found. This has been fixed, and the correct Envoy configuration is now generated, restoring the existing behavior. + + - title: Fix GRPC TLS support with ExternalFilter + type: bugfix + body: >- + Configuring an ExternalFilter to communicate using gRPC with TLS would fail due to $productName$ trying to connect via cleartext. This has been fixed: when ExternalFilter.spec.tls=true is set, $productName$ will talk to the external filter using TLS.
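A minimal sketch of the fixed configuration (the filter service name and port are hypothetical, and the field layout follows the v3alpha1 Filter docs as best we can state it here):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: ext-filter
  namespace: ambassador
spec:
  External:
    auth_service: ext-auth.default:8443  # hypothetical gRPC filter service
    proto: grpc
    tls: true  # now correctly makes $productName$ connect over TLS
```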

+ + If using self-signed certs, see installing self-signed certificates for how to include them in the $productName$ deployment. + + - title: Add support for resolving port names in Ingress resource + type: change + body: >- + Previously, specifying backend ports by name in Ingress was not supported and would result in defaulting to port 80. $productName$ can now resolve port names for backend services. If the port number cannot be resolved from the name (e.g. the named port doesn't exist in the Service), then it will continue to default back + to port 80.
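For example, a backend port referenced by name in a standard Ingress is now resolved (names here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
    - http:
        paths:
          - path: /backend/
            pathType: Prefix
            backend:
              service:
                name: backend-svc   # hypothetical Service
                port:
                  name: http        # named port, resolved to its number
```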

+ + Thanks to Anton Ustyuzhanin! + github: + - title: '#4809' + link: https://github.com/emissary-ingress/emissary/pull/4809 + + - title: Upgraded to Python 3.10 + type: change + body: >- + Upgraded to Python 3.10 as part of continued investment in keeping dependencies updated. + + - title: Upgraded base image to alpine-3.17 + type: change + body: >- + Upgraded base image to alpine-3.17 as part of continued investment in keeping dependencies updated. + + - title: Shipped Helm chart v8.5.0 + type: change + body: >- + - Update default image to $productName$ v3.5.0.
+ + - Add support for configuring startupProbes on the $productName$ deployment.
+ + - Allow setting pod and container security settings on the Ambassador Agent.
+ + - Added new securityContext fields to the Redis and Agent helm charts, allowing users to further manage privilege and access control settings which can be used for tools such as PodSecurityPolicy.
+ + - Added deprecation notice in the values.yaml file for the podSecurityPolicy value, because PodSecurityPolicy support has been removed in Kubernetes 1.25. + + docs: https://github.com/datawire/edge-stack + + - version: 3.4.1 + date: '2023-02-07' + notes: + - title: Upgrade to Envoy 1.24.2 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.24.2. This update addresses the following notable items:

+ + - Updates boringssl to address High CVE-2023-0286
+ - Updates the c-ares dependency to address an issue with CNAME wildcard DNS resolution for upstream clusters

+ + Users running $productName$ with Certificate Revocation Lists who allow external users to provide input should upgrade to ensure they are not vulnerable to CVE-2023-0286. + + - version: 3.4.0 + date: '2023-01-03' + notes: + - title: Upgrade to Envoy 1.24.1 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.24.1. Two notable changes were introduced:

+ + First, the team at LightStep and the Envoy Maintainers have decided to no longer support the native LightStep tracing driver in favor of using the Open Telemetry driver. The code for the native Envoy LightStep driver has been removed from the Envoy code base. This means $productName$ will no longer support LightStep in the TracingService. The recommended upgrade path is to leverage a supported tracing driver such as Zipkin and use the Open Telemetry Collector to collect and forward Observability data to LightStep. A guide for this can be found here: Distributed Tracing with Open Telemetry and LightStep.

+ + Second, a bug was fixed in Envoy 1.24 that changes how the upstream cluster's distributed tracing span is named. Prior to Envoy 1.24 it would always set the span name to cluster.name. The expected behavior from Envoy was to use alt_stat_name if provided, and otherwise fall back to cluster.name. + docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.24/v1.24 + + - title: Re-add support for getambassador.io/v1 + type: feature + body: >- + Support for the getambassador.io/v1 apiVersion has been re-introduced, in order to facilitate smoother migrations from $productName$ 1.y. Previously, in order to make migrations possible, an "unserved" v1 version was declared to Kubernetes, but was unsupported by $productName$. That unserved v1 could + cause an excess of errors to be logged by the Kubernetes Nodes (regardless of whether the installation was migrated from 1.y or was a fresh 2.y install). + + It is still recommended that `v3alpha1` be used, but fully supporting v1 again should resolve these errors. + docs: https://github.com/emissary-ingress/emissary/pull/4055 + + - title: Add support for active health checking configuration. + type: feature + body: >- + It is now possible to configure active health checking for upstreams within a Mapping. If the upstream fails its configured health check, then Envoy will mark the upstream as unhealthy and no longer send traffic to that upstream. Single pods within a group may be marked as unhealthy. The healthy pods will continue to receive + traffic normally while the unhealthy pods will not receive any traffic until they recover by passing the health check. + docs: howtos/active-health-checking/ + + - title: Add environment variables to the healthcheck server. + type: feature + body: >- + The healthcheck server's bind address, bind port and IP family can now be configured using environment variables:

+ + AMBASSADOR_HEALTHCHECK_BIND_ADDRESS: The address to bind the healthcheck server to.

+ + AMBASSADOR_HEALTHCHECK_BIND_PORT: The port to bind the healthcheck server to.

+ + AMBASSADOR_HEALTHCHECK_IP_FAMILY: The IP family to use for the healthcheck server.
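As a hedged sketch (the values, especially the accepted IP-family names, are assumptions to be checked against the environment variable documentation), these might be set on the $productName$ container like so:

```yaml
# Fragment of the $productName$ Deployment's container spec; values are placeholders.
env:
  - name: AMBASSADOR_HEALTHCHECK_BIND_ADDRESS
    value: "::"          # e.g. an IPv6 bind address
  - name: AMBASSADOR_HEALTHCHECK_BIND_PORT
    value: "8877"        # illustrative port
  - name: AMBASSADOR_HEALTHCHECK_IP_FAMILY
    value: "IPV6_ONLY"   # assumed value name
```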

+ + This allows the healthcheck server to be configured for IPv6-only k8s environments. (Thanks to Dmitry Golushko!) + - title: Added metrics for External Filters to the /metrics endpoint + type: feature + body: >- + $productName$ now tracks metrics for External Filters, which include approved/denied responses and the response codes returned, as well as configuration and connection errors. + docs: topics/using/filters/external/#metrics + + - title: Allow setting the OAuth2 client's session max idle time + type: feature + body: >- + When using the OAuth2 Filter, $productName$ creates a new session when submitting requests to the upstream backend server and sets a cookie containing the sessionID. This session has a limited lifetime before it expires or is extended, prompting the user to log back in. + + This session idle length can now be controlled under a new field in the OAuth2 Filter, clientSessionMaxIdle, which controls how long the session will remain active without activity before it is expired. + docs: topics/using/filters/oauth2 + + - title: Updated redis client to improve performance with Redis + type: change + body: >- + We have updated the client library used to communicate with Redis. The new client provides support for better connection handling and sharing and improved overall performance. As part of our update to + the new driver we reduced chattiness with Redis by taking advantage + of the Pipelining and Scripting features of Redis. + + This means the AES_REDIS_EXPERIMENTAL_DRIVER_ENABLED flag is now a no-op and can be safely removed. + + - title: Adopt standalone Ambassador Agent + type: change + body: >- + Previously, the Agent used for communicating with Ambassador Cloud was bundled into $productName$. This tied it to the same release schedule as $productName$ and made it difficult to iterate on its feature set. It has now been extracted into its own repository and has its own release process and schedule. + docs: https://github.com/datawire/ambassador-agent + + - title: Fix Filters not properly caching large jwks responses + type: bugfix + body: >- + In some cases, a Filter would fail to properly cache the response from the jwks endpoint due to the response being too large to cache. This would hurt performance and cause $productName$ to be rate-limited by the IdP. This has been fixed to accommodate IdPs that are configured to support multiple key sets, and thus return a response that is larger than the typical default response from most IdPs. + + - version: 3.3.1 + date: '2022-12-08' + notes: + - title: Update Golang to 1.19.4 + type: security + body: >- + Updated Golang to the latest 1.19.4 patch release, which contains fixes for two CVEs: CVE-2022-41720, CVE-2022-41717. + + CVE-2022-41720 only affects Windows, and $productName$ only ships on Linux. CVE-2022-41717 affects HTTP/2 servers that are exposed to external clients. By default, $productName$ exposes the endpoints for DevPortal, Authentication Service, and RateLimitService via Envoy. Envoy enforces a limit on request header size, which mitigates the vulnerability. + + - version: 3.3.0 + date: '2022-11-02' + notes: + - title: Update Golang to 1.19.2 + type: security + body: >- + Updated Golang to 1.19.2 to address the CVEs: CVE-2022-2879, CVE-2022-2880, CVE-2022-41715. + + - title: Update golang.org/x/net + type: security + body: >- + Updated golang.org/x/net to address the CVE: CVE-2022-27664. + + - title: Update golang.org/x/text + type: security + body: >- + Updated golang.org/x/text to address the CVE: CVE-2022-32149.
+ - title: Update JWT library + type: security + body: >- + Updated our JWT library from https://github.com/dgrijalva/jwt-go to https://github.com/golang-jwt/jwt in order to address spurious complaints about CVE-2020-26160. Edge Stack has never been affected by CVE-2020-26160. + + - title: Fix regression in http to https redirects with AuthService + type: bugfix + body: >- + By default $productName$ adds routes for http to https redirection. When + an AuthService is applied in v2.Y of $productName$, Envoy would skip the + ext_authz call for non-TLS HTTP requests and would perform the https + redirect. In Envoy 1.20+ the behavior has changed: Envoy will + always call the ext_authz filter, and it must be disabled on a per-route + basis. + This new behavior change introduced a regression in v3.0 of + $productName$ when it was upgraded to Envoy 1.22. The http to https + redirection no longer worked when an AuthService was applied. This fix + restores the previous behavior by disabling the ext_authz call on the + https redirect routes. + github: + - title: '#4620' + link: https://github.com/emissary-ingress/emissary/issues/4620 + + - title: Fix regression in host_redirects with AuthService + type: bugfix + body: >- + When an AuthService is applied in v2.Y of $productName$, + Envoy would skip the ext_authz call for all redirect routes and + would perform the redirect. In Envoy 1.20+ the behavior has changed: + Envoy will always call the ext_authz filter, so it must be + disabled on a per-route basis. + This new behavior change introduced a regression in v3.0 of + $productName$ when it was upgraded to Envoy 1.22. The host_redirect + would call an AuthService prior to the redirect if one was applied. This fix + restores the previous behavior by disabling the ext_authz call on the + host_redirect routes. + github: + - title: '#4640' + link: https://github.com/emissary-ingress/emissary/issues/4640 + + - title: Propagate trace headers to http external filter + type: change + body: >- + Previously, tracing headers were not propagated to an ExternalFilter configured with proto: http. Now, adding supported tracing headers (b3, etc...) to the spec.allowed_request_headers will propagate them to the configured service. + docs: topics/using/filters/external/#tracing-header-propagation + github: + - title: '#3078' + link: https://github.com/datawire/apro/issues/3078 + + - version: 3.2.0 + date: '2022-09-27' + notes: + - title: Update Golang to 1.19.1 + type: security + body: >- + Updated Golang to 1.19.1 to address the CVEs: CVE-2022-27664, CVE-2022-32190. + + - title: Add Post Logout Redirect URI support for OAuth2 Filter + type: feature + body: >- + You may now define (on supported IdPs) a postLogoutRedirectURI on your OAuth2 filter. + This will allow you to redirect to a specific URI upon logging out. However, in order to achieve this you must + set your IdP logout URL to https:{{host}}/.ambassador/oauth2/post-logout-redirect. Upon logout + $productName$ will redirect to the custom URI, which will then redirect to the URI you have defined in postLogoutRedirectURI. + docs: topics/using/filters/oauth2 + + - title: Add support for Host resources using secrets from different namespaces + type: feature + body: >- + Previously the Host resource could only use secrets that are in the same namespace as the + Host. The tlsSecret field in the Host has a new subfield namespace that allows + the use of secrets from different namespaces.
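A minimal sketch of the new subfield (names are illustrative):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
  namespace: ambassador
spec:
  hostname: example.com
  tlsSecret:
    name: example-cert
    namespace: certs   # the Secret may now live in a different namespace
```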
+ docs: topics/running/tls/#bring-your-own-certificate + + - title: Allow bypassing of EDS for manual endpoint insertion + type: feature + body: >- + Set AMBASSADOR_EDS_BYPASS to true to bypass EDS handling of endpoints and have endpoints be + inserted into clusters manually. This can help resolve 503 UH errors caused by certificate rotation related to + a delay between EDS and CDS. The default is false. + docs: topics/running/environment/#ambassador_eds_bypass + + - title: Add support for config change batch window before reconfiguring Envoy + type: feature + body: >- + The AMBASSADOR_RECONFIG_MAX_DELAY env var can be optionally set to batch changes for the specified + non-negative window period in seconds before doing an Envoy reconfiguration. The default is "1" if not set. + + - title: Allow setting custom_tags for traces + type: feature + body: >- + It is now possible to set custom_tags in the + TracingService. Trace tags can be set based on + literal values, environment variables, or request headers. The existing tag_headers field is now deprecated. If both tag_headers and custom_tags are set, then tag_headers will be ignored. + (Thanks to Paul!) + docs: topics/running/services/tracing-service/ + + - title: Add failure_mode_deny option to the RateLimitService + type: feature + body: >- + By default, when Envoy is unable to communicate with the configured + RateLimitService, it will allow traffic through. The + RateLimitService resource now exposes the + failure_mode_deny + option. With failure_mode_deny: true set, Envoy will + deny traffic with a 500 when it is unable to communicate with the RateLimitService. + + - title: Change to behavior for associating Hosts with Mappings and Listeners with Hosts + type: change + body: >- + Changes to label matching will change how Hosts are associated with Mappings and how Listeners are associated with Hosts. There was a bug with label + selectors that was causing resources that configure a selector to be incorrectly associated with more resources than intended: + if any single label from the selector was matched, then the resources would be associated. + This has been updated to correctly associate these resources only if all labels required by + their selector are present. This brings the mappingSelector/selector fields in line with how label selectors are used + in Kubernetes. To avoid unexpected behavior after the upgrade, add all labels that Hosts/Listeners have in their + mappingSelector/selector to the Mappings/Hosts you want to associate with them. You can opt out of the new behavior + by setting the environment variable DISABLE_STRICT_LABEL_SELECTORS to "true" (default: "false"). + (Thanks to Filip Herceg and Joe Andaverde!) + docs: topics/running/environment/#disable_strict_label_selectors + + - title: Envoy upgraded to 1.23.0 + type: change + body: >- + The Envoy version included in $productName$ has been upgraded from 1.22 to the latest release, 1.23.0. This provides $productName$ with the latest security patches, performance enhancements, and features offered by the Envoy proxy. + docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.23/v1.23.0 + + - title: Properly convert FilterPolicy and ExternalFilter between CRD versions + type: bugfix + body: >- + Previously, $productName$ would incorrectly include empty fields when converting a FilterPolicy or ExternalFilter between versions.
This would cause undesired state to be persisted in k8s, which would lead to validation issues when trying to kubectl apply the custom resource. This fixes these issues to ensure the correct data is persisted and roundtripped properly between CRD versions. + + - title: Correctly manage cluster names when service names are very long + type: bugfix + body: >- + Distinct services with names that are the same in the first forty characters will no longer be incorrectly mapped to the same cluster. + github: + - title: '#4354' + link: https://github.com/emissary-ingress/emissary/issues/4354 + + - title: Properly populate alt_stats_name for Tracing, Auth and RateLimit Services + type: bugfix + body: >- + Previously, setting the stats_name for the TracingService, RateLimitService + or the AuthService would have no effect because it was not being properly passed to the Envoy cluster + config. This has been fixed, and the alt_stats_name field in the cluster config is now set correctly. + (Thanks to Paul!) + + - title: Diagnostics stats properly handles parsing envoy metrics with colons + type: bugfix + body: >- + If a Host or TLSContext contained a hostname with a :, then using the + diagnostics endpoints ambassador/v0/diagd would throw an error due to the parsing logic not + being able to handle the extra colon. This has been fixed, and $productName$ will not throw an error when parsing + envoy metrics for the diagnostics user interface. + + - title: TCPMappings use correct SNI configuration + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that uses SNI, + instead of using the hostname glob in the TCPMapping, uses the hostname glob + in the Host that the TLS termination configuration comes from. + + - title: TCPMappings configure TLS termination without a Host resource + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that terminates TLS + must have a corresponding Host that it can take the TLS configuration from. + This was semi-intentional, but didn't make much sense. You can now use a + TLSContext without a Host as in $productName$ 1.y releases, or a + Host with or without a TLSContext as in prior 2.y releases. + + - title: TCPMappings and HTTP Hosts can coexist on Listeners that terminate TLS + type: bugfix + body: >- + Prior releases of $productName$ had the arbitrary limitation that a + TCPMapping cannot be used on the same port that HTTP is served on, even if + TLS+SNI would make this possible. $productName$ now allows TCPMappings to be + used on the same Listener port as HTTP Hosts, as long as that + Listener terminates TLS. + + - version: 3.1.0 + date: '2022-08-01' + notes: + - title: Add new Filter to support authenticating API keys + type: feature + body: >- + A new Filter has been added to support validating API keys on incoming requests. The new APIKeyFilter, when applied with a FilterPolicy, will check + whether the incoming request has a valid API key in the request header. $productName$ uses Kubernetes Secrets to look up valid keys for authorizing requests. + docs: topics/using/filters/apikeys + - title: Add support to watch for secrets with API keys + type: feature + body: >- + Emissary-ingress has been taught to watch for APIKey secrets when $productName$ is running and + make them available to be used with the new APIKeyFilter.
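A hedged sketch of pairing the new filter with a FilterPolicy (the APIKey field names follow the apikeys docs as best we can state them here and should be treated as illustrative):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: api-keys
  namespace: ambassador
spec:
  APIKey:
    httpHeader: x-api-key   # header expected to carry the key (assumed field name)
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: api-keys-policy
  namespace: ambassador
spec:
  rules:
    - host: "*"
      path: /backend/
      filters:
        - name: api-keys
```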
+ - title: A new experimental Redis driver for use with the OAuth2 Filter + type: feature + body: >- + A new opt-in feature flag has been added that allows $productName$ to use a new Redis driver when storing state between requests for the OAuth2 Filter. The new driver has better connection pool handling, shares connections, and supports the Redis RESP3 protocol. + + Set AES_REDIS_EXPERIMENTAL_DRIVER_ENABLED=true to enable the experimental feature. Most of the standard Redis configuration fields (e.g. REDIS_*) can be used with the driver. + However, due to the driver's better connection handling, the new driver no longer supports setting REDIS_SURGE_LIMIT_INTERVAL, REDIS_SURGE_LIMIT_AFTER, REDIS_SURGE_POOL_SIZE, REDIS_SURGE_POOL_DRAIN_INTERVAL, and these will be ignored. + + Note: Other $productName$ features such as the RateLimitService will continue to use the current + Redis driver, and in future releases we plan to roll out the new driver for those features as well. + - title: Add support for injecting a valid synthetic RateLimitService + type: change + body: >- + If $productName$ is running, then Emissary-ingress ensures that only a single RateLimitService is active. If a user doesn't provide one or provides an invalid one, then a synthetic RateLimitService will be + injected. If the protocol_version field is not set or is set to an invalid value, it will automatically be upgraded to protocol_version: v3. + + This matches the existing behavior that was introduced in $productName$ v3.0.0 for the AuthService. For new installs a valid RateLimitService will be added, but this + change ensures a smooth upgrade from $productName$ v2.3.Z to v3.Y for users who use the manifest in a GitOps scenario. + - title: Add Agent support for OpenAPI 2 contracts + type: feature + body: >- + The agent is now able to parse API contracts using Swagger 2, and to convert them to OpenAPI 3, making them available for use in the dev portal. + - title: Default YAML enables the diagnostics interface from non-local clients on the admin service port + type: change + body: >- + In the standard published .yaml files, the Module resource enables serving remote client requests to the :8877/ambassador/v0/diag/ endpoint. The associated Helm chart release also now enables it by default. + - title: Add additional pprof endpoints + type: change + body: >- + Add pprof endpoints serving additional profiles, including CPU profiles (/debug/pprof/profile) and tracing (/debug/pprof/trace). Also add additional endpoints serving the running command line (/debug/pprof/cmdline) and program counters (/debug/pprof/symbol) for the sake of completeness. + - title: Correct cookies for mixed HTTP/HTTPS OAuth2 origins + type: bugfix + body: >- + When an OAuth2 filter sets cookies for a protectedOrigin, it should set a cookie's "Secure" flag to true for https:// origins and false for http:// origins. However, for filters with multiple origins, it set the cookie's flag based on the first origin listed in the Filter, rather than the origin that the cookie is actually for. + - title: Correctly handle refresh tokens for OAuth2 filters with multiple origins + type: bugfix + body: >- + When an OAuth2 filter with multiple protectedOrigins needs to adjust the cookies for an active login (which only happens when using a refresh token), it + would erroneously redirect the web browser to the last origin listed, rather than returning to the original URL. This has been fixed.
+ - title: Correctly handle CORS and CORS preflight requests within the OAuth2 Filter known endpoints + type: bugfix + body: >- + Previously, the OAuth2 filter's known endpoints /.ambassador/oauth2/logout and /.ambassador/oauth2/multicookie did not understand CORS or CORS preflight requests, + which would cause the browser to reject the request. This has now been fixed, and these endpoints will attach the appropriate CORS headers to the response. + - title: Fix regression in the agent for the metrics transfer. + type: bugfix + body: >- + A regression was introduced in 2.3.0 causing the agent to miss some of the metrics coming from Emissary-ingress before sending them to Ambassador Cloud. This issue has been resolved to ensure that all the nodes composing the Emissary-ingress cluster are reporting properly. + - title: Handle long cluster names for injected acme-challenge route. + type: bugfix + body: >- + Previously, we would inject an upstream route for acme-challenge that targeted the localhost auth service cluster. This route is injected to make Envoy configuration happy, and the AuthService + that is shipped with $productName$ will handle it properly. However, if the cluster name is longer than 60 characters (due to a long namespace, etc.), then $productName$ will truncate it and make + sure it is unique. When this happens, the name of the cluster assigned to the acme-challenge route would get out of sync and would introduce invalid Envoy configuration. + + To avoid this, $productName$ will now inject a route that returns a direct 404 response rather than pointing at an arbitrary cluster. This matches existing behavior and is a transparent + change to the user. + + - title: Update Golang to 1.17.12 + type: security + body: >- + Updated Golang to 1.17.12 to address the CVEs: CVE-2022-23806, CVE-2022-28327, CVE-2022-24675, + CVE-2022-24921, CVE-2022-23772. + + - title: Update Curl to 7.80.0-r2 + type: security + body: >- + Updated Curl to 7.80.0-r2 to address the CVEs: CVE-2022-32207, CVE-2022-27782, CVE-2022-27781, + CVE-2022-27780. + + - title: Update openSSL-dev to 1.1.1q-r0 + type: security + body: >- + Updated openSSL-dev to 1.1.1q-r0 to address CVE-2022-2097. + + - title: Update ncurses to 1.1.1q-r0 + type: security + body: >- + Updated ncurses to 1.1.1q-r0 to address CVE-2022-29458 + + - title: Upgrade jwt-go + type: security + body: >- + Upgrade jwt-go to latest commit to resolve CVE-2020-26160. + + - version: 3.0.0 + date: '2022-06-28' + notes: + - title: Upgrade to Envoy 1.22 + type: change + body: >- + $productName$ has been upgraded to Envoy 1.22, which provides security, performance and feature enhancements. You can read more about them here: Envoy Proxy 1.22.0 Release Notes + + This is a major jump in Envoy versions from 1.17 in Edge Stack 2.X. Most of the changes are under the hood and allow $productName$ to adopt new features in the future. However, one major change that will affect users is the removal of V2 Transport Protocol support. You can find a transition guide here: + + - title: Envoy V2 xDS Transport Protocol Support Removed + type: change + body: >- + Envoy removed support for the V2 xDS Transport Protocol, which means $productName$ now only supports the Envoy V3 xDS Transport Protocol. + + Users should first upgrade to $productName$ 2.3 to ensure that LogServices and External Filters are working properly by setting protocol_version: "v3".
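A minimal sketch of pinning an External Filter to the v3 transport protocol before the upgrade (the service name is hypothetical):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: ext-filter
  namespace: ambassador
spec:
  External:
    auth_service: ext-auth.default:5000  # hypothetical gRPC filter service
    proto: grpc
    protocol_version: v3                 # must be explicit before moving to 3.Y
```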
+ + If protocol_version is not specified in 3.Y, the default value of v2 will cause an error to be posted and a static response will be returned. Therefore, you must set it to protocol_version: v3. If upgrading from a previous version, you will want to set it to v3 and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for protocol_version remains v2 in the getambassador.io/v3alpha1 CRD specifications to avoid making breaking changes outside of a CRD version change. Future versions of the CRDs will deprecate it. + docs: topics/using/filters/external + + - title: Initial HTTP/3 Downstream Support + type: feature + body: >- + With the upgrade to Envoy, $productName$ is now able to provide downstream support for HTTP/3. The initial implementation supports exposing HTTP/3 endpoints on port `443`. Future versions of $productName$ will seek to provide additional configuration to support more scenarios. + + HTTP/3 uses the QUIC protocol over UDP. Changes to your cloud-provider-provisioned Load Balancer will be required to support UDP traffic if using HTTP/3. External Load Balancers must serve traffic on port 443 because the alt-svc header is not configurable in the initial release of the feature. + docs: topics/running/http3 + + - version: 2.5.1 + date: '2022-12-08' + notes: + - title: Update Golang to 1.19.4 + type: security + body: >- + Updated Golang to the latest 1.19.4 patch release, which contains fixes for two CVEs: CVE-2022-41720, CVE-2022-41717. + + CVE-2022-41720 only affects Windows, and $productName$ only ships on Linux. CVE-2022-41717 affects HTTP/2 servers that are exposed to external clients. By default, $productName$ exposes the endpoints for DevPortal, Authentication Service, and RateLimitService via Envoy. Envoy enforces a limit on request header size, which mitigates the vulnerability. + + - version: 2.5.0 + date: '2022-11-03' + notes: + - title: Propagate trace headers to http external filter + type: change + body: >- + Previously, tracing headers were not propagated to an ExternalFilter configured with + `proto: http`. Now, adding supported tracing headers (b3, etc...) to the + `spec.allowed_request_headers` will propagate them to the configured service. + github: + - title: '#3078' + link: https://github.com/datawire/apro/issues/3078 + + - title: Diagnostics stats properly handles parsing envoy metrics with colons + type: bugfix + body: >- + If a Host or TLSContext contained a hostname with a :, then using the + diagnostics endpoints ambassador/v0/diagd would throw an error due to the parsing logic not + being able to handle the extra colon. This has been fixed, and $productName$ will not throw an error when parsing + envoy metrics for the diagnostics user interface. + + - title: Bump Golang to 1.19.2 + type: security + body: >- + Bump Go from 1.17.12 to 1.19.2. This is to keep the Go version current. + + - version: 2.4.2 + date: '2022-10-10' + notes: + - title: Diagnostics stats properly handles parsing envoy metrics with colons + type: bugfix + body: >- + If a Host or TLSContext contained a hostname with a :, then using the diagnostics endpoints ambassador/v0/diagd would throw an error due to the parsing logic not being able to handle the extra colon. This has been fixed, and $productName$ will not throw an error when parsing envoy metrics for the diagnostics user interface.
+ + - title: Backport fixes for handling synthetic auth services + type: bugfix + body: >- + The synthetic AuthService didn't correctly handle AmbassadorID, which was fixed in version 3.1 of $productName$. The fix has been backported to make sure the AuthService is handled correctly during upgrades. + + - version: 2.4.1 + date: '2022-09-27' + notes: + - title: Addressing release issue with 2.4.0 + type: bugfix + body: >- + During the $productName$ 2.4.0 release process there was an issue with the Emissary binary. This has been patched and resolved. + + - version: 2.4.0 + date: '2022-09-19' + notes: + - title: Add support for Host resources using secrets from different namespaces + type: feature + body: >- + Previously the Host resource could only use secrets that are in the same namespace as the + Host. The tlsSecret field in the Host has a new subfield namespace that will allow + the use of secrets from different namespaces. + docs: topics/running/tls/#bring-your-own-certificate + + - title: Allow bypassing of EDS for manual endpoint insertion + type: change + body: >- + Set AMBASSADOR_EDS_BYPASS to true to bypass EDS handling of endpoints and have endpoints be + inserted into clusters manually. This can help resolve 503 UH errors caused by certificate rotation related to + a delay between EDS and CDS. The default is false. + docs: topics/running/environment/#ambassador_eds_bypass + + - title: Properly convert FilterPolicy and ExternalFilter between CRD versions + type: bugfix + body: >- + Previously, $productName$ would incorrectly include empty fields when converting a FilterPolicy or ExternalFilter between versions. This would cause undesired state to be persisted in k8s, which would lead to validation issues when trying to kubectl apply the custom resource. This fixes these issues to ensure the correct data is persisted and roundtripped properly between CRD versions. + + - title: Properly populate alt_stats_name for Tracing, Auth and RateLimit Services + type: bugfix + body: >- + Previously, setting the stats_name for the TracingService, RateLimitService + or the AuthService would have no effect because it was not being properly passed to the Envoy cluster + config. This has been fixed, and the alt_stats_name field in the cluster config is now set correctly. + (Thanks to Paul!) + + - title: Diagnostics stats properly handles parsing envoy metrics with colons + type: bugfix + body: >- + If a Host or TLSContext contained a hostname with a :, then using the + diagnostics endpoints ambassador/v0/diagd would throw an error due to the parsing logic not + being able to handle the extra colon. This has been fixed, and $productName$ will not throw an error when parsing + envoy metrics for the diagnostics user interface. + + - title: TCPMappings use correct SNI configuration + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that uses SNI, + instead of using the hostname glob in the TCPMapping, uses the hostname glob + in the Host that the TLS termination configuration comes from. + + - title: TCPMappings configure TLS termination without a Host resource + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that terminates TLS + must have a corresponding Host that it can take the TLS configuration from. + This was semi-intentional, but didn't make much sense. You can now use a + TLSContext without a Host as in $productName$ 1.y releases, or a + Host with or without a TLSContext as in prior 2.y releases.
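A minimal sketch of the restored 1.y-style pairing, a TLS-terminating TCPMapping taking its TLS configuration from a TLSContext with no Host involved (names and ports are illustrative):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: tcp-tls
spec:
  hosts: ["tcp.example.com"]
  secret: tcp-example-cert
---
apiVersion: getambassador.io/v3alpha1
kind: TCPMapping
metadata:
  name: tcp-mapping
spec:
  port: 2222                 # listener port for this TCP route
  host: tcp.example.com      # SNI glob; TLS termination comes from the TLSContext
  service: tcp-backend:9999
```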
+ + - title: TCPMappings and HTTP Hosts can coexist on Listeners that terminate TLS + type: bugfix + body: >- + Prior releases of $productName$ had the arbitrary limitation that a + TCPMapping cannot be used on the same port that HTTP is served on, even if + TLS+SNI would make this possible. $productName$ now allows TCPMappings to be + used on the same Listener port as HTTP Hosts, as long as that + Listener terminates TLS. + + - version: 2.3.2 + date: '2022-08-01' + notes: + - title: Correct cookies for mixed HTTP/HTTPS OAuth2 origins + type: bugfix + body: >- + When an OAuth2 filter sets cookies for a protectedOrigin, it + should set a cookie's "Secure" flag to true for https:// origins and false + for http:// origins. However, for filters with multiple origins, it set the + cookie's flag based on the first origin listed in the Filter, rather than the origin that + the cookie is actually for. + + - title: Correctly handle refresh tokens for OAuth2 filters with multiple origins + type: bugfix + body: >- + When an OAuth2 filter with multiple protectedOrigins needs to + adjust the cookies for an active login (which only happens when using a refresh token), it + would erroneously redirect the web browser to the last origin listed, rather than + returning to the original URL. This has been fixed. + + - title: Correctly handle CORS and CORS preflight requests within the OAuth2 Filter known endpoints + type: bugfix + body: >- + Previously, the OAuth2 filter's known endpoints /.ambassador/oauth2/logout + and /.ambassador/oauth2/multicookie did not understand CORS or CORS preflight requests, + which would cause the browser to reject the request. This has now been fixed, and these endpoints will + attach the appropriate CORS headers to the response. + + - title: Fix regression in the agent for the metrics transfer. + type: bugfix + body: >- + A regression was introduced in 2.3.0 causing the agent to miss some of the metrics coming from + Emissary-ingress before sending them to Ambassador Cloud. This issue has been resolved to ensure + that all the nodes composing the Emissary-ingress cluster are reporting properly. + + - title: Update Golang to 1.17.12 + type: security + body: >- + Updated Golang to 1.17.12 to address the CVEs: CVE-2022-23806, CVE-2022-28327, CVE-2022-24675, + CVE-2022-24921, CVE-2022-23772. + + - title: Update Curl to 7.80.0-r2 + type: security + body: >- + Updated Curl to 7.80.0-r2 to address the CVEs: CVE-2022-32207, CVE-2022-27782, CVE-2022-27781, + CVE-2022-27780. + + - title: Update openSSL-dev to 1.1.1q-r0 + type: security + body: >- + Updated openSSL-dev to 1.1.1q-r0 to address CVE-2022-2097. + + - title: Update ncurses to 1.1.1q-r0 + type: security + body: >- + Updated ncurses to 1.1.1q-r0 to address CVE-2022-29458 + + - title: Upgrade jwt-go + type: security + body: >- + Upgrade jwt-go to latest commit to resolve CVE-2020-26160. + + - version: 2.3.1 + date: '2022-06-10' + notes: + - title: Fix regression in tracing service config + type: bugfix + body: >- + A regression was introduced in 2.3.0 that leaked zipkin default config fields into the configuration + for the other drivers (lightstep, etc...). This caused $productName$ to crash on startup.
This issue has been resolved + to ensure that the defaults are only applied when the driver is zipkin. + docs: https://github.com/emissary-ingress/emissary/issues/4267 + + - title: Envoy security updates + type: security + body: >- + We have backported patches from the Envoy 1.19.5 security update to $productName$'s + 1.17-based Envoy, addressing CVE-2022-29224 and CVE-2022-29225. $productName$ is not + affected by CVE-2022-29226, CVE-2022-29227, or CVE-2022-29228, as it does not support internal + redirects and does not use Envoy's built-in OAuth2 filter. + docs: https://groups.google.com/g/envoy-announce/c/8nP3Kn4jV7k + + - version: 2.3.0 + date: '2022-06-06' + notes: + - title: Remove unused packages + type: security + body: >- + Completely remove gdbm, pip, smtplib, and sqlite packages, as they are unused. + + - title: CORS now happens before auth + type: bugfix + body: >- + When CORS is specified (either in a Mapping or in the Ambassador + Module), CORS processing will happen before authentication. This corrects a + problem where XHR to authenticated endpoints would fail. + + - title: Correctly handle caching of Mappings with the same name in different namespaces + type: bugfix + body: >- + In 2.x releases of $productName$, when there were multiple Mappings that had the same + metadata.name across multiple namespaces, their old config would not properly be removed + from the cache when their config was updated. This resulted in an inability to update configuration + for groups of Mappings that share the same name until the $productName$ pods restarted. + + - title: Fix support for Zipkin API-v1 with Envoy xDS-v3 + type: bugfix + body: >- + It is now possible for a TracingService to specify + collector_endpoint_version: HTTP_JSON_V1 when using xDS v3 to configure Envoy + (which has been the default since $productName$ 1.14.0). The HTTP_JSON_V1 + value configures Envoy to speak to Zipkin using Zipkin's old API-v1, while the + HTTP_JSON value configures Envoy to speak to Zipkin using Zipkin's new + API-v2. In previous versions of $productName$ it was only possible to use + HTTP_JSON_V1 when explicitly setting the + AMBASSADOR_ENVOY_API_VERSION=V2 environment variable to force use of xDS v2 + to configure Envoy. + docs: topics/running/services/tracing-service/ + + - title: Added Support for transport protocol v3 in External Filters + type: feature + body: >- + External Filters can now make use of the v3 transport protocol. In addition to the support for the v3 transport protocol, the default AuthService installed with $productName$ will now only operate with transport protocol v3. In order to support existing External Filters using v2, $productName$ will automatically translate + v2 to the new default of v3. Any External Filter will be assumed to be using transport protocol v2, and will use the automatic conversion to v3, unless the new protocol_version field on the External Filter is explicitly set to v3. + docs: topics/using/filters/external + + - title: Allow setting propagation modes for Lightstep tracing + type: feature + body: >- + It is now possible to set propagation_modes in the + TracingService config when using lightstep as the driver. + (Thanks to Paul!) + docs: topics/running/services/tracing-service/ + github: + - title: '#4179' + link: https://github.com/emissary-ingress/emissary/pull/4179 + + - title: Added Support for Certificate Revocation Lists + type: feature + body: >- + $productName$ now supports Envoy's Certificate Revocation Lists.
+ This allows users to specify a list of certificates that $productName$ should reject even if the certificate itself is otherwise valid. + docs: topics/running/tls + + - title: Added support for the LogService v3 transport protocol + type: feature + body: >- + Previously, a LogService would always have $productName$ communicate with the + external log service using the envoy.service.accesslog.v2.AccessLogService + API. It is now possible for the LogService to specify + protocol_version: v3 to use the newer + envoy.service.accesslog.v3.AccessLogService API instead. This functionality + is not available if you set the AMBASSADOR_ENVOY_API_VERSION=V2 environment + variable. + docs: topics/running/services/log-service/ + + - title: Improved performance processing OAuth2 Filters + type: change + body: >- + When each OAuth2 Filter that references a Kubernetes secret is loaded, $productName$ previously needed to communicate with the API server to request and validate that secret before loading the next Filter. To improve performance, $productName$ will now load and validate all secrets required by OAuth2 Filters at once, prior to loading the filters. + + - title: Deprecated v2 transport protocol for External Filters and AuthServices + type: change + body: >- + A future release of $productName$ will remove support for the now deprecated v2 transport protocol in both AuthServices as well as External Filters. Migrating existing External Filters from v2 to v3 + is simple, and an example can be found on the External Filter page. This change only impacts gRPC External Filters. HTTP External Filters are unaffected by this change. + docs: topics/using/filters/external + + - version: 2.2.2 + date: '2022-02-25' + notes: + - title: TLS Secret validation is now opt-in + type: change + body: >- + You may now choose to enable TLS Secret validation by setting the + AMBASSADOR_FORCE_SECRET_VALIDATION=true environment variable. The default configuration does not + enforce secret validation. + docs: topics/running/tls#certificates-and-secrets + + - title: Correctly validate EC (Elliptic Curve) Private Keys + type: bugfix + body: >- + Kubernetes Secrets that should contain an EC (Elliptic Curve) TLS Private Key are now properly validated. + github: + - title: '#4134' + link: https://github.com/emissary-ingress/emissary/issues/4134 + docs: topics/running/tls#certificates-and-secrets + + - version: 2.2.1 + date: '2022-02-22' + notes: + - title: Envoy V2 API deprecation + type: change + body: >- + Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$ + v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same + time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0). + + - title: Envoy security updates + type: security + body: >- + Upgraded Envoy to address security vulnerabilities CVE-2021-43824, CVE-2021-43825, CVE-2021-43826, + CVE-2022-21654, and CVE-2022-21655. + docs: https://groups.google.com/g/envoy-announce/c/bIUgEDKHl4g + - title: Correctly support canceling rollouts + type: bugfix + body: >- + The Ambassador Agent now correctly supports requests to cancel a rollout. + docs: ../../argo/latest/howtos/manage-rollouts-using-cloud + + - version: 2.2.0 + date: '2022-02-10' + notes: + - title: Envoy V2 API deprecation + type: change + body: >- + Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$ + v3.0.
The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same + time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0). + + - title: Ambassador Edge Stack will watch for Cloud Connect Tokens + type: change + body: >- + $productName$ will now watch for ConfigMap or Secret resources specified by the + AGENT_CONFIG_RESOURCE_NAME environment variable in order to allow all + components (and not only the Ambassador Agent) to authenticate requests to + Ambassador Cloud. + image: ./v2.2.0-cloud.png + + - title: Update Alpine and libraries + type: security + body: >- + $productName$ has updated Alpine to 3.15, and Python and Go dependencies + to their latest compatible versions, to incorporate numerous security patches. + + - title: Support a log-level metric + type: feature + body: >- + $productName$ now supports the metric ambassador_log_level{label="debug"} + which will be set to 1 if debug logging is enabled for the running Emissary + instance, or to 0 if not. This can help to be sure that a running production + instance was not actually left doing debugging logging, for example. + (Thanks to Fabrice!) + github: + - title: '#3906' + link: https://github.com/emissary-ingress/emissary/issues/3906 + docs: topics/running/statistics/8877-metrics/ + + - title: Envoy configuration % escaping + type: feature + body: >- + $productName$ is now leveraging a new Envoy Proxy patch that allows Envoy to accept escaped + '%' characters in its configuration. This means that error_response_overrides and other + custom user content can now contain '%' symbols escaped as '%%'. + docs: topics/running/custom-error-responses + github: + - title: 'DW Envoy: 74' + link: https://github.com/datawire/envoy/pull/74 + - title: 'Upstream Envoy: 19383' + link: https://github.com/envoyproxy/envoy/pull/19383 + image: ./v2.2.0-percent-escape.png + + - title: Stream metrics from Envoy to Ambassador Cloud + type: feature + body: >- + Support for streaming Envoy metrics about the clusters to Ambassador Cloud. + github: + - title: '#4053' + link: https://github.com/emissary-ingress/emissary/pull/4053 + docs: https://github.com/emissary-ingress/emissary/pull/4053 + + - title: Support received commands to pause, continue and abort a Rollout via Agent directives + type: feature + body: >- + The Ambassador agent now receives commands to manipulate Rollouts (pause, continue, and + abort are currently supported) via directives and executes them in the cluster. A report + is sent to Ambassador Cloud including the command ID, whether it ran successfully, and + an error message in case there was any. + github: + - title: '#4040' + link: https://github.com/emissary-ingress/emissary/pull/4040 + docs: https://github.com/emissary-ingress/emissary/pull/4040 + + - title: Validate certificates in TLS Secrets + type: bugfix + body: >- + Kubernetes Secrets that should contain TLS certificates are now validated before being + accepted for configuration. A Secret that contains an invalid TLS certificate will be logged + as an invalid resource. 
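To make the '%' escaping described above concrete, here is a minimal sketch of an error response override containing a literal percent sign (the service and message are illustrative):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: error-override
spec:
  prefix: /backend/
  service: backend-svc   # hypothetical upstream
  error_response_overrides:
    - on_status_code: 404
      body:
        text_format: "100%% of our team is looking into this"  # '%%' renders as '%'
```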
+ github: + - title: '#3821' + link: https://github.com/emissary-ingress/emissary/issues/3821 + docs: ../topics/running/tls + image: ./v2.2.0-tls-cert-validation.png + + - title: Devportal support for using API server definitions from OpenAPI docs + type: feature + body: >- + You can now set preserve_servers in Ambassador Edge Stack's + DevPortal resource to configure the DevPortal to use server definitions from + the OpenAPI document when displaying connection information for services in the DevPortal. + docs: topics/using/dev-portal/ + + - version: 2.1.2 + date: '2022-01-25' + notes: + - title: Envoy V2 API deprecation + type: change + body: >- + Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$ + v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same + time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0). + + - title: Docker BuildKit always used for builds + type: change + body: >- + Docker BuildKit is enabled for all Emissary builds. Additionally, the Go + build cache is fully enabled when building images, speeding up repeated builds. + docs: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md + + - title: Fix OAuth2 Filter jwtAssertion + type: bugfix + body: >- + In $productName$ 2.1.0 and 2.1.1, an OAuth2 Filter with + clientAuthentication.method=jwtAssertion would not function correctly as it + would fail to select the signing-method-appropriate function to parse the private key. + docs: topics/using/filters/oauth2 + image: ./v2.1.2-filter-jwtassertion.png + + - title: Fix ifRequestHeader without a value + type: bugfix + body: >- + In $productName$ 2.1.0 and 2.1.1, an ifRequestHeader selector (in a + FilterPolicy, OAuth2 Filter useSessionCookies, or OAuth2 Filter + insteadOfRedirect) without a value or valueRegex + would erroneously behave as if valueRegex='^$', rather than performing a + simple presence check. + docs: topics/using/filters/#filterpolicy-definition + + - title: Fix support for v2 Mappings with CORS + type: bugfix + body: >- + Ambassador Edge Stack 2.1.1 generated invalid Envoy configuration for + getambassador.io/v2 Mappings that set + spec.cors.origins to a string rather than a list of strings; this has been + fixed, and these Mappings should once again function correctly. + docs: topics/using/cors/#the-cors-attribute + image: ./v2.1.2-mapping-cors.png + + - title: Correctly handle canary Mapping weights when reconfiguring + type: bugfix + body: >- + Changes to the weight of a Mapping in a canary group + will now always be correctly managed during reconfiguration; such changes could + have been missed in earlier releases. + docs: topics/using/canary/#the-weight-attribute + + - title: Correctly handle solitary Mappings with explicit weights + type: bugfix + body: >- + A Mapping that is not part of a canary group, but that has a + weight less than 100, will be correctly configured to receive all + traffic as if the weight were 100. + docs: topics/using/canary/#the-weight-attribute + image: ./v2.1.2-mapping-less-weighted.png + + - title: Correctly handle empty rewrite in a Mapping + type: bugfix + body: >- + Using rewrite: "" in a Mapping is correctly handled + to mean "do not rewrite the path at all".
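A minimal sketch (names illustrative):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: no-rewrite
spec:
  prefix: /backend/
  rewrite: ""            # pass the original path through unchanged
  service: backend-svc   # hypothetical upstream
```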
+ docs: topics/using/rewrites
+ image: ./v2.1.2-mapping-no-rewrite.png
+
+ - title: Correctly use Mappings with host redirects
+ type: bugfix
+ body: >-
+ Any Mapping that uses the host_redirect field is now properly discovered and used. Thanks
+ to Gabriel Féron for contributing this bugfix!
+ github:
+ - title: '#3709'
+ link: https://github.com/emissary-ingress/emissary/issues/3709
+ docs: https://github.com/emissary-ingress/emissary/issues/3709
+
+ - title: Correctly handle DNS wildcards when associating Hosts and Mappings
+ type: bugfix
+ body: >-
+ Mappings with a DNS wildcard hostname will now be correctly
+ matched with Hosts. Previously, a Host and a Mapping that both
+ used DNS wildcards in their hostnames could sometimes fail to match
+ when they should have.
+ docs: howtos/configure-communications/
+ image: ./v2.1.2-host-mapping-matching.png
+
+ - title: Fix overriding global settings for adding or removing headers
+ type: bugfix
+ body: >-
+ If the ambassador Module sets a global default for
+ add_request_headers, add_response_headers,
+ remove_request_headers, or remove_response_headers, it is often
+ desirable to be able to turn off that setting locally for a specific Mapping.
+ For several releases this has not been possible for Mappings that are native
+ Kubernetes resources (as opposed to annotations), as an empty value ("mask the global
+ default") was erroneously considered to be equivalent to unset ("inherit the global
+ default"). This is now fixed.
+ docs: topics/using/defaults/
+
+ - title: Fix empty error_response_override bodies
+ type: bugfix
+ body: >-
+ It is now possible to set a Mapping
+ spec.error_response_overrides body.text_format to an empty
+ string or body.json_format to an empty dict. Previously, this was possible
+ for annotations but not for native Kubernetes resources.
+ docs: topics/running/custom-error-responses/
+
+ - title: Annotation conversion and validation
+ type: bugfix
+ body: >-
+ Resources that exist as getambassador.io/config annotations rather than as
+ native Kubernetes resources are now validated and internally converted to v3alpha1,
+ the same as native Kubernetes resources.
+ image: ./v2.1.2-annotations.png
+
+ - title: Validation error reporting
+ type: bugfix
+ body: >-
+ Resource validation errors are now reported more consistently; previously,
+ some validation errors went unreported in certain situations.
+
+ - version: 2.1.1
+ date: '2022-01-14'
+ notes:
+ - title: Not recommended; upgrade to 2.1.2 instead
+ type: change
+ isHeadline: true
+ body: >-
+ Ambassador Edge Stack 2.1.1 is not recommended; upgrade to 2.1.2 instead.
+
+ - title: Envoy V2 API deprecation
+ type: change
+ body: >-
+ Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$
+ v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same
+ time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0).
+
+ - title: Fix discovery of Filters, FilterPolicies, and RateLimits
+ type: bugfix
+ body: >-
+ Edge Stack 2.1.0 erroneously ignored Filters,
+ FilterPolicies, and RateLimits that were created as
+ v3alpha1 (though it correctly recognized them if they were created as
+ v2 or older). This is fixed; both API versions are now correctly recognized.
+ github: + - title: '#3982' + link: https://github.com/emissary-ingress/emissary/issues/3982 + + - version: 2.1.0 + date: '2021-12-16' + notes: + - title: Not recommended; upgrade to 2.1.2 instead + type: change + isHeadline: true + body: >- + Ambassador Edge Stack 2.1.0 is not recommended; upgrade to 2.1.2 instead. + + - title: Envoy V2 API deprecation + type: change + body: >- + Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$ + v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same + time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0). + + - title: Smoother migrations with support for getambassador.io/v2 CRDs + type: feature + body: >- + $productName$ supports getambassador.io/v2 CRDs, to simplify migration from $productName$ + 1.X. Note: it is important to read the migration + documentation before starting migration. + docs: topics/install/migration-matrix + image: ./v2.1.0-smoother-migration.png + + - title: Ambassador Edge Stack CRDs are fully validated + type: change + body: >- + The $productName$ CRDs (Filter, FilterPolicy, and RateLimit) + will now be validated for correct syntax by Kubernetes itself. This means that kubectl apply + will reject invalid CRDs before they are actually applied, preventing them from causing errors. + image: ./v2.1.0-edge-stack-validation.png + + - title: Correctly handle all changing canary configurations + type: bugfix + body: >- + The incremental reconfiguration cache could miss some updates when multiple + Mappings had the same prefix ("canary"ing multiple + Mappings together). This has been corrected, so that all such + updates correctly take effect. + github: + - title: '#3945' + link: https://github.com/emissary-ingress/emissary/issues/3945 + docs: https://github.com/emissary-ingress/emissary/issues/3945 + image: ./v2.1.0-canary.png + + - title: Secrets used for ACME private keys will not log errors + type: bugfix + body: >- + When using Kubernetes Secrets to store ACME private keys (as the Edge Stack + ACME client does), an error would always be logged about the Secret not being + present, even though it was present, and everything was working correctly. + This error is no longer logged. + + - title: When using gzip, upstreams will no longer receive encoded data + type: bugfix + body: >- + When using gzip compression, upstream services will no longer receive compressed + data. This bug was introduced in 1.14.0. The fix restores the default behavior of + not sending compressed data to upstream services. + github: + - title: '#3818' + link: https://github.com/emissary-ingress/emissary/issues/3818 + docs: https://github.com/emissary-ingress/emissary/issues/3818 + image: ./v2.1.0-gzip-enabled.png + + - title: Update to busybox 1.34.1 + type: security + body: >- + Update to busybox 1.34.1 to resolve CVE-2021-28831, CVE-2021-42378, + CVE-2021-42379, CVE-2021-42380, CVE-2021-42381, CVE-2021-42382, CVE-2021-42383, + CVE-2021-42384, CVE-2021-42385, and CVE-2021-42386. + + - title: Update Python dependencies + type: security + body: >- + Update Python dependencies to resolve CVE-2020-28493 (jinja2), CVE-2021-28363 + (urllib3), and CVE-2021-33503 (urllib3). + + - title: Remove test-only code from the built image + type: security + body: >- + Previous built images included some Python packages used only for test. These + have now been removed, resolving CVE-2020-29651. 
+
+ - version: 2.0.5
+ date: '2021-11-09'
+ notes:
+ - title: More aggressive HTTP cache behavior
+ type: change
+ body: >-
+ When Ambassador Edge Stack makes a cacheable internal request (such as fetching the JWKS
+ endpoint for a JWT Filter), if a cache miss occurs but a request
+ for that resource is already in flight, then instead of performing a second request in
+ parallel, it will now wait for the first request to finish and (if the response is
+ cacheable) use that response. This avoids the situation where a cache entry
+ expiring during a moment with a high number of concurrent requests causes Edge Stack
+ to create a deluge of concurrent requests to the resource when one ought to have sufficed;
+ this allows the result to be returned more quickly while putting less load on the remote
+ resource. However, if the response turns out to be non-cacheable, then this does effectively
+ serialize requests, increasing the latency for concurrent requests, and the second
+ request will still be made.
+ image: ./v2.0.5-cache-change.png
+
+ - title: AuthService circuit breakers
+ type: feature
+ body: >-
+ It is now possible to set the circuit_breakers for AuthServices,
+ exactly the same as for Mappings and TCPMappings. This makes it
+ possible to configure your AuthService to be able to handle more than 1024
+ concurrent requests.
+ docs: topics/running/services/auth-service/
+ image: ./v2.0.5-auth-circuit-breaker.png
+
+ - title: More accurate durations in the logs
+ type: bugfix
+ body: >-
+ When Ambassador Edge Stack completes an internal request (such as fetching the JWKS
+ endpoint for a JWT Filter) it logs (at the info log
+ level) how long the request took. Previously, the duration logged was how long it took to
+ receive the response header, and did not count the time it took to receive the entire
+ response body; now the entire request is properly timed. Additionally, it now separately
+ logs the "total duration" and the "networking duration", in order to make it possible to
+ identify when a request was delayed waiting for other requests to finish.
+
+ - title: Improved validity checking for error response overrides
+ type: bugfix
+ body: >-
+ Any token delimited by '%' is now validated against a whitelist of valid
+ Envoy command operators. Any Mapping containing an error_response_overrides
+ section with invalid command operators will be discarded.
+ docs: topics/running/custom-error-responses
+
+ - title: mappingSelector is now correctly supported in the Host CRD
+ type: bugfix
+ body: >-
+ The Host CRD now correctly supports the mappingSelector
+ element, as documented. As a transition aid, selector is a synonym for
+ mappingSelector; a future version of $productName$ will remove the
+ selector element.
+ github:
+ - title: '#3902'
+ link: https://github.com/emissary-ingress/emissary/issues/3902
+ docs: https://github.com/emissary-ingress/emissary/issues/3902
+ image: ./v2.0.5-mappingselector.png
+
+ - version: 2.0.4
+ date: '2021-10-19'
+ notes:
+ - title: General availability!
+ type: feature
+ body: >-
+ We're pleased to introduce $productName$ 2.0.4 for general availability! The
+ 2.X family introduces a number of changes to allow $productName$ to more
+ gracefully handle larger installations, reduce global configuration to better
+ handle multitenant or multiorganizational installations, reduce memory footprint, and
+ improve performance. We welcome feedback!! Join us on
+ Slack and let us know what you think.
isHeadline: true
+ docs: about/changes-2.x
+ image: ./edge-stack-GA.png
+
+ - title: API version getambassador.io/v3alpha1
+ type: change
+ body: >-
+ The x.getambassador.io/v3alpha1 API version has become the
+ getambassador.io/v3alpha1 API version. The Ambassador-
+ prefixes from x.getambassador.io/v3alpha1 resources have been
+ removed for ease of migration. Note that getambassador.io/v3alpha1
+ is the only supported API version for 2.0.4 — full support for
+ getambassador.io/v2 will arrive soon in a later 2.X version.
+ docs: about/changes-2.x
+ image: ./v2.0.4-v3alpha1.png
+
+ - title: Support for Kubernetes 1.22
+ type: feature
+ body: >-
+ The getambassador.io/v3alpha1 API version and the published chart
+ and manifests have been updated to support Kubernetes 1.22. Thanks to
+ Mohit Sharma for contributions to
+ this feature!
+ docs: about/changes-2.x
+ image: ./v2.0.4-k8s-1.22.png
+
+ - title: Mappings support configuring strict or logical DNS
+ type: feature
+ body: >-
+ You can now set dns_type to either strict_dns or
+ logical_dns in a Mapping to configure the service
+ discovery type.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+ image: ./v2.0.4-mapping-dns-type.png
+
+ - title: Mappings support controlling DNS refresh with DNS TTL
+ type: feature
+ body: >-
+ You can now set respect_dns_ttl to true to force the
+ DNS refresh rate for a Mapping to be set to the record's TTL
+ obtained from DNS resolution.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Support configuring upstream buffer sizes
+ type: feature
+ body: >-
+ You can now set buffer_limit_bytes in the ambassador
+ Module to change the size of the upstream read and write buffers.
+ The default is 1MiB.
+ docs: topics/running/ambassador/#modify-default-buffer-size
+
+ - title: Version number reported correctly
+ type: bugfix
+ body: >-
+ The release now shows its actual released version number, rather than
+ the internal development version number.
+ github:
+ - title: '#3854'
+ link: https://github.com/emissary-ingress/emissary/issues/3854
+ docs: https://github.com/emissary-ingress/emissary/issues/3854
+ image: ./v2.0.4-version.png
+
+ - title: Large configurations work correctly with Ambassador Cloud
+ type: bugfix
+ body: >-
+ Large configurations no longer cause $productName$ to be unable
+ to communicate with Ambassador Cloud.
+ github:
+ - title: '#3593'
+ link: https://github.com/emissary-ingress/emissary/issues/3593
+ docs: https://github.com/emissary-ingress/emissary/issues/3593
+
+ - title: Listeners correctly support l7Depth
+ type: bugfix
+ body: >-
+ The l7Depth element of the Listener CRD is now
+ properly supported.
+ docs: topics/running/listener#l7depth
+ image: ./v2.0.4-l7depth.png
+
+ - version: 2.0.3-ea
+ date: '2021-09-16'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.3 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ isHeadline: true
+ docs: about/changes-2.x
+
+ - title: AES_LOG_LEVEL more widely effective
+ body: The environment variable AES_LOG_LEVEL now also sets the log level for the diagd logger.
+ type: feature + docs: topics/running/running/ + github: + - title: '#3686' + link: https://github.com/emissary-ingress/emissary/issues/3686 + - title: '#3666' + link: https://github.com/emissary-ingress/emissary/issues/3666 + + - title: AmbassadorMapping supports setting the DNS type + body: You can now set dns_type in the AmbassadorMapping to configure how Envoy will use the DNS for the service. + type: feature + docs: topics/using/mappings/#using-dns_type + + - title: Building Emissary no longer requires setting DOCKER_BUILDKIT + body: It is no longer necessary to set DOCKER_BUILDKIT=0 when building Emissary. A future change will fully support BuildKit. + type: bugfix + docs: https://github.com/emissary-ingress/emissary/issues/3707 + github: + - title: '#3707' + link: https://github.com/emissary-ingress/emissary/issues/3707 + + - version: 2.0.2-ea + date: '2021-08-24' + notes: + - title: Developer Preview! + body: We're pleased to introduce $productName$ 2.0.2 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think. + type: change + isHeadline: true + docs: about/changes-2.x + + - title: Envoy security updates + type: bugfix + body: 'Upgraded envoy to 1.17.4 to address security vulnerabilities CVE-2021-32777, CVE-2021-32778, CVE-2021-32779, and CVE-2021-32781.' + docs: https://groups.google.com/g/envoy-announce/c/5xBpsEZZDfE?pli=1 + + - title: Expose Envoy's allow_chunked_length HTTPProtocolOption + type: feature + body: 'You can now set allow_chunked_length in the Ambassador Module to configure the same value in Envoy.' + docs: topics/running/ambassador/#content-length-headers + + - title: Envoy-configuration snapshots saved + type: change + body: Envoy-configuration snapshots get saved (as ambex-#.json) in /ambassador/snapshots. The number of snapshots is controlled by the AMBASSADOR_AMBEX_SNAPSHOT_COUNT environment variable; set it to 0 to disable. The default is 30. + docs: topics/running/running/ + + - version: 2.0.1-ea + date: '2021-08-12' + notes: + - title: Developer Preview! + body: We're pleased to introduce $productName$ 2.0.1 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think. + type: change + isHeadline: true + docs: about/changes-2.x + + - title: Improved Ambassador Cloud visibility + type: feature + body: Ambassador Agent reports sidecar process information and AmbassadorMapping OpenAPI documentation to Ambassador Cloud to provide more visibility into services and clusters. + docs: /docs/cloud/latest/service-catalog/quick-start/ + + - title: Configurable per-AmbassadorListener statistics prefix + body: The optional stats_prefix element of the AmbassadorListener CRD now determines the prefix of HTTP statistics emitted for a specific AmbassadorListener. 
type: feature
+ docs: topics/running/listener
+
+ - title: Configurable statistics names
+ body: The optional stats_name element of AmbassadorMapping, AmbassadorTCPMapping, AuthService, LogService, RateLimitService, and TracingService now sets the name under which cluster statistics will be logged. The default is the service, with non-alphanumeric characters replaced by underscores.
+ type: feature
+ docs: topics/running/statistics
+
+ - title: Configurable Dev Portal fetch timeout
+ type: bugfix
+ body: The AmbassadorMapping resource can now specify docs.timeout_ms to set the timeout when the Dev Portal is fetching API specifications.
+ docs: topics/using/dev-portal/
+
+ - title: Dev Portal search strips HTML tags
+ type: bugfix
+ body: The Dev Portal will now strip HTML tags when displaying search results, showing just the actual content of the search result.
+ docs: topics/using/dev-portal/
+
+ - title: Updated klog to reduce log noise
+ type: bugfix
+ body: We have updated to k8s.io/klog/v2 to track upstream and to quiet unnecessary log output.
+ docs: https://github.com/emissary-ingress/emissary/issues/3603
+
+ - title: Subsecond time resolution in logs
+ type: change
+ body: Logs now include subsecond time resolutions, rather than just seconds.
+ docs: https://github.com/emissary-ingress/emissary/pull/3650
+
+ - title: Configurable Envoy-configuration rate limiting
+ type: change
+ body: Set AMBASSADOR_AMBEX_NO_RATELIMIT to true to completely disable ratelimiting Envoy reconfiguration under memory pressure. This can help performance with the endpoint or Consul resolvers, but could make OOMkills more likely with large configurations. The default is false, meaning that the rate limiter is active.
+ docs: topics/concepts/rate-limiting-at-the-edge/
+
+ - title: Improved Consul certificate rotation visibility
+ type: change
+ body: Consul certificate-rotation logging now includes the fingerprints and validity timestamps of certificates being rotated.
+ docs: howtos/consul/#consul-connector-and-encrypted-tls
+
+ - title: Add configurable cache for OIDC replies to the JWT Filter
+ type: feature
+ body: >-
+ The maxStale field is now supported in the JWT Filter to configure how long $productName$ should cache OIDC responses, similar to the existing maxStale field in the OAuth2 Filter.
+ docs: topics/using/filters/jwt
+
+ - version: 2.0.0-ea
+ date: '2021-06-24'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.0 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ docs: about/changes-2.x
+ isHeadline: true
+
+ - title: Configuration API v3alpha1
+ body: >-
+ $productName$ 2.0.0 introduces API version x.getambassador.io/v3alpha1 for
+ configuration changes that are not backwards compatible with the 1.X family. API versions
+ getambassador.io/v0, getambassador.io/v1, and
+ getambassador.io/v2 are deprecated. Further details are available in the Major Changes
+ in 2.X document.
type: feature
+ docs: about/changes-2.x/#1-configuration-api-version-getambassadoriov3alpha1
+ image: ./edge-stack-2.0.0-v3alpha1.png
+
+ - title: The AmbassadorListener Resource
+ body: The new AmbassadorListener CRD defines where and how to listen for requests from the network, and which AmbassadorHost definitions should be used to process those requests. Note that the AmbassadorListener CRD is mandatory and consolidates all port configuration; see the AmbassadorListener documentation for more details.
+ type: feature
+ docs: topics/running/listener
+ image: ./edge-stack-2.0.0-listener.png
+
+ - title: AmbassadorMapping hostname DNS glob support
+ body: >-
+ Where the AmbassadorMapping's host field is either an exact match or (with host_regex set) a regex,
+ the new hostname element is always a DNS glob. Use hostname instead of host for best results.
+ docs: about/changes-2.x/#ambassadorhost-and-ambassadormapping-association
+ type: feature
+
+ - title: Memory usage improvements for installations with many AmbassadorHosts
+ body: The behavior of the Ambassador module prune_unreachable_routes field is now automatic, which should reduce Envoy memory requirements for installations with many AmbassadorHosts.
+ docs: topics/running/ambassador/#prune-unreachable-routes
+ image: ./edge-stack-2.0.0-prune_routes.png
+ type: feature
+
+ - title: Independent Host actions supported
+ body: Each AmbassadorHost can specify its requestPolicy.insecure.action independently of any other AmbassadorHost, allowing for HTTP routing as flexible as HTTPS routing.
+ docs: topics/running/host-crd/#secure-and-insecure-requests
+ github:
+ - title: '#2888'
+ link: https://github.com/datawire/ambassador/issues/2888
+ image: ./edge-stack-2.0.0-insecure_action_hosts.png
+ type: bugfix
+
+ - title: Correctly set Ingress resource status in all cases
+ body: $productName$ 2.0.0 fixes a regression in detecting the Ambassador Kubernetes service that could cause the wrong IP or hostname to be used in Ingress statuses -- thanks, Noah Fontes!
+ docs: topics/running/ingress-controller
+ type: bugfix
+ image: ./edge-stack-2.0.0-ingressstatus.png
+
+ - title: Stricter mTLS enforcement
+ body: $productName$ 2.0.0 fixes a bug where mTLS could use the wrong configuration when SNI and the :authority header didn't match.
+ type: bugfix
+
+ - title: Port configuration outside AmbassadorListener has been moved to AmbassadorListener
+ body: The TLSContext redirect_cleartext_from and AmbassadorHost requestPolicy.insecure.additionalPort elements are no longer supported. Use an AmbassadorListener for this functionality instead.
+ type: change
+ docs: about/changes-2.x/#tlscontext-redirect_cleartext_from-and-host-insecureadditionalport
+
+ - title: PROXY protocol configuration has been moved to AmbassadorListener
+ body: The use_proxy_protocol element of the Ambassador Module is no longer supported, as it is now part of the AmbassadorListener resource (and can be set per-AmbassadorListener rather than globally).
+ type: change
+ docs: about/changes-2.x/#proxy-protocol-configuration
+
+ - title: Stricter rules for AmbassadorHost/AmbassadorMapping association
+ body: An AmbassadorMapping will only be matched with an AmbassadorHost if the AmbassadorMapping's host or the AmbassadorHost's selector (or both) are explicitly set, and match. This change can significantly improve $productName$'s memory footprint when many AmbassadorHosts are involved. Further details are available in the 2.0.0 Changes document.
docs: about/changes-2.x/#host-and-mapping-association
+ type: change
+
+ - title: AmbassadorHost or Ingress now required for TLS termination
+ body: An AmbassadorHost or Ingress resource is now required when terminating TLS -- simply creating a TLSContext is not sufficient. Further details are available in the AmbassadorHost CRD documentation.
+ docs: about/changes-2.x/#host-tlscontext-and-tls-termination
+ type: change
+ image: ./edge-stack-2.0.0-host_crd.png
+
+ - title: Envoy V3 APIs
+ body: By default, $productName$ will configure Envoy using the V3 Envoy API. This change is mostly transparent to users, but note that Envoy V3 does not support unsafe regular expressions or, e.g., Zipkin's V1 collector protocol. Further details are available in the Major Changes in 2.X document.
+ type: change
+ docs: about/changes-2.x/#envoy-v3-api-by-default
+
+ - title: Module-based TLS no longer supported
+ body: The tls module and the tls field in the Ambassador module are no longer supported. Please use TLSContext resources instead.
+ docs: about/changes-2.x/#tls-the-ambassador-module-and-the-tls-module
+ image: ./edge-stack-2.0.0-tlscontext.png
+ type: change
+
+ - title: Higher performance while generating Envoy configuration now enabled by default
+ body: The environment variable AMBASSADOR_FAST_RECONFIGURE is now set by default, enabling the higher-performance implementation of the code that $productName$ uses to generate and validate Envoy configurations.
+ docs: topics/running/scaling/#ambassador_fast_reconfigure-and-ambassador_legacy_mode-flags
+ type: change
+
+ - title: Service Preview no longer supported
+ body: >-
+ Service Preview and the AGENT_SERVICE environment variable are no longer supported.
+ The Telepresence product replaces this functionality.
+ docs: https://www.getambassador.io/docs/telepresence/
+ type: change
+
+ - title: edgectl no longer supported
+ body: The edgectl CLI tool has been deprecated; please use the emissary-ingress Helm chart instead.
+ docs: topics/install/helm/
+ type: change
+
+ - version: 1.14.2
+ date: '2021-09-29'
+ notes:
+ - title: Mappings support controlling DNS refresh with DNS TTL
+ type: feature
+ body: >-
+ You can now set respect_dns_ttl in Ambassador Mappings. When true, it
+ sets the upstream's DNS refresh rate to the TTL of the resource record.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Mappings support configuring strict or logical DNS
+ type: feature
+ body: >-
+ You can now set dns_type in Ambassador Mappings to use Envoy's
+ logical_dns resolution instead of the default strict_dns.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Support configuring upstream buffer size
+ type: feature
+ body: >-
+ You can now set buffer_limit_bytes in the ambassador
+ Module to change the size of the upstream read and write buffers.
+ The default is 1MiB.
+ docs: topics/running/ambassador/#modify-default-buffer-size
+
+ - version: 1.14.1
+ date: '2021-08-24'
+ notes:
+ - title: Envoy security updates
+ type: change
+ body: >-
+ Upgraded Envoy to 1.17.4 to address security vulnerabilities CVE-2021-32777,
+ CVE-2021-32778, CVE-2021-32779, and CVE-2021-32781.
+ docs: https://groups.google.com/g/envoy-announce/c/5xBpsEZZDfE
+
+ - version: 1.14.0
+ date: '2021-08-19'
+ notes:
+ - title: Envoy upgraded to 1.17.3!
+ type: change + body: >- + Update from Envoy 1.15 to 1.17.3 + docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/version_history + + - title: Expose Envoy's allow_chunked_length HTTPProtocolOption + type: feature + body: >- + You can now set allow_chunked_length in the Ambassador Module to configure + the same value in Envoy. + docs: topics/running/ambassador/#content-length-headers + + - title: Default Envoy API version is now V3 + type: change + body: >- + AMBASSADOR_ENVOY_API_VERSION now defaults to V3 + docs: topics/running/running/#ambassador_envoy_api_version + + - title: Subsecond time resolution in logs + type: change + body: Logs now include subsecond time resolutions, rather than just seconds. + docs: https://github.com/emissary-ingress/emissary/pull/3650 + + - version: 1.13.10 + date: '2021-07-28' + notes: + - title: Fix for CORS origins configuration on the Mapping resource + type: bugfix + body: >- + Fixed a regression when specifying a comma separated string for cors.origins + on the Mapping resource. + ([#3609](https://github.com/emissary-ingress/emissary/issues/3609)) + docs: topics/using/cors + image: ../images/emissary-1.13.10-cors-origin.png + + - title: New Envoy-configuration snapshots for debugging + body: 'Envoy-configuration snapshots get saved (as ambex-#.json) in /ambassador/snapshots. The number of snapshots is controlled by the AMBASSADOR_AMBEX_SNAPSHOT_COUNT environment variable; set it to 0 to disable. The default is 30.' + type: change + docs: topics/running/environment/ + + - title: Optionally remove ratelimiting for Envoy reconfiguration + body: >- + Set AMBASSADOR_AMBEX_NO_RATELIMIT to true to completely disable + ratelimiting Envoy reconfiguration under memory pressure. This can help performance with + the endpoint or Consul resolvers, but could make OOMkills more likely with large + configurations. The default is false, meaning that the rate limiter is + active. + type: change + docs: topics/running/environment/ + + - title: Mappings support configuring the DevPortal fetch timeout + type: bugfix + body: >- + The Mapping resource can now specify docs.timeout_ms to set the + timeout when the Dev Portal is fetching API specifications. + docs: topics/using/dev-portal + image: ../images/edge-stack-1.13.10-docs-timeout.png + + - title: Dev Portal will strip HTML tags when displaying results + type: bugfix + body: >- + The Dev Portal will now strip HTML tags when displaying search results, showing just the + actual content of the search result. + docs: topics/using/dev-portal + + - title: Consul certificate rotation logs more information + type: change + body: >- + Consul certificate-rotation logging now includes the fingerprints and validity timestamps + of certificates being rotated. + docs: howtos/consul/ + image: ../images/edge-stack-1.13.10-consul-cert-log.png + + - version: 1.13.9 + date: '2021-06-30' + notes: + - title: Fix for TCPMappings + body: >- + Configuring multiple TCPMappings with the same ports (but different hosts) no longer + generates invalid Envoy configuration. 
type: bugfix
+ docs: topics/using/tcpmappings/
+
+ - version: 1.13.8
+ date: '2021-06-08'
+ notes:
+ - title: Fix Ambassador Cloud Service Details
+ body: >-
+ Ambassador Agent now accurately reports up-to-date Endpoint information to Ambassador
+ Cloud.
+ type: bugfix
+ docs: tutorials/getting-started/#3-connect-your-cluster-to-ambassador-cloud
+ image: ../images/edge-stack-1.13.8-cloud-bugfix.png
+
+ - title: Improved Argo Rollouts Experience with Ambassador Cloud
+ body: >-
+ Ambassador Agent reports ConfigMaps and Deployments to Ambassador Cloud to provide a
+ better Argo Rollouts experience. See [Argo+Ambassador
+ documentation](https://www.getambassador.io/docs/argo) for more info.
+ type: feature
+ docs: https://www.getambassador.io/docs/argo
+
+ - version: 1.13.7
+ date: '2021-06-03'
+ notes:
+ - title: JSON logging support
+ body: >-
+ Set AMBASSADOR_JSON_LOGGING to enable JSON output for most of the Ambassador control
+ plane. A few logs from gunicorn and the Kubernetes client-go package still log plain text.
+ image: ../images/edge-stack-1.13.7-json-logging.png
+ docs: topics/running/running/#log-format
+ type: feature
+
+ - title: Consul resolver bugfix with TCPMappings
+ body: >-
+ Fixed a bug where the Consul resolver would not actually use Consul endpoints with
+ TCPMappings.
+ image: ../images/edge-stack-1.13.7-tcpmapping-consul.png
+ docs: topics/running/resolvers/#the-consul-resolver
+ type: bugfix
+
+ - title: Memory usage calculation improvements
+ body: >-
+ Ambassador now calculates its own memory usage in a way that is more similar to how the
+ kernel OOMKiller tracks memory.
+ image: ../images/edge-stack-1.13.7-memory.png
+ docs: topics/running/scaling/#inspecting-ambassador-performance
+ type: change
+
+ - version: 1.13.6
+ date: '2021-05-24'
+ notes:
+ - title: Quieter logs in legacy mode
+ type: bugfix
+ body: >-
+ Fixed a regression where Ambassador snapshot data was logged at the INFO level
+ when using AMBASSADOR_LEGACY_MODE=true.
+
+ - version: 1.13.5
+ date: '2021-05-13'
+ notes:
+ - title: Correctly support proper_case and preserve_external_request_id
+ type: bugfix
+ body: >-
+ Fix a regression from 1.8.0 that prevented ambassador Module
+ config keys proper_case and preserve_external_request_id
+ from working correctly.
+ docs: topics/running/ambassador/#header-case
+
+ - title: Correctly support Ingress statuses in all cases
+ type: bugfix
+ body: >-
+ Fixed a regression in detecting the Ambassador Kubernetes service that could cause the
+ wrong IP or hostname to be used in Ingress statuses (thanks, [Noah
+ Fontes](https://github.com/impl)!)
+ docs: topics/running/ingress-controller
+
+ - version: 1.13.4
+ date: '2021-05-11'
+ notes:
+ - title: Envoy 1.15.5
+ body: >-
+ Incorporate the Envoy 1.15.5 security update by adding the
+ reject_requests_with_escaped_slashes option to the Ambassador module.
+ image: ../images/edge-stack-1.13.4.png
+ docs: topics/running/ambassador/#rejecting-client-requests-with-escaped-slashes
+ type: security
+# Don't go any further back than 1.13.4.
diff --git a/docs/edge-stack/latest/topics/install/helm.md b/docs/edge-stack/latest/topics/install/helm.md
new file mode 100644
index 000000000..84c40d586
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/helm.md
@@ -0,0 +1,107 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Install with Helm
+
+
+ To migrate from $productName$ 1.X to $productName$ 2.X, see the
+ [$productName$ migration matrix](../migration-matrix/).
This guide
+ **will not work** for that, due to changes to the configuration
+ resources used for $productName$ 2.X.
+
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. $productName$ can be installed via a Helm chart with a few simple steps, depending on whether you are deploying for the first time, upgrading an existing $productName$ installation, or migrating from $OSSproductName$.
+
+## Before you begin
+
+
+ $productName$ requires a valid license or cloud connect token to start. You can refer to the quickstart guide
+ for instructions on how to obtain a free community license and connect your installation to Ambassador Cloud.
+
+
+The $productName$ Helm chart is hosted by Datawire and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```bash
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs $productName$.
+
+1. Install the $productName$ CRDs.
+
+   Before installing $productName$ $version$ itself, you must configure your
+   Kubernetes cluster to support the `getambassador.io/v3alpha1` and `getambassador.io/v2`
+   configuration resources. This is required.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. Install the $productName$ Chart with the following command:
+
+   ```
+   helm install -n $productNamespace$ --create-namespace \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+3. Next Steps
+
+   $productName$ should now be successfully installed and running, but in order to get started deploying Services and testing routing to them, you need to configure a few more resources.
+
+   - [The `Listener` Resource](../../running/listener/) is required to configure which ports the $productName$ pods listen on so that they can begin responding to requests.
- [The `Mapping` Resource](../../using/intro-mappings/) is used to configure routing requests to services in your cluster.
+   - [The `Host` Resource](../../running/host-crd/) configures TLS termination for enabling HTTPS communication.
+   - Explore how $productName$ [configures communication with clients](../../../howtos/configure-communications).
+
+
+ We strongly recommend following along with our Quickstart Guide to get started by creating a Listener, deploying a simple service to test with, and setting up a Mapping to route requests from $productName$ to the demo service.
+
+
+ $productName$ $version$ includes a Deployment in the $productNamespace$ namespace
+ called emissary-apiext. This is the APIserver extension
+ that supports converting $productName$ CRDs between getambassador.io/v2
+ and getambassador.io/v3alpha1. This Deployment needs to be running at
+ all times.
+
+
+ If the emissary-apiext Deployment's Pods all stop running,
+ you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+ the emissary-apiext Deployment.
+
+
+For more advanced configuration and details about Helm values,
+[please see the Helm chart](https://artifacthub.io/packages/helm/datawire/edge-stack/$aesChartVersion$).
+
+## Upgrading an existing installation
+
+See the [migration matrix](../migration-matrix) for instructions about upgrading
+$productName$.
diff --git a/docs/edge-stack/latest/topics/install/index.less b/docs/edge-stack/latest/topics/install/index.less
new file mode 100644
index 000000000..bc649e7ca
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/index.less
@@ -0,0 +1,57 @@
+@media (max-width: 769px) {
+  #index-installContainer {
+    flex-direction: column;
+  }
+  .index-dropdown {
+    width: auto;
+  }
+  .index-dropBtn {
+    width: 100%;
+  }
+}
+
+.index-dropBtn {
+  background-color: #8e77ff;
+  color: white;
+  padding: 10px;
+  font-size: 16px;
+  border: none;
+  margin-top: -20px;
+}
+
+.index-dropdown {
+  position: relative;
+  display: inline-block;
+}
+
+.index-dropdownContent {
+  display: none;
+  position: absolute;
+  background-color: #f1f1f1;
+  width: 100%;
+  box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2);
+  z-index: 1;
+}
+
+.index-dropdownContent a {
+  color: black;
+  padding: 12px 16px;
+  text-decoration: none;
+  display: block;
+}
+
+.index-dropdownContent a:hover {
+  background-color: #ddd;
+}
+
+.index-dropdown:hover .index-dropdownContent {
+  display: block;
+}
+
+.index-dropdown:hover .index-dropBtn {
+  background-color: #5f3eff;
+}
+
+#index-installContainer {
+  display: flex;
+}
diff --git a/docs/edge-stack/latest/topics/install/index.md b/docs/edge-stack/latest/topics/install/index.md
new file mode 100644
index 000000000..ac6a79d41
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/index.md
@@ -0,0 +1,40 @@
+import Alert from '@material-ui/lab/Alert';
+import './index.less'
+
+# Installing $productName$
+
+## Install with Helm
+Helm, the package manager for Kubernetes, is the recommended way to install
+$productName$. Full details are in the [Helm instructions](helm/).
+
+## Install with Kubernetes YAML
+Another way to install $productName$ if you are unable to use Helm is to
+directly apply Kubernetes YAML. See details in the
+[manual YAML installation instructions](yaml-install).
+
+## Try the demo with Docker
+The Docker install will let you try $productName$ locally in seconds,
+but is not supported for production workloads.
[Try $productName$ on Docker.](docker/)
+
+## Upgrade or migrate to a newer version
+If you already have an existing installation of $AESproductName$ or
+$OSSproductName$, you can upgrade your instance. The [migration matrix](migration-matrix/)
+shows you how.
+
+## Container Images
+Although our installation guides will favor using the `docker.io` container registry,
+we publish $AESproductName$ and $OSSproductName$ releases to multiple registries.
+
+Starting with version 1.0.0, you can pull the `aes` image from any of the following registries:
+- `docker.io/datawire/`
+- `gcr.io/datawire/`
+
+We want to give you flexibility and independence from a hosting platform's uptime to support
+your production needs for $AESproductName$ or $OSSproductName$. Read more about
+[Running $productName$ in Production](../running).
+
+## What’s Next?
+$productName$ has a comprehensive range of [features](/features/) to
+support the requirements of any edge microservice. To learn more about how $productName$ works, along with use cases, best practices, and more,
+check out the [Welcome page](../../tutorials/getting-started/) or read the [$productName$
+Story](../../about/why-ambassador).
diff --git a/docs/edge-stack/latest/topics/install/migrate-to-3-alternate.md b/docs/edge-stack/latest/topics/install/migrate-to-3-alternate.md
new file mode 100644
index 000000000..d0b791a12
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/migrate-to-3-alternate.md
@@ -0,0 +1,36 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$ with a separate cluster
+
+You can upgrade from any version of $AESproductName$ or $OSSproductName$ to
+any version of either by installing the new version in a new Kubernetes cluster,
+then copying over configuration as needed. This is the way to be absolutely
+certain that each installation cannot affect the other: it is extremely safe,
+but is also significantly more effort.
+
+For example, to upgrade from some other version of $AESproductName$ or
+$OSSproductName$ to $productName$ $version$:
+
+1. Install $productName$ $version$ in a completely new cluster.
+
+2. **Create `Listener`s for $productName$ $version$.**
+
+   When $productName$ $version$ starts, it will not have any `Listener`s, and it will not
+   create any. You must create `Listener` resources by hand, or $productName$ $version$
+   will not listen on any ports.
+
+3. Copy the entire configuration from the $productName$ 1.X cluster to the $productName$
+   $version$ cluster. This is most simply done with `kubectl get -o yaml | kubectl apply -f -`,
+   as in the sketch after this list.
+
+   This will create `getambassador.io/v2` resources in the $productName$ $version$ cluster.
+   $productName$ $version$ will translate them internally to `getambassador.io/v3alpha1`
+   resources.
+
+4. Each $productName$ instance has its own cluster, so you can test the new
+   instance without disrupting traffic to the existing instance.
+
+5. If you need to make changes, you can change the `getambassador.io/v2` resource, or convert the
+   resource you're changing to `getambassador.io/v3alpha1` by using `kubectl edit`.
+
+6. Once everything is working with both versions, transfer incoming traffic to the $productName$
+   $version$ cluster.
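+
+As a concrete sketch of the copy in step 3: the `kubectl` context names below are
+illustrative, as is the exact list of resource types, so substitute the contexts
+for your two clusters and the getambassador.io resources you actually use:
+
+```bash
+# Hypothetical context names; substitute the contexts for your two clusters.
+OLD_CTX=old-cluster
+NEW_CTX=new-cluster
+
+# Copy getambassador.io configuration from the old cluster to the new one.
+# Extend the resource list to cover the resource types you actually use.
+kubectl --context "$OLD_CTX" get -o yaml \
+  hosts.getambassador.io,mappings.getambassador.io,tlscontexts.getambassador.io \
+  | kubectl --context "$NEW_CTX" apply -f -
+```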
diff --git a/docs/edge-stack/latest/topics/install/migration-matrix.md b/docs/edge-stack/latest/topics/install/migration-matrix.md
new file mode 100644
index 000000000..21253feaf
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/migration-matrix.md
@@ -0,0 +1,46 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$
+
+
+ Read the instructions below before making any changes to your cluster!
+
+
+There are multiple paths for upgrading $productName$, depending on which version you're currently
+running, which version you want to run, and whether you installed $productName$ using [Helm](../helm) or
+YAML.
+
+(To check whether you installed $productName$ using Helm, run `helm list --all` and see if
+$productName$ is listed. If so, you installed using Helm.)
+
+
+ Read the instructions below before making any changes to your cluster!
+
+
+## If you are currently running $OSSproductName$
+
+See the [instructions on updating $OSSproductName$](../../../../../emissary/$ossDocsVersion$/topics/install/migration-matrix).
+
+## If you installed $productName$ using Helm
+
+| If you're running | You can upgrade to |
+|-----------------------------------------|----------------------------------------------------------------------------------|
+| $AESproductName$ 3.7.X | [$AESproductName$ $version$](../upgrade/helm/edge-stack-3.7/edge-stack-3.X) |
+| $AESproductName$ $versionTwoX$ | [$AESproductName$ $version$](../upgrade/helm/edge-stack-2.5/edge-stack-3.X) |
+| $AESproductName$ 2.4.X | [$AESproductName$ $versionTwoX$](../upgrade/helm/edge-stack-2.4/edge-stack-2.X) |
+| $AESproductName$ 2.0.X | [$AESproductName$ $versionTwoX$](../upgrade/helm/edge-stack-2.0/edge-stack-2.X) |
+| $AESproductName$ $versionOneX$ | [$AESproductName$ $versionTwoX$](../upgrade/helm/edge-stack-1.14/edge-stack-2.X) |
+| $AESproductName$ prior to $versionOneX$ | [$AESproductName$ $versionOneX$](../../../../1.14/topics/install/upgrading) |
+| $OSSproductName$ $ossVersion$ | [$AESproductName$ $version$](../upgrade/helm/emissary-3.8/edge-stack-3.X) |
+
+## If you installed $AESproductName$ manually by applying YAML
+
+| If you're running 
| You can upgrade to | +|-----------------------------------------|----------------------------------------------------------------------------------| +| $AESproductName$ 3.7.X | [$AESproductName$ $version$](../upgrade/yaml/edge-stack-3.7/edge-stack-3.X) | +| $AESproductName$ $versionTwoX$ | [$AESproductName$ $version$](../upgrade/yaml/edge-stack-2.5/edge-stack-3.X) | +| $AESproductName$ 2.4.X | [$AESproductName$ $versionTwoX$](../upgrade/yaml/edge-stack-2.4/edge-stack-2.X) | +| $AESproductName$ 2.0.X | [$AESproductName$ $versionTwoX$](../upgrade/yaml/edge-stack-2.0/edge-stack-2.X) | +| $AESproductName$ $versionOneX$ | [$AESproductName$ $versionTwoX$](../upgrade/yaml/edge-stack-1.14/edge-stack-2.X) | +| $AESproductName$ prior to $versionOneX$ | [$AESproductName$ $versionOneX$](../../../../1.14/topics/install/upgrading) | +| $OSSproductName$ $ossVersion$ | [$AESproductName$ $version$](../upgrade/yaml/emissary-3.8/edge-stack-3.X) | diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-1.14/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-1.14/edge-stack-2.X.md new file mode 100644 index 000000000..88983d794 --- /dev/null +++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-1.14/edge-stack-2.X.md @@ -0,0 +1,378 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 1.14.X to $productName$ $versionTwoX$ (Helm) + + + This guide covers migrating from $productName$ 1.14.X to $productName$ $versionTwoX$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation originally made using Helm. + If you did not install with Helm, see the YAML-based + upgrade instructions. + + +We're pleased to introduce $productName$ $versionTwoX$! The 2.X family introduces a number of +changes to allow $productName$ to more gracefully handle larger installations (including +multitenant or multiorganizational installations), reduce memory footprint, and improve +performance. In keeping with [SemVer](https://semver.org), $productName$ 2.X introduces +some changes that aren't backward-compatible with 1.X. These changes are detailed in +[Major Changes in $productName$ 2.X](../../../../../../about/changes-2.x). + +## Migration Overview + + + Read the migration instructions below before making any changes to your + cluster! + + +The recommended strategy for migration is to run $productName$ 1.14 and $productName$ +$versionTwoX$ side-by-side in the same cluster. This gives $productName$ $versionTwoX$ +and $productName$ 1.14 access to all the same configuration resources, with some +important caveats: + +1. **$productName$ 1.14 will not see any `getambassador.io/v3alpha1` resources.** + + This is intentional; it provides a way to apply configuration only to + $productName$ $versionTwoX$, while not interfering with the operation of your + $productName$ 1.14 installation. + +2. **If needed, you can use labels to further isolate configurations.** + + If you need to prevent your $productName$ $versionTwoX$ installation from + seeing a particular bit of $productName$ 1.14 configuration, you can apply + a Kubernetes label to the configuration resources that should be seen by + your $productName$ $versionTwoX$ installation, then set its + `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration + to only the labelled resources. 
+ + For example, you could apply a `version-two: true` label to all resources + that should be visible to $productName$ $versionTwoX$, then set + `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment. + +3. **$productName$ 1.14 must remain in control of ACME while both installations are running.** + + The processes that handle ACME challenges cannot be managed by both $productName$ + 1.X and $productName$ $versionTwoX$ at the same time. The instructions below disable ACME + in $productName$ $versionTwoX$, allowing $productName$ 1.14 to continue managing it. + + This implies that any new `Host`s used for $productName$ 1.14 should be created using + `getambassador.io/v2` so that $productName$ 1.14 can see them. + +4. **Check `AuthService` and `RateLimitService` resources, if any.** + + If you have an [`AuthService`](../../../../../using/authservice/) or + [`RateLimitService`](../../../../../running/services/rate-limit-service) installed, make + sure that they are using the [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services). + If they are not, the initial migration tests may fail. + + Additionally, when installing with Helm, you must make sure that $productName$ $versionTwoX$ + does not attempt to create duplicate `AuthService` and `RateLimitService` entries. Add + + ``` + --set rateLimit.create=false + ``` + + and + + ``` + --set authService.create=false + ``` + + on the Helm command line to prevent duplicating these resources. + +5. **Be careful to only have one $productName$ Agent running at a time.** + + The $productName$ Agent is responsible for communications between + $productName$ and Ambassador Cloud. If multiple versions of the Agent are + running simultaneously, Ambassador Cloud could see conflicting information + about your cluster. + + The best way to avoid multiple agents when installing with Helm is to use + `--set emissary-ingress.agent.enabled=false` to tell Helm not to install a + new Agent with $productName$ $versionTwoX$. Once testing is done, you can switch + Agents safely. + +6. **If you use ACME for multiple `Host`s, add a wildcard `Host` too.** + + This is required to manage a known issue. This issue will be resolved in a future + $AESproductName$ release. + +7. **Be careful about label selectors on Kubernetes Services!** + + If you have services in $productName$ 1.14 that use selectors that will match + Pods from $productName$ $versionTwoX$, traffic will be erroneously split between + $productName$ 1.14 and $productName$ $versionTwoX$. The labels used by $productName$ + $versionTwoX$ include: + + ```yaml + app.kubernetes.io/name: edge-stack + app.kubernetes.io/instance: edge-stack + app.kubernetes.io/part-of: edge-stack + app.kubernetes.io/managed-by: getambassador.io + product: aes + profile: main + ``` + +You can also migrate by [installing $productName$ $versionTwoX$ in a separate cluster](../../../../migrate-to-2-alternate). +This permits absolute certainty that your $productName$ 1.14 configuration will not be +affected by changes meant for $productName$ $versionTwoX$, and it eliminates concerns about +ACME, but it is more effort. + +## Side-by-Side Migration Steps + +Migration is an eight-step process: + +1. 
**Make sure that older configuration resources are not present.** + + $productName$ 2.X does not support `getambassador.io/v0` or `getambassador.io/v1` + resources, and Kubernetes will not permit removing support for CRD versions that are + still in use for stored resources. To verify that no resources older than + `getambassador.io/v2` are active, run + + ``` + kubectl get crds -o 'go-template={{range .items}}{{.metadata.name}}={{.status.storedVersions}}{{"\n"}}{{end}}' | fgrep getambassador.io + ``` + + If `v1` is present in the output, **do not begin migration.** The old resources must be + converted to `getambassador.io/v2` and the `storedVersion` information in the cluster + must be updated. If necessary, contact Ambassador Labs on [Slack](http://a8r.io/slack) + for more information. + +2. **Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you must configure your + Kubernetes cluster to support its new `getambassador.io/v3alpha1` configuration + resources. Note that `getambassador.io/v2` resources are still supported, but **you + must install support for `getambassador.io/v3alpha1`** to run $productName$ $versionTwoX$, + even if you intend to continue using only `getambassador.io/v2` resources for some + time. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml && \ + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +3. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, you need to install $productName$ $versionTwoX$ itself + **in the same namespace as your existing $productName$ 1.14 installation**. It's important + to use the same namespace so that the two installations can see the same secrets, etc. + + + Make sure that you set the AES_ACME_LEADER_DISABLE flag. This prevents + $productName$ $versionTwoX$ from trying to manage ACME, so that $productName$ 1.14 can + do it instead. + + + Start by making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Typically, $productName$ 1.14 was installed in the `ambassador` namespace. 
If you installed
+ $productName$ 1.14 in a different namespace, change the namespace in the commands below.
+
+   - If you do not need to set `AMBASSADOR_LABEL_SELECTOR`:
+
+     ```bash
+     helm install -n ambassador \
+       --set emissary-ingress.agent.enabled=false \
+       --set rateLimit.create=false \
+       --set authService.create=false \
+       --set emissary-ingress.env.AES_ACME_LEADER_DISABLE=true \
+       edge-stack datawire/edge-stack && \
+     kubectl rollout status -n ambassador deployment/edge-stack -w
+     ```
+
+   - If you do need to set `AMBASSADOR_LABEL_SELECTOR`, use `--set`, for example:
+
+     ```bash
+     helm install -n ambassador \
+       --set emissary-ingress.agent.enabled=false \
+       --set rateLimit.create=false \
+       --set authService.create=false \
+       --set emissary-ingress.env.AES_ACME_LEADER_DISABLE=true \
+       --set emissary-ingress.env.AMBASSADOR_LABEL_SELECTOR="version-two=true" \
+       edge-stack datawire/edge-stack && \
+     kubectl rollout status -n ambassador deployment/edge-stack -w
+     ```
+
+
+ You must use the $productHelmName$ Helm chart for $productName$ $versionTwoX$.
+ Do not use the ambassador Helm chart.
+
+
+4. **Install `Listener`s and `Host`s as needed.**
+
+   An important difference between $productName$ 1.14 and $productName$ $versionTwoX$ is the
+   new **mandatory** `Listener` CRD. Also, when running both installations side by side,
+   you will need to make sure that a `Host` is present for the new $productName$ $versionTwoX$
+   Service. For example (the resource names and ports below are illustrative; adjust them
+   to your environment):
+
+   ```bash
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Listener
+   metadata:
+     name: listener-8080
+     namespace: ambassador
+   spec:
+     port: 8080
+     protocol: HTTPS
+     securityModel: XFP
+     hostBinding:
+       namespace:
+         from: ALL
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Listener
+   metadata:
+     name: listener-8443
+     namespace: ambassador
+   spec:
+     port: 8443
+     protocol: HTTPS
+     securityModel: XFP
+     hostBinding:
+       namespace:
+         from: ALL
+   ---
+   apiVersion: getambassador.io/v2
+   kind: Host
+   metadata:
+     name: emissary-host
+     namespace: ambassador
+   spec:
+     hostname: $EMISSARY_HOSTNAME
+     acmeProvider:
+       authority: none
+     tlsSecret:
+       name: $EMISSARY_TLS_SECRET
+   EOF
+   ```
+
+
+ Make sure that any Hosts you create use API version getambassador.io/v2,
+ so that they can be managed by $productName$ 1.14 as long as both installations are running.
+
+
+   This example requires that you know the hostname for the $productName$ Service (`$EMISSARY_HOSTNAME`)
+   and that you have created a TLS Secret for it in `$EMISSARY_TLS_SECRET`.
+
+5. **Test!**
+
+   Your $productName$ $versionTwoX$ installation can support the `getambassador.io/v2`
+   configuration resources used by $productName$ 1.14, but you may need to make some
+   changes to the configuration, as detailed in the documentation on
+   [configuring $productName$ Communications](../../../../../../howtos/configure-communications)
+   and [updating CRDs to `getambassador.io/v3alpha1`](../../../../convert-to-v3alpha1).
+
+
+ Kubernetes will not allow you to have a getambassador.io/v3alpha1 resource
+ with the same name as a getambassador.io/v2 resource or vice versa: only
+ one version can be stored at a time.
+
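+  As a quick sanity check (a minimal sketch; the Mapping name `quote-backend` and the
+  `ambassador` namespace here are hypothetical), you can read the same stored resource
+  through both API versions to confirm that the emissary-apiext conversion is working:
+
+  ```bash
+  # Read the stored object as getambassador.io/v2
+  kubectl get mappings.v2.getambassador.io quote-backend -n ambassador -o yaml
+  # Read the same object converted to getambassador.io/v3alpha1
+  kubectl get mappings.v3alpha1.getambassador.io quote-backend -n ambassador -o yaml
+  ```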
+  If you find that your $productName$ $versionTwoX$ installation and your $productName$ 1.14
+  installation absolutely must have resources that are only seen by one version or the
+  other, see overview section 2, "If needed, you can use labels to further isolate configurations".
+
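+  For example, as a sketch only (the Mapping name and backend Service below are hypothetical),
+  a resource that should be visible only to the $productName$ $versionTwoX$ installation running
+  with `AMBASSADOR_LABEL_SELECTOR=version-two=true` could be labelled like this:
+
+  ```yaml
+  apiVersion: getambassador.io/v2
+  kind: Mapping
+  metadata:
+    name: example-v2-only-mapping   # hypothetical name
+    labels:
+      version-two: "true"           # matches AMBASSADOR_LABEL_SELECTOR
+  spec:
+    prefix: /example/
+    service: example-service        # hypothetical backend Service
+  ```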
+
+   **If you find that you need to roll back**, just reinstall your 1.14 CRDs, delete your
+   installation of $productName$ $versionTwoX$, and delete the `emissary-system` namespace.
+
+6. **When ready, switch over to $productName$ $versionTwoX$.**
+
+   You can run $productName$ 1.14 and $productName$ $versionTwoX$ side-by-side as long as you care
+   to. However, taking full advantage of $productName$ 2.X's capabilities **requires**
+   [updating your configuration to use `getambassador.io/v3alpha1` configuration resources](../../../../convert-to-v3alpha1),
+   since some useful features in $productName$ $versionTwoX$ are only available using
+   `getambassador.io/v3alpha1` resources.
+
+   When you're ready to have $productName$ $versionTwoX$ handle traffic on its own, switch
+   your original $productName$ 1.14 Service to point to $productName$ $versionTwoX$. Use
+   `kubectl edit service ambassador` and change the `selector` to:
+
+   ```
+   app.kubernetes.io/instance: edge-stack
+   app.kubernetes.io/name: edge-stack
+   profile: main
+   ```
+
+   Repeat using `kubectl edit service ambassador-admin` for the `ambassador-admin`
+   Service.
+
+7. **Install the $productName$ $versionTwoX$ Ambassador Agent.**
+
+   First, scale the 1.14 agent to 0:
+
+   ```
+   kubectl scale -n ambassador deployment/ambassador-agent --replicas=0
+   ```
+
+   Once that's done, install the new Agent. **Note that if you needed to set
+   `AMBASSADOR_LABEL_SELECTOR`, you must add that to this `helm upgrade` command.**
+
+   ```bash
+   helm upgrade -n ambassador \
+     --set emissary-ingress.agent.enabled=true \
+     --set rateLimit.create=false \
+     --set authService.create=false \
+     --set emissary-ingress.env.AES_ACME_LEADER_DISABLE=true \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n ambassador deployment/edge-stack -w
+   ```
+
+8. **Finally, enable ACME in $productName$ $versionTwoX$.**
+
+   First, scale the 1.14 Ambassador to 0:
+
+   ```
+   kubectl scale -n ambassador deployment/ambassador --replicas=0
+   ```
+
+   Once that's done, enable ACME in $productName$ $versionTwoX$. **Note that if you
+   needed to set `AMBASSADOR_LABEL_SELECTOR`, you must add that to this
+   `helm upgrade` command.**
+
+   ```bash
+   helm upgrade -n ambassador \
+     --set emissary-ingress.agent.enabled=true \
+     --set rateLimit.create=false \
+     --set authService.create=false \
+     --set emissary-ingress.env.AES_ACME_LEADER_DISABLE= \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n ambassador deployment/edge-stack -w
+   ```
+
+Congratulations! At this point, $productName$ $versionTwoX$ is fully running, and
+it's safe to remove the old `ambassador` and `ambassador-agent` Deployments:
+
+```
+kubectl delete -n ambassador deployment/ambassador deployment/ambassador-agent
+```
+
+Once $productName$ 1.14 is no longer running, you may [convert](../../../../convert-to-v3alpha1)
+any remaining `getambassador.io/v2` resources to `getambassador.io/v3alpha1`.
+You may also want to redirect DNS to the `edge-stack` Service and remove the
+`ambassador` Service.
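+As an illustration of that conversion (a minimal sketch with hypothetical names), a
+`getambassador.io/v2` Mapping that used `host` becomes a `getambassador.io/v3alpha1`
+Mapping that uses `hostname`:
+
+```yaml
+# Before: getambassador.io/v2
+apiVersion: getambassador.io/v2
+kind: Mapping
+metadata:
+  name: example-mapping        # hypothetical name
+spec:
+  host: example.example.com
+  prefix: /example/
+  service: example-service     # hypothetical backend Service
+---
+# After: getambassador.io/v3alpha1, where hostname replaces host
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: example-mapping
+spec:
+  hostname: example.example.com
+  prefix: /example/
+  service: example-service
+```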
diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.0/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.0/edge-stack-2.X.md
new file mode 100644
index 000000000..c9d337839
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.0/edge-stack-2.X.md
@@ -0,0 +1,92 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.0.5 to $productName$ $versionTwoX$ (Helm)
+
+
+  This guide covers migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation originally made using Helm.
+  If you did not install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+
+  Upgrading from $productName$ 2.0.5 to $productName$ $versionTwoX$ typically requires downtime.
+  In some situations, Ambassador Labs Support may be able to assist with a zero-downtime migration;
+  contact support with questions.
+
+
+Migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$ is a three-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in
+   your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+  $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace
+  called emissary-apiext. This is the APIserver extension
+  that supports converting $productName$ CRDs between getambassador.io/v2
+  and getambassador.io/v3alpha1. This Deployment needs to be running at
+  all times.
+
+
+
+  If the emissary-apiext Deployment's Pods all stop running,
+  you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+  the emissary-apiext Deployment.
+
+
+
+  There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+  This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Delete $productName$ 2.0.5 Deployment.**
+
+
+  Delete only the Deployment for $productName$ 2.0.5 in order to preserve all of
+  your existing configuration.
+
+
+   Use `kubectl` to delete the Deployment for $productName$ 2.0.5. Typically, this will be found
+   in the `ambassador` namespace.
+
+   ```
+   kubectl delete -n ambassador deployment edge-stack
+   ```
+
+3. **Install $productName$ $versionTwoX$.**
+
+   After installing the new CRDs, use Helm to install $productName$ $versionTwoX$.
Start by + making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Then, install $productName$ in the `$productNamespace$` namespace. If necessary for + your installation (e.g. if you were running with `AMBASSADOR_SINGLE_NAMESPACE` set), + you can choose a different namespace. + + ```bash + helm install -n $productNamespace$ \ + $productHelmName$ datawire/$productHelmName$ && \ + kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w + ``` + + + You must use the $productHelmName$ Helm chart to install $productName$ 2.X. + Do not use the ambassador Helm chart. + diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.4/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.4/edge-stack-2.X.md new file mode 100644 index 000000000..f11054800 --- /dev/null +++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.4/edge-stack-2.X.md @@ -0,0 +1,75 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 2.4.X to $productName$ $versionTwoX$ (Helm) + + + This guide covers migrating from $productName$ 2.4 to $productName$ $versionTwoX$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation made using Helm. + If you did not originally install with Helm, see the YAML-based + upgrade instructions. + + +Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor +versions is straightforward. + +Migration is a two-step process: + +1. **Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in + your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$. + + ```bash + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, use Helm to install $productName$ $versionTwoX$. 
Start by
+   making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Then, update your $productName$ installation in the `$productNamespace$` namespace.
+   If necessary for your installation (e.g. if you were running with
+   `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace.
+
+   ```bash
+   helm upgrade -n $productNamespace$ \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+
+  You must use the $productHelmName$ Helm chart to install $productName$ 2.X.
+  Do not use the ambassador Helm chart.
+
diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.5/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.5/edge-stack-3.X.md
new file mode 100644
index 000000000..acbc1e4a8
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-2.5/edge-stack-3.X.md
@@ -0,0 +1,154 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.5.X to $productName$ $version$ (Helm)
+
+
+  This guide covers migrating from $productName$ 2.5.Z to $productName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made using Helm.
+  If you did not originally install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+
+  Make sure that you have converted your External Filters to `protocol_version: "v3"` before upgrading.
+  If not set or set to `v2` then an error will be posted and a static response will be returned in $productName$ 3.Y.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between
+versions is straightforward.
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we will outline some of these changes and things to consider when upgrading.
+
+### Resources to check before migrating to $version$
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.22, which removed support for the Envoy V2 Transport Protocol. This means all `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` must be updated to use the V3 Protocol. Additionally, support for some of the runtime bootstrap flags has been removed.
+
+You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes.
+
+1. $productName$ 3.2 fixed a bug with `Host.spec.selector`/`Host.spec.mappingSelector` and `Listener.spec.selector` not being properly enforced.
+   In previous versions, if only a single label from the selector was present on the resource then they would be associated. Additionally, when associating `Hosts` with `Mappings`, if the `Mapping` configured a `hostname` that matched the `hostname` of the `Host` then they would be associated regardless of the configuration of the `selector`/`mappingSelector` on the `Host`.
+
+   Before upgrading, review your Ambassador resources, and if you make use of the selectors, ensure that every other resource you want it to be associated with contains all the required labels.
+
+   The environment variable `DISABLE_STRICT_LABEL_SELECTORS` can be set to `"true"` on the $productName$ deployment to revert to the
+   old incorrect behavior to help prevent any configuration issues after upgrading in the event that not all manifests making use of the selectors have been corrected yet.
+
+   For more information on `DISABLE_STRICT_LABEL_SELECTORS` see the [Environment Variables page](../../../../../running/environment).
+
+
+2. Check Transport Protocol usage on all resources before migrating.
+
+   The `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` that use the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If not set or set to `v2` then an error will be posted and a static response will be returned.
+
+   `protocol_version` should be updated to `v3` for all of the above resources while still running $productName$ $versionTwoX$. As of version `2.3.z`, both `protocol_version` `v2` and `v3` are supported, allowing migration from `protocol_version` `v2` to `v3` before upgrading to $productName$ $version$, where support for `v2` is removed.
+
+   Upgrading any application code for your own implementations of these services is very straightforward.
+
+   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`, and then the configuration for these resources can be updated to add the `protocol_version: "v3"` when the updated service is deployed.
+
+   `v2` Imports:
+   ```golang
+   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
+   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+   ```
+
+   `v3` Imports:
+   ```golang
+   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
+   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+   ```
+
+3. Check removed runtime changes
+
+   ```yaml
+   # No longer necessary because this was removed from Envoy
+   # $productName$ already was converted to use the compressor API
+   # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
+   "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,
+
+   # Upgraded to v3, all support for V2 Transport Protocol removed
+   "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
+   "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,
+
+   # Developers will need to upgrade TracingService to V3 protocol which no longer supports HTTP_JSON_V1
+   "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
+
+   # V2 protocol removed so flag no longer necessary
+   "envoy.reloadable_features.enable_deprecated_v2_api": true,
+   ```
+
+4. Support for LightStep tracing driver removed
+
+
+  As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read before upgrading.
+
+
+$productName$ 3.4 is based on Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, which outlines how to set this up.
**It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+  $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+  called emissary-apiext. This is the APIserver extension
+  that supports converting $productName$ CRDs between getambassador.io/v2
+  and getambassador.io/v3alpha1. This Deployment needs to be running at
+  all times.
+
+
+
+  If the emissary-apiext Deployment's Pods all stop running,
+  you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+  the emissary-apiext Deployment.
+
+
+
+  There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+  This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, use Helm to install $productName$ $version$. Start by
+   making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Then, update your $productName$ installation in the `$productNamespace$` namespace.
+   If necessary for your installation (e.g. if you were running with
+   `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace.
+
+   ```bash
+   helm upgrade -n $productNamespace$ \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+
+  You must use the $productHelmName$ Helm chart to install $productName$ 3.X.
+
diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.4/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.4/edge-stack-3.X.md
new file mode 100644
index 000000000..ef874e5d2
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.4/edge-stack-3.X.md
@@ -0,0 +1,152 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 3.4.X to $productName$ $version$ (Helm)
+
+
+  This guide covers migrating from $productName$ 3.4.Z to $productName$ $version$.
If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made using Helm.
+  If you did not originally install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+
+  Make sure that you have converted your External Filters to `protocol_version: "v3"` before upgrading.
+  If not set or set to `v2` then an error will be posted and a static response will be returned in $productName$ 3.Y.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between
+versions is straightforward.
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we will outline some of these changes and things to consider when upgrading.
+
+### Resources to check before migrating to $version$
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.22, which removed support for the Envoy V2 Transport Protocol. This means all `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` must be updated to use the V3 Protocol. Additionally, support for some of the runtime bootstrap flags has been removed.
+
+You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes.
+
+1. $productName$ 3.2 fixed a bug with `Host.spec.selector`/`Host.spec.mappingSelector` and `Listener.spec.selector` not being properly enforced.
+   In previous versions, if only a single label from the selector was present on the resource then they would be associated. Additionally, when associating `Hosts` with `Mappings`, if the `Mapping` configured a `hostname` that matched the `hostname` of the `Host` then they would be associated regardless of the configuration of the `selector`/`mappingSelector` on the `Host`.
+
+   Before upgrading, review your Ambassador resources, and if you make use of the selectors, ensure that every other resource you want it to be associated with contains all the required labels.
+
+   The environment variable `DISABLE_STRICT_LABEL_SELECTORS` can be set to `"true"` on the $productName$ deployment to revert to the
+   old incorrect behavior to help prevent any configuration issues after upgrading in the event that not all manifests making use of the selectors have been corrected yet.
+
+   For more information on `DISABLE_STRICT_LABEL_SELECTORS` see the [Environment Variables page](../../../../../running/environment).
+
+
+2. Check Transport Protocol usage on all resources before migrating.
+
+   The `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` that use the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If not set or set to `v2` then an error will be posted and a static response will be returned.
+
+   `protocol_version` should be updated to `v3` for all of the above resources while still running $productName$ $versionTwoX$. As of version `2.3.z`, both `protocol_version` `v2` and `v3` are supported, allowing migration from `protocol_version` `v2` to `v3` before upgrading to $productName$ $version$, where support for `v2` is removed.
+
+   Upgrading any application code for your own implementations of these services is very straightforward.
+
+   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`, and then the configuration for these resources can be updated to add the `protocol_version: "v3"` when the updated service is deployed.
+
+   `v2` Imports:
+   ```golang
+   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
+   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+   ```
+
+   `v3` Imports:
+   ```golang
+   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
+   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+   ```
+
+3. Check removed runtime changes
+
+   ```yaml
+   # No longer necessary because this was removed from Envoy
+   # $productName$ already was converted to use the compressor API
+   # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
+   "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,
+
+   # Upgraded to v3, all support for V2 Transport Protocol removed
+   "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
+   "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,
+
+   # Developers will need to upgrade TracingService to V3 protocol which no longer supports HTTP_JSON_V1
+   "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
+
+   # V2 protocol removed so flag no longer necessary
+   "envoy.reloadable_features.enable_deprecated_v2_api": true,
+   ```
+
+4. Support for LightStep tracing driver removed
+
+
+  As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read before upgrading.
+
+
+$productName$ 3.4 is based on Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, which outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
+
+## Migration Steps
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+  $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+  called emissary-apiext. This is the APIserver extension
+  that supports converting $productName$ CRDs between getambassador.io/v2
+  and getambassador.io/v3alpha1. This Deployment needs to be running at
+  all times.
+
+
+
+  If the emissary-apiext Deployment's Pods all stop running,
+  you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+  the emissary-apiext Deployment.
+
+
+
+  There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+  This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, use Helm to install $productName$ $version$. Start by
+   making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Then, update your $productName$ installation in the `$productNamespace$` namespace.
+   If necessary for your installation (e.g. if you were running with
+   `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace.
+
+   ```bash
+   helm upgrade -n $productNamespace$ \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+
+  You must use the $productHelmName$ Helm chart to install $productName$ 3.X.
+
diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.7/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.7/edge-stack-3.X.md
new file mode 100644
index 000000000..9876747d8
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/helm/edge-stack-3.7/edge-stack-3.X.md
@@ -0,0 +1,152 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 3.7.X to $productName$ $version$ (Helm)
+
+
+  This guide covers migrating from $productName$ 3.7.Z to $productName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made using Helm.
+  If you did not originally install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+
+  Make sure that you have converted your External Filters to `protocol_version: "v3"` before upgrading.
+  If not set or set to `v2` then an error will be posted and a static response will be returned in $productName$ 3.Y.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between
+versions is straightforward.
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we will outline some of these changes and things to consider when upgrading.
+
+### Resources to check before migrating to $version$
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.22, which removed support for the Envoy V2 Transport Protocol.
This means all `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` must be updated to use the V3 Protocol. Additionally, support for some of the runtime bootstrap flags has been removed.
+
+You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes.
+
+1. $productName$ 3.2 fixed a bug with `Host.spec.selector`/`Host.spec.mappingSelector` and `Listener.spec.selector` not being properly enforced.
+   In previous versions, if only a single label from the selector was present on the resource then they would be associated. Additionally, when associating `Hosts` with `Mappings`, if the `Mapping` configured a `hostname` that matched the `hostname` of the `Host` then they would be associated regardless of the configuration of the `selector`/`mappingSelector` on the `Host`.
+
+   Before upgrading, review your Ambassador resources, and if you make use of the selectors, ensure that every other resource you want it to be associated with contains all the required labels.
+
+   The environment variable `DISABLE_STRICT_LABEL_SELECTORS` can be set to `"true"` on the $productName$ deployment to revert to the
+   old incorrect behavior to help prevent any configuration issues after upgrading in the event that not all manifests making use of the selectors have been corrected yet.
+
+   For more information on `DISABLE_STRICT_LABEL_SELECTORS` see the [Environment Variables page](../../../../../running/environment).
+
+
+2. Check Transport Protocol usage on all resources before migrating.
+
+   The `AuthService`, `RatelimitService`, `LogServices`, and External `Filters` that use the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If not set or set to `v2` then an error will be posted and a static response will be returned.
+
+   `protocol_version` should be updated to `v3` for all of the above resources while still running $productName$ $versionTwoX$. As of version `2.3.z`, both `protocol_version` `v2` and `v3` are supported, allowing migration from `protocol_version` `v2` to `v3` before upgrading to $productName$ $version$, where support for `v2` is removed.
+
+   Upgrading any application code for your own implementations of these services is very straightforward.
+
+   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`, and then the configuration for these resources can be updated to add the `protocol_version: "v3"` when the updated service is deployed.
+
+   `v2` Imports:
+   ```golang
+   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
+   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+   ```
+
+   `v3` Imports:
+   ```golang
+   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
+   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+   ```
+
+3.
Check removed runtime changes + + ```yaml + # No longer necessary because this was removed from Envoy + # $productName$ already was converted to use the compressor API + # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor + "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true, + + # Upgraded to v3, all support for V2 Transport Protocol removed + "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true, + "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true, + + # Developers will need to upgrade TracingService to V3 protocol which no longer supports HTTP_JSON_V1 + "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true, + + # V2 protocol removed so flag no longer necessary + "envoy.reloadable_features.enable_deprecated_v2_api": true, + ``` + +4. Support for LightStep tracing driver removed + + + As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read before upgrading. + + +$productName$ 3.4 is based on Envoy 1.24.1 which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide which can be found here Distributed Tracing with OpenTelemetry and Lightstep that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.** + +## Migration Steps + +1. **Install new CRDs.** + + After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions + in previous version of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required. + + Before installing $productName$ $version$ itself, you need to update the CRDs in + your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$. + + ```bash + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $version$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. 
We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, use Helm to install $productName$ $version$. Start by
+   making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Then, update your $productName$ installation in the `$productNamespace$` namespace.
+   If necessary for your installation (e.g. if you were running with
+   `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace.
+
+   ```bash
+   helm upgrade -n $productNamespace$ \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+
+  You must use the $productHelmName$ Helm chart to install $productName$ 3.X.
+
diff --git a/docs/edge-stack/latest/topics/install/upgrade/helm/emissary-3.8/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/helm/emissary-3.8/edge-stack-3.X.md
new file mode 100644
index 000000000..488d4c0e5
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/helm/emissary-3.8/edge-stack-3.X.md
@@ -0,0 +1,241 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $OSSproductName$ $version$ to $AESproductName$ $version$ (Helm)
+
+
+  This guide covers migrating from $OSSproductName$ $version$ to $AESproductName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation originally made using Helm.
+  If you did not install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+You can upgrade from $OSSproductName$ to $AESproductName$ with a few simple commands. When you upgrade to $AESproductName$, you'll be able to access additional capabilities such as **automatic HTTPS/TLS termination, Swagger/OpenAPI support, API catalog, Single Sign-On, and more.** For more about the differences between $AESproductName$ and $OSSproductName$, see the [Editions page](/editions).
+
+## Migration Overview
+
+
+  Read the migration instructions below before making any changes to your
+  cluster!
+
+
+The recommended strategy for migration is to run $OSSproductName$ $version$ and $AESproductName$
+$version$ side-by-side in the same cluster. This gives $OSSproductName$ $version$
+and $AESproductName$ $version$ access to all the same configuration resources, with some
+important notes:
+
+1. **If needed, you can use labels to further isolate configurations.**
+
+   If you need to prevent your $AESproductName$ $version$ installation from
+   seeing a particular bit of $OSSproductName$ $version$ configuration, you can apply
+   a Kubernetes label to the configuration resources that should be seen by
+   your $AESproductName$ $version$ installation, then set its
+   `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration
+   to only the labelled resources.
+
+   For example, you could apply a `version-two: true` label to all resources
+   that should be visible to $AESproductName$ $version$, then set
+   `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment.
+
+2.
**$AESproductName$ ACME and `Filter`s will be disabled while $OSSproductName$ is still running.**
+
+   Since $AESproductName$ and $OSSproductName$ share configuration, $AESproductName$ cannot
+   configure its ACME or other filter processors without also affecting $OSSproductName$. This
+   migration process is written to simply disable these $AESproductName$ features to make
+   it simpler to roll back, if needed. Alternately, you can isolate the two configurations
+   as described above.
+
+3. **Be careful to only have one $productName$ Agent running at a time.**
+
+   The $productName$ Agent is responsible for communications between
+   $productName$ and Ambassador Cloud. If multiple versions of the Agent are
+   running simultaneously, Ambassador Cloud could see conflicting information
+   about your cluster.
+
+   The best way to avoid multiple agents when installing with Helm is to use
+   `--set emissary-ingress.agent.enabled=false` to tell Helm not to install a
+   new Agent with $productName$ $version$. Once testing is done, you can switch
+   Agents safely.
+
+4. **Be careful about label selectors on Kubernetes Services!**
+
+   If you have services in $OSSproductName$ 3.X that use selectors that will match
+   Pods from $AESproductName$ $version$, traffic will be erroneously split between
+   $OSSproductName$ 3.X and $AESproductName$ $version$. The labels used by $AESproductName$
+   $version$ include:
+
+   ```yaml
+   app.kubernetes.io/name: edge-stack
+   app.kubernetes.io/instance: edge-stack
+   app.kubernetes.io/part-of: edge-stack
+   app.kubernetes.io/managed-by: getambassador.io
+   product: aes
+   profile: main
+   ```
+
+You can also migrate by [installing $AESproductName$ $version$ in a separate cluster](../../../../migrate-to-3-alternate/).
+This permits absolute certainty that your $OSSproductName$ $version$ configuration will not be
+affected by changes meant for $AESproductName$ $version$, but it is more effort.
+
+## Side-by-Side Migration Steps
+
+Migration is a six-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml && \
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+  $AESproductName$ $version$ includes a Deployment in the `emissary-system` namespace
+  called emissary-apiext. This is the APIserver extension
+  that supports converting $OSSproductName$ CRDs between getambassador.io/v2
+  and getambassador.io/v3alpha1. This Deployment needs to be running at
+  all times.
+
+
+
+  If the emissary-apiext Deployment's Pods all stop running,
+  you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+  the emissary-apiext Deployment.
+
+
+
+  There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+ This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $AESproductName$ $version$.** + + + $productName$ requires a valid license or cloud connect token to start. You can refer to the quickstart guide + for instructions on how to obtain a free community license. Copy the cloud token command from the guide in Ambassador cloud for use below. If you already have a cloud connect token or + a valid enterprise license, then you can skip this step. + + + After installing the new CRDs, you need to install $AESproductName$ $version$ itself + **in the same namespace as your existing $OSSproductName$ $version$ installation**. It's important + to use the same namespace so that the two installations can see the same secrets, etc. + + + Make sure that you set the various `create` flags when running Helm. This prevents + $AESproductName$ $version$ from trying to configure filters that will adversely affect + $OSSproductName$ $version$. + + + Start by making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Typically, $OSSproductName$ $version$ was installed in the `emissary` namespace. If you installed + $OSSproductName$ $version$ in a different namespace, change the namespace in the commands below. + + - If you do not need to set `AMBASSADOR_LABEL_SELECTOR`: + + ```bash + helm install -n emissary \ + --set emissary-ingress.agent.enabled=false \ + edge-stack datawire/edge-stack && \ + kubectl rollout status -n emissary deployment/edge-stack -w + ``` + + - If you do need to set `AMBASSADOR_LABEL_SELECTOR`, use `--set`, for example: + + ```bash + helm install -n emissary \ + --set emissary-ingress.agent.enabled=false \ + --set emissary-ingress.env.AMBASSADOR_LABEL_SELECTOR="version-two=true" \ + edge-stack datawire/edge-stack && \ + kubectl rollout status -n emissary deployment/edge-stack -w + ``` + + + You must use the $productHelmName$ Helm chart to install $AESproductName$ $version$. + + +3. **Test!** + + Your $AESproductName$ $version$ installation should come up running with the configuration + resources used by $OSSproductName$ $version$, including `Listener`s and `Host`s. + + + If you find that your $AESproductName$ $version$ installation and your $OSSproductName$ $version$ + installation absolutely must have resources that are only seen by one version or the + other way, see overview section 1, "If needed, you can use labels to further isolate configurations". + + + **If you find that you need to roll back**, just reinstall your $OSSproductName$ $version$ CRDs + and delete your installation of $AESproductName$ $version$. + +4. **When ready, switch over to $AESproductName$ $version$.** + + You can run $OSSproductName$ $version$ and $AESproductName$ $version$ side-by-side as long as you care + to. When you're ready to have $AESproductName$ $version$ handle traffic on its own, switch + your original $OSSproductName$ $version$ Service to point to $AESproductName$ $version$. Use + `kubectl edit -n emissary service emissary-ingress` and change the `selectors` to: + + ```yaml + app.kubernetes.io/instance: edge-stack + app.kubernetes.io/name: edge-stack + profile: main + ``` + + Repeat using `kubectl edit service ambassador-admin` for the `ambassador-admin` + Service. + +5. 
**Install the $productName$ $version$ Ambassador Agent.**
+
+   First, scale the $OSSproductName$ agent to 0:
+
+   ```bash
+   kubectl scale -n emissary deployment/emissary-agent --replicas=0
+   ```
+
+   Once that's done, install the new Agent:
+
+   ```bash
+   helm upgrade -n emissary \
+     --set emissary-ingress.agent.enabled=true \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n emissary deployment/edge-stack -w
+   ```
+
+6. **Finally, enable ACME and filtering in $productName$ $version$.**
+
+   First, scale the $OSSproductName$ Deployment to 0:
+
+   ```bash
+   kubectl scale -n emissary deployment/emissary --replicas=0
+   ```
+
+   Once that's done, enable ACME and filtering in $productName$ $version$:
+
+   ```bash
+   helm upgrade -n emissary \
+     --set emissary-ingress.agent.enabled=true \
+     edge-stack datawire/edge-stack && \
+   kubectl rollout status -n emissary deployment/edge-stack -w
+   ```
+
+Congratulations! At this point, $productName$ $version$ is fully running, and
+it's safe to remove the old `emissary` and `emissary-agent` Deployments:
+
+```bash
+kubectl delete -n emissary deployment/emissary deployment/emissary-agent
+```
+
+You may also want to redirect DNS to the `edge-stack` Service and remove the
+`emissary-ingress` Service.
diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-1.14/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-1.14/edge-stack-2.X.md
new file mode 100644
index 000000000..3dee2c343
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-1.14/edge-stack-2.X.md
@@ -0,0 +1,354 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 1.14.X to $productName$ $versionTwoX$ (YAML)
+
+
+  This guide covers migrating from $productName$ 1.14.X to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+We're pleased to introduce $productName$ $versionTwoX$! The 2.X family introduces a number of
+changes to allow $productName$ to more gracefully handle larger installations (including
+multitenant or multiorganizational installations), reduce memory footprint, and improve
+performance. In keeping with [SemVer](https://semver.org), $productName$ 2.X introduces
+some changes that aren't backward-compatible with 1.X. These changes are detailed in
+[Major Changes in $productName$ 2.X](../../../../../../about/changes-2.x).
+
+## Migration Overview
+
+
+  Read the migration instructions below before making any changes to your
+  cluster!
+
+
+The recommended strategy for migration is to run $productName$ 1.14 and $productName$
+$versionTwoX$ side-by-side in the same cluster. This gives $productName$ $versionTwoX$
+and $productName$ 1.14 access to all the same configuration resources, with some
+important caveats:
+
+1. **$productName$ 1.14 will not see any `getambassador.io/v3alpha1` resources.**
+
+   This is intentional; it provides a way to apply configuration only to
+   $productName$ $versionTwoX$, while not interfering with the operation of your
+   $productName$ 1.14 installation.
+
+2.
**If needed, you can use labels to further isolate configurations.** + + If you need to prevent your $productName$ $versionTwoX$ installation from + seeing a particular bit of $productName$ 1.14 configuration, you can apply + a Kubernetes label to the configuration resources that should be seen by + your $productName$ $versionTwoX$ installation, then set its + `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration + to only the labelled resources. + + For example, you could apply a `version-two: true` label to all resources + that should be visible to $productName$ $versionTwoX$, then set + `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment. + +3. **$productName$ 1.14 must remain in control of ACME while both installations are running.** + + The processes that handle ACME challenges cannot be managed by both $productName$ + 1.X and $productName$ $versionTwoX$ at the same time. The instructions below disable ACME + in $productName$ $versionTwoX$, allowing $productName$ 1.14 to continue managing it. + + This implies that any new `Host`s used for $productName$ 1.14 should be created using + `getambassador.io/v2` so that $productName$ 1.14 can see them. + +4. **Check `AuthService` and `RateLimitService` resources, if any.** + + If you have an [`AuthService`](../../../../../using/authservice/) or + [`RateLimitService`](../../../../../running/services/rate-limit-service) installed, make + sure that they are using the [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services). + If they are not, the initial migration tests may fail. + + Additionally, when installing with Helm, you must make sure that $productName$ $versionTwoX$ + does not attempt to create duplicate `AuthService` and `RateLimitService` entries. Add + + ``` + --set rateLimit.create=false + ``` + + and + + ``` + --set authService.create=false + ``` + + on the Helm command line to prevent duplicating these resources. + +5. **Be careful to only have one $productName$ Agent running at a time.** + + The $productName$ Agent is responsible for communications between + $productName$ and Ambassador Cloud. If multiple versions of the Agent are + running simultaneously, Ambassador Cloud could see conflicting information + about your cluster. + + The migration YAML used below to install $productName$ $versionTwoX$ will not + install a duplicate agent. If you are building your own YAML, make sure not + to include a duplicate agent. + +6. **If you use ACME for multiple `Host`s, add a wildcard `Host` too.** + + This is required to manage a known issue. This issue will be resolved in a future + $AESproductName$ release. + +7. **Be careful about label selectors on Kubernetes Services!** + + If you have services in $productName$ 1.14 that use selectors that will match + Pods from $productName$ $versionTwoX$, traffic will be erroneously split between + $productName$ 1.14 and $productName$ $versionTwoX$. The labels used by $productName$ + $versionTwoX$ include: + + ```yaml + app.kubernetes.io/name: edge-stack + app.kubernetes.io/instance: edge-stack + app.kubernetes.io/part-of: edge-stack + app.kubernetes.io/managed-by: getambassador.io + product: aes + profile: main + ``` + +You can also migrate by [installing $productName$ $versionTwoX$ in a separate cluster](../../../../migrate-to-2-alternate). 
+This permits absolute certainty that your $productName$ 1.14 configuration will not be +affected by changes meant for $productName$ $versionTwoX$, and it eliminates concerns about +ACME, but it is more effort. + +## Side-by-Side Migration Steps + +Migration is an eight-step process: + +1. **Make sure that older configuration resources are not present.** + + $productName$ 2.X does not support `getambassador.io/v0` or `getambassador.io/v1` + resources, and Kubernetes will not permit removing support for CRD versions that are + still in use for stored resources. To verify that no resources older than + `getambassador.io/v2` are active, run + + ``` + kubectl get crds -o 'go-template={{range .items}}{{.metadata.name}}={{.status.storedVersions}}{{"\n"}}{{end}}' | fgrep getambassador.io + ``` + + If `v1` is present in the output, **do not begin migration.** The old resources must be + converted to `getambassador.io/v2` and the `storedVersion` information in the cluster + must be updated. If necessary, contact Ambassador Labs on [Slack](http://a8r.io/slack) + for more information. + +2. **Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you must configure your + Kubernetes cluster to support its new `getambassador.io/v3alpha1` configuration + resources. Note that `getambassador.io/v2` resources are still supported, but **you + must install support for `getambassador.io/v3alpha1`** to run $productName$ $versionTwoX$, + even if you intend to continue using only `getambassador.io/v2` resources for some + time. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml && \ + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +3. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, you need to install $productName$ $versionTwoX$ itself + **in the same namespace as your existing $productName$ 1.14 installation**. It's important + to use the same namespace so that the two installations can see the same secrets, etc. + + We publish three manifests for different namespaces. 
Use only the one that
+   matches the namespace into which you installed $productName$ 1.14:
+
+   - [`aes-emissaryns-migration.yaml`] for the `emissary` namespace;
+   - [`aes-defaultns-migration.yaml`] for the `default` namespace; and
+   - [`aes-ambassadorns-migration.yaml`] for the `ambassador` namespace.
+
+   All three files are set up as follows:
+
+   - They set the `AES_ACME_LEADER_DISABLE` environment variable to prevent $productName$ $versionTwoX$
+     from trying to manage ACME (leaving $productName$ 1.14 to do it instead).
+   - They do NOT set `AMBASSADOR_LABEL_SELECTOR`.
+   - They do NOT install the Ambassador Agent.
+   - They do NOT create an `AuthService` or a `RateLimitService`. It is very important that $productName$
+     $versionTwoX$ not attempt to create these resources, as they are already provided for your $productName$
+     1.14 installation.
+
+   If any of these do not match your situation, download [`aes-ambassadorns-migration.yaml`] and edit it
+   as needed.
+
+   [`aes-emissaryns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-emissaryns-migration.yaml
+   [`aes-defaultns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-defaultns-migration.yaml
+   [`aes-ambassadorns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-ambassadorns-migration.yaml
+
+   Assuming you're using the `ambassador` namespace, as was typical for $productName$ 1.14:
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-ambassadorns-migration.yaml && \
+   kubectl rollout status -n ambassador deployment/aes -w
+   ```
+
+   
+   Make sure that at most one installation of $productName$ is running without setting
+   the AES_ACME_LEADER_DISABLE flag. This prevents collisions in trying to manage
+   ACME.
+   
+
+4. **Install `Listener`s and `Host`s as needed.**
+
+   An important difference between $productName$ 1.14 and $productName$ $versionTwoX$ is the
+   new **mandatory** `Listener` CRD. Also, when running both installations side by side,
+   you will need to make sure that a `Host` is present for the new $productName$ $versionTwoX$
+   Service. For example:
+
+   ```bash
+   kubectl apply -f - <<EOF
+   apiVersion: getambassador.io/v2
+   kind: Host
+   metadata:
+     name: emissary-host
+   spec:
+     hostname: $EMISSARY_HOSTNAME
+     acmeProvider:
+       authority: none
+     tlsSecret:
+       name: $EMISSARY_TLS_SECRET
+   EOF
+   ```
+
+   
+   Make sure that any Hosts you create use API version getambassador.io/v2,
+   so that they can be managed by $productName$ 1.14 as long as both installations are running.
+   
+
+   This example requires that you know the hostname for the $productName$ Service (`$EMISSARY_HOSTNAME`)
+   and that you have created a TLS Secret for it in `$EMISSARY_TLS_SECRET`.
+
+5. **Test!**
+
+   Your $productName$ $versionTwoX$ installation can support the `getambassador.io/v2`
+   configuration resources used by $productName$ 1.14, but you may need to make some
+   changes to the configuration, as detailed in the documentation on
+   [configuring $productName$ Communications](../../../../../../howtos/configure-communications)
+   and [updating CRDs to `getambassador.io/v3alpha1`](../../../../convert-to-v3alpha1).
+
+   
+   Kubernetes will not allow you to have a getambassador.io/v3alpha1 resource
+   with the same name as a getambassador.io/v2 resource or vice versa: only
+   one version can be stored at a time.
+
+
+   If you find that your $productName$ $versionTwoX$ installation and your $productName$ 1.14
+   installation absolutely must have resources that are seen by only one version or the
+   other, see overview section 2, "If needed, you can use labels to further isolate configurations".
+
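+   For example, a minimal sketch of that label-based isolation (this uses the `version-two: true`
+   label from the overview; labeling only `Mapping`s here is illustrative, not required):
+
+   ```bash
+   # Label the Mappings that $productName$ $versionTwoX$ should see...
+   kubectl label mappings --all -n ambassador version-two=true
+
+   # ...then restrict $productName$ $versionTwoX$ to the labelled resources.
+   kubectl set env -n ambassador deployment/aes AMBASSADOR_LABEL_SELECTOR=version-two=true
+   ```
+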
+
+   **If you find that you need to roll back**, just reinstall your 1.14 CRDs, delete your
+   installation of $productName$ $versionTwoX$, and delete the `emissary-system` namespace.
+
+6. **When ready, switch over to $productName$ $versionTwoX$.**
+
+   You can run $productName$ 1.14 and $productName$ $versionTwoX$ side-by-side as long as you care
+   to. However, taking full advantage of $productName$ 2.X's capabilities **requires**
+   [updating your configuration to use `getambassador.io/v3alpha1` configuration resources](../../../../convert-to-v3alpha1),
+   since some useful features in $productName$ $versionTwoX$ are only available using
+   `getambassador.io/v3alpha1` resources.
+
+   When you're ready to have $productName$ $versionTwoX$ handle traffic on its own, switch
+   your original $productName$ 1.14 Service to point to $productName$ $versionTwoX$. Use
+   `kubectl edit -n ambassador service ambassador` and change the `selector` to:
+
+   ```
+   app.kubernetes.io/instance: edge-stack
+   app.kubernetes.io/name: edge-stack
+   profile: main
+   ```
+
+   Repeat using `kubectl edit -n ambassador service ambassador-admin` for the `ambassador-admin`
+   Service.
+
+7. **Install the $productName$ $versionTwoX$ Ambassador Agent.**
+
+   First, scale the 1.14 agent to 0:
+
+   ```
+   kubectl scale -n ambassador deployment/ambassador-agent --replicas=0
+   ```
+
+   Once that's done, install the new Agent:
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-ambassadorns-agent.yaml && \
+   kubectl rollout status -n ambassador deployment/edge-stack-agent -w
+   ```
+
+8. **Finally, enable ACME in $productName$ $versionTwoX$.**
+
+   First, scale the 1.14 Ambassador to 0:
+
+   ```
+   kubectl scale -n ambassador deployment/ambassador --replicas=0
+   ```
+
+   Once that's done, enable ACME in $productName$ $versionTwoX$:
+
+   ```bash
+   kubectl set env -n ambassador deployment/aes AES_ACME_LEADER_DISABLE-
+   kubectl rollout status -n ambassador deployment/aes -w
+   ```
+
+Congratulations! At this point, $productName$ $versionTwoX$ is fully running, and
+it's safe to remove the `ambassador` and `ambassador-agent` Deployments:
+
+```
+kubectl delete -n ambassador deployment/ambassador deployment/ambassador-agent
+```
+
+Once $productName$ 1.14 is no longer running, you may [convert](../../../../convert-to-v3alpha1)
+any remaining `getambassador.io/v2` resources to `getambassador.io/v3alpha1`.
+You may also want to redirect DNS to the `edge-stack` Service and remove the
+`ambassador` Service.
diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.0/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.0/edge-stack-2.X.md
new file mode 100644
index 000000000..23723ab84
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.0/edge-stack-2.X.md
@@ -0,0 +1,78 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.0.5 to $productName$ $versionTwoX$ (YAML)
+
+
+  This guide covers migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+
+  Upgrading from $productName$ 2.0.5 to $productName$ $versionTwoX$ typically requires downtime.
+  In some situations, Ambassador Labs Support may be able to assist with a zero-downtime migration;
+  contact support with questions.
+
+
+Migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$ is a three-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+   
+   $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+   
+
+   
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+   
+
+   
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+   
+
+2. **Delete $productName$ 2.0.5 Deployment.**
+
+   
+   Delete only the Deployment for $productName$ 2.0.5 in order to preserve all of
+   your existing configuration.
+   
+
+   Use `kubectl` to delete the Deployment for $productName$ 2.0.5. Typically, this will be found
+   in the `ambassador` namespace.
+
+   ```
+   kubectl delete -n ambassador deployment edge-stack
+   ```
+
+3. **Install $productName$ $versionTwoX$.**
+
+   After installing the new CRDs, use `kubectl` to install $productName$ $versionTwoX$. This will install
+   in the `$productNamespace$` namespace. If necessary for your installation (e.g. if you were
+   running with `AMBASSADOR_SINGLE_NAMESPACE` set), you can download `aes.yaml` and edit as
+   needed.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes.yaml && \
+   kubectl rollout status -n $productNamespace$ deployment/edge-stack -w
+   ```
diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.4/edge-stack-2.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.4/edge-stack-2.X.md
new file mode 100644
index 000000000..d8bde7af6
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.4/edge-stack-2.X.md
@@ -0,0 +1,60 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.4.X to $productName$ $versionTwoX$ (YAML)
+
+
+  This guide covers migrating from $productName$ 2.4 to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+ If you originally installed with Helm, see the Helm-based + upgrade instructions. + + +Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor +versions is straightforward. + +## Migration Steps + +Migration is a two-step process: + +1. **Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in + your cluster. This is mandatory during any upgrade of $productName$. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, upgrade $productName$ $versionTwoX$: + + ```bash + kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$versionTwoX$/aes.yaml && \ + kubectl rollout status -n $productNamespace$ deployment/edge-stack -w + ``` diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.5/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.5/edge-stack-3.X.md new file mode 100644 index 000000000..685378bab --- /dev/null +++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-2.5/edge-stack-3.X.md @@ -0,0 +1,127 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 2.5.Z to $productName$ $version$ (YAML) + + + This guide covers migrating from $productName$ 2.5.Z to $productName$ $version$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation made without using Helm. + If you originally installed with Helm, see the Helm-based + upgrade instructions. + + + + Make sure that you have converted your External Filters to `protocol_version: "v3"` before upgrading. + If not set or set to `v2` then an error will be posted and a static response will be returned in $productName$ 3.Y. + + +Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between +versions is straightforward. 
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are
+some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we
+outline some of these changes and things to consider when upgrading.
+
+### Resources to check before migrating to $version$.
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.24.1. Envoy has removed support for the
+V2 Transport Protocol, which is no longer valid. This means all `AuthService`, `RateLimitService`, and
+`LogService` resources, as well as External `Filter`s, must be updated to use the V3 Protocol.
+Additionally, support for some of the runtime bootstrap flags has been removed.
+
+You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes. Here are a few items that need to be checked or addressed before upgrading:
+
+1. Check Transport Protocol usage on all resources before migrating.
+
+   The `AuthService`, `RateLimitService`, and `LogService` resources and the External `Filter`s that use
+   the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If it is not set, or is
+   set to `v2`, then an error will be posted and a static response will be returned.
+
+   `protocol_version` should be updated to `v3` for all of the above resources while still running
+   $productName$ $versionTwoX$. As of version `2.3.z`+, both `protocol_version` `v2` and `v3` are
+   supported, in order to allow migrating from `v2` to `v3` before upgrading to $productName$ $version$,
+   where support for `v2` is removed.
+
+   Upgrading any application code for your own implementations of these services is very straightforward.
+
+   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`,
+   and then the configuration for these resources can be updated to add `protocol_version: "v3"` when the
+   updated service is deployed.
+
+   `v2` Imports:
+   ```golang
+   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
+   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+   ```
+
+   `v3` Imports:
+   ```golang
+   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
+   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+   ```
+
+2. Check removed runtime flags for behavior changes that may affect you:
+
+   ```yaml
+   # No longer necessary because this was removed from Envoy.
+   # $productName$ was already converted to use the compressor API:
+   # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
+   "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,
+
+   # Upgraded to v3; all support for the V2 Transport Protocol has been removed.
+   "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
+   "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,
+
+   # Developers will need to upgrade TracingService to the V3 protocol, which no longer supports HTTP_JSON_V1.
+   "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
+
+   # V2 protocol removed, so this flag is no longer necessary.
+   "envoy.reloadable_features.enable_deprecated_v2_api": true,
+   ```
+
+3. Support for LightStep tracing driver removed
+
+   
+   As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read below before upgrading.
+   
+
+$productName$ 3.4 is based on Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+   
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+   
+
+   
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+   
+
+   
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+   
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, upgrade $productName$ $version$:
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes.yaml && \
+   kubectl rollout status -n $productNamespace$ deployment/edge-stack -w
+   ```
diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-3.4/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-3.4/edge-stack-3.X.md
new file mode 100644
index 000000000..d48526638
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/edge-stack-3.4/edge-stack-3.X.md
@@ -0,0 +1,71 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 3.4.Z to $productName$ $version$ (YAML)
+
+
+  This guide covers migrating from $productName$ 3.4.Z to $productName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between
+versions is straightforward.
+
+### Resources to check before migrating to $version$.
+
+
+  As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read below before upgrading.
+
+
+$productName$ 3.4 has been upgraded from Envoy 1.23 to Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+   
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+   
+
+   
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+   
+
+   
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew.
All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+   
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, upgrade $productName$ $version$:
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes.yaml && \
+   kubectl rollout status -n $productNamespace$ deployment/edge-stack -w
+   ```
diff --git a/docs/edge-stack/latest/topics/install/upgrade/yaml/emissary-3.8/edge-stack-3.X.md b/docs/edge-stack/latest/topics/install/upgrade/yaml/emissary-3.8/edge-stack-3.X.md
new file mode 100644
index 000000000..00c66d937
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/upgrade/yaml/emissary-3.8/edge-stack-3.X.md
@@ -0,0 +1,252 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $OSSproductName$ $version$ to $AESproductName$ $version$ (YAML)
+
+
+  This guide covers migrating from $OSSproductName$ $version$ to $AESproductName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+You can upgrade from $OSSproductName$ to $AESproductName$ with a few simple commands. When you upgrade to $AESproductName$, you'll be able to access additional capabilities such as **automatic HTTPS/TLS termination, Swagger/OpenAPI support, API catalog, Single Sign-On, and more.** For more about the differences between $AESproductName$ and $OSSproductName$, see the [Editions page](/editions).
+
+## Migration Overview
+
+
+  Read the migration instructions below before making any changes to your
+  cluster!
+
+
+The recommended strategy for migration is to run $OSSproductName$ $version$ and $AESproductName$
+$version$ side-by-side in the same cluster. This gives $OSSproductName$ $version$
+and $AESproductName$ $version$ access to all the same configuration resources, with some
+important notes:
+
+1. **If needed, you can use labels to further isolate configurations.**
+
+   If you need to prevent your $AESproductName$ $version$ installation from
+   seeing a particular bit of $OSSproductName$ $version$ configuration, you can apply
+   a Kubernetes label to the configuration resources that should be seen by
+   your $AESproductName$ $version$ installation, then set its
+   `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration
+   to only the labelled resources.
+
+   For example, you could apply a `version-two: true` label to all resources
+   that should be visible to $AESproductName$ $version$, then set
+   `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment.
+
+2. **$AESproductName$ ACME and `Filter`s will be disabled while $OSSproductName$ is still running.**
+
+   Since $AESproductName$ and $OSSproductName$ share configuration, $AESproductName$ cannot
+   configure its ACME or other filter processors without also affecting $OSSproductName$. This
+   migration process is written to simply disable these $AESproductName$ features to make
+   it simpler to roll back, if needed. Alternately, you can isolate the two configurations
+   as described above.
+
+3. **Be careful to only have one $productName$ Agent running at a time.**
+
+   The $productName$ Agent is responsible for communications between
+   $productName$ and Ambassador Cloud. If multiple versions of the Agent are
+   running simultaneously, Ambassador Cloud could see conflicting information
+   about your cluster.
+
+   The best way to avoid multiple agents when installing with Helm is to use
+   `--set emissary-ingress.agent.enabled=false` to tell Helm not to install a
+   new Agent with $productName$ $version$. Once testing is done, you can switch
+   Agents safely.
+
+4. **Be careful about label selectors on Kubernetes Services!**
+
+   If you have services in $OSSproductName$ 3.X that use selectors that will match
+   Pods from $AESproductName$ $version$, traffic will be erroneously split between
+   $OSSproductName$ $version$ and $AESproductName$ $version$. The labels used by $AESproductName$
+   $version$ include:
+
+   ```yaml
+   app.kubernetes.io/name: edge-stack
+   app.kubernetes.io/instance: edge-stack
+   app.kubernetes.io/part-of: edge-stack
+   app.kubernetes.io/managed-by: getambassador.io
+   product: aes
+   profile: main
+   ```
+
+You can also migrate by [installing $AESproductName$ $version$ in a separate cluster](../../../../migrate-to-3-alternate/).
+This permits absolute certainty that your $OSSproductName$ $version$ configuration will not be
+affected by changes meant for $AESproductName$ $version$, but it is more effort.
+
+## Side-by-Side Migration Steps
+
+Migration is a six-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml && \
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+   
+   $AESproductName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $OSSproductName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+   
+
+   
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+   
+
+   
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+   
+
+2. **Install $AESproductName$ $version$.**
+
+   After installing the new CRDs, you need to install $AESproductName$ $version$ itself
+   **in the same namespace as your existing $OSSproductName$ $version$ installation**. It's important
+   to use the same namespace so that the two installations can see the same secrets, etc.
+
+   We publish three manifests for different namespaces. Use only the one that
+   matches the namespace into which you installed $OSSproductName$ $version$:
+
+   - [`aes-emissaryns-migration.yaml`] for the `emissary` namespace;
+   - [`aes-defaultns-migration.yaml`] for the `default` namespace; and
+   - [`aes-ambassadorns-migration.yaml`] for the `ambassador` namespace.
+
+   All three files are set up as follows:
+
+   - They set the `AES_ACME_LEADER_DISABLE` environment variable; you'll enable ACME towards the end of
+     the migration.
+   - They do NOT create an `AuthService` or a `RateLimitService`, since your $OSSproductName$ may have
+     these defined. Again, you'll manage these at the end of migration.
+   - They do NOT set `AMBASSADOR_LABEL_SELECTOR`.
+   - They do NOT install the Ambassador Agent, since there is already an Ambassador Agent running for
+     $OSSproductName$ $version$.
+
+   If any of these do not match your situation, download [`aes-emissaryns-migration.yaml`] and edit it
+   as needed.
+
+   [`aes-emissaryns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$version$/aes-emissaryns-migration.yaml
+   [`aes-defaultns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$version$/aes-defaultns-migration.yaml
+   [`aes-ambassadorns-migration.yaml`]: https://app.getambassador.io/yaml/edge-stack/$version$/aes-ambassadorns-migration.yaml
+
+   Assuming you're using the `emissary` namespace, as was typical for $OSSproductName$ $version$:
+
+   **If you need to set `AMBASSADOR_LABEL_SELECTOR`**, download `aes-emissaryns-migration.yaml` and edit it to
+   do so.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-emissaryns-migration.yaml && \
+   kubectl rollout status -n emissary deployment/aes -w
+   ```
+
+3. **Test!**
+
+   Your $AESproductName$ $version$ installation should come up running with the configuration
+   resources used by $OSSproductName$ $version$, including `Listener`s and `Host`s.
+
+   
+   If you find that your $AESproductName$ $version$ installation and your $OSSproductName$ $version$
+   installation absolutely must have resources that are seen by only one version or the
+   other, see overview section 1, "If needed, you can use labels to further isolate configurations".
+   
+
+   **If you find that you need to roll back**, just reinstall your $OSSproductName$ $version$ CRDs
+   and delete your installation of $AESproductName$ $version$.
+
+4. **When ready, switch over to $AESproductName$ $version$.**
+
+   You can run $OSSproductName$ $version$ and $AESproductName$ $version$ side-by-side as long as you care
+   to. When you're ready to have $AESproductName$ $version$ handle traffic on its own, switch
+   your original $OSSproductName$ $version$ Service to point to $AESproductName$ $version$. Use
+   `kubectl edit -n emissary service emissary-ingress` and change the `selector` to:
+
+   ```yaml
+   app.kubernetes.io/instance: edge-stack
+   app.kubernetes.io/name: edge-stack
+   profile: main
+   ```
+
+   Repeat using `kubectl edit -n emissary service emissary-ingress-admin` for the `emissary-ingress-admin`
+   Service.
+
+5. **Install the $productName$ $version$ Ambassador Agent.**
+
+   First, scale the $OSSproductName$ agent to 0:
+
+   ```
+   kubectl scale -n emissary deployment/emissary-ingress-agent --replicas=0
+   ```
+
+   Once that's done, install the new Agent:
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-emissaryns-agent.yaml && \
+   kubectl rollout status -n emissary deployment/edge-stack-agent -w
+   ```
+
+6. **Finally, enable ACME and filtering in $productName$ $version$.**
+
+   
+   Enabling filtering correctly in $productName$ $version$ requires that no
+   AuthService or RateLimitService resources be present; see
+   below for more.
+   
+
+   First, make sure that no `AuthService` or `RateLimitService` resources are present; delete
+   these if necessary.
+
+   - If you are currently using an external authentication service that provides functionality
+     you'll still require, turn it into an [`External` `Filter`] (with a [`FilterPolicy`] to
+     direct requests that need it correctly).
+
+   - If you are currently using a `RateLimitService`, you can set up
+     [Edge Stack Rate Limiting] instead.
+
+   [`External` `Filter`]: ../../../../../../howtos/ext-filters#2-configure-aesproductname-authentication
+   [`FilterPolicy`]: ../../../../../../howtos/ext-filters#2-configure-aesproductname-authentication
+   [Edge Stack Rate Limiting]: ../../../../../using/rate-limits
+
+   After making sure no `AuthService` or `RateLimitService` resources are present, scale the
+   $OSSproductName$ Deployment to 0:
+
+   ```bash
+   kubectl scale -n emissary deployment/emissary-ingress --replicas=0
+   ```
+
+   Once that's done, apply resources specific to $AESproductName$:
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/resources-migration.yaml
+   ```
+
+   Then, finally, enable ACME and filtering in $productName$ $version$:
+
+   ```bash
+   kubectl set env -n emissary deployment/aes AES_ACME_LEADER_DISABLE-
+   kubectl rollout status -n emissary deployment/aes -w
+   ```
+
+Congratulations! At this point, $productName$ $version$ is fully running, and
+it's safe to remove the old `emissary-ingress` and `emissary-ingress-agent` Deployments:
+
+```
+kubectl delete -n emissary deployment/emissary-ingress deployment/emissary-ingress-agent
+```
+
+You may also want to redirect DNS to the `edge-stack` Service and remove the
+`emissary-ingress` Service.
diff --git a/docs/edge-stack/latest/topics/install/yaml-install.md b/docs/edge-stack/latest/topics/install/yaml-install.md
new file mode 100644
index 000000000..da4b2d916
--- /dev/null
+++ b/docs/edge-stack/latest/topics/install/yaml-install.md
@@ -0,0 +1,93 @@
+---
+ description: In this guide, we'll walk through the process of deploying $productName$ in Kubernetes for ingress routing.
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Install manually
+
+
+  To migrate from $productName$ 1.X to $productName$ 2.X, see the
+  [$productName$ migration matrix](../migration-matrix/). This guide
+  **will not work** for that, due to changes to the configuration
+  resources used for $productName$ 2.X.
+
+
+In this guide, we'll walk you through installing $productName$ in your Kubernetes cluster.
+
+The manual install process does not allow for as much control over configuration
+as the [Helm install method](../helm), so if you need more control over your $productName$
+installation, it is recommended that you use Helm.
+
+## Before you begin
+
+
+  $productName$ requires a valid license or cloud connect token to start.
You can refer to the quickstart guide
+  for instructions on how to obtain a free community license. Copy the cloud connect token command from
+  the guide in Ambassador Cloud for use below. If you already have a cloud connect token or
+  a valid enterprise license, then you can skip this step.
+
+
+$productName$ is designed to run in Kubernetes for production. The most essential requirements are:
+
+* Kubernetes 1.11 or later
+* The `kubectl` command-line tool
+
+## Install with YAML
+
+$productName$ is typically deployed to Kubernetes from the command line. If you don't have Kubernetes, you should use our [Docker](../docker) image to deploy $productName$ locally.
+
+1. In your terminal, run the following command:
+
+   ```
+   kubectl create namespace $productNamespace$ || true
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml && \
+   kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes.yaml && \
+   kubectl -n $productNamespace$ wait --for condition=available --timeout=90s deploy $productDeploymentName$
+   ```
+
+   
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+   
+
+   
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+   
+
+   
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$OSSproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+   
+
+2. Determine the IP address or hostname of your cluster by running the following command:
+
+   ```
+   kubectl get -n $productNamespace$ service $productDeploymentName$ -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}"
+   ```
+
+   Your load balancer may take several minutes to provision your IP address. Repeat the provided command until you get an IP address.
+
+3. Next Steps
+
+   $productName$ should now be successfully installed and running, but in order to get started deploying Services and test routing to them you need to configure a few more resources.
+
+   - [The `Listener` Resource](../../running/listener/) is required to configure which ports the $productName$ pods listen on so that they can begin responding to requests.
+   - [The `Mapping` Resource](../../using/intro-mappings/) is used to configure routing requests to services in your cluster.
+   - [The `Host` Resource](../../running/host-crd/) configures TLS termination for enabling HTTPS communication.
+ - Explore how $productName$ [configures communication with clients](../../../howtos/configure-communications) + + + We strongly recommend following along with our Quickstart Guide to get started by creating a Listener, deploying a simple service to test with, and setting up a Mapping to route requests from $productName$ to the demo service. + + +## Upgrading an existing installation + +See the [migration matrix](../migration-matrix) for instructions about upgrading +$productName$. diff --git a/docs/edge-stack/latest/topics/running/aes-extensions/authentication.md b/docs/edge-stack/latest/topics/running/aes-extensions/authentication.md new file mode 100644 index 000000000..79b005a66 --- /dev/null +++ b/docs/edge-stack/latest/topics/running/aes-extensions/authentication.md @@ -0,0 +1,78 @@ +import Alert from '@material-ui/lab/Alert'; + +# Authentication extension + +Edge Stack ships with an authentication service that is enabled +to perform OAuth, JWT validation, and custom authentication schemes. It can +perform different authentication schemes on different requests allowing you to +enforce authentication as your application needs. + +The Filter and FilterPolicy resources are used to [configure how to do authentication](../../../using/filters). This doc focuses on how to deploy and manage the authentication extension. + +## Edge Stack configuration + +Edge Stack uses the [AuthService plugin](../../services/auth-service) +to connect to the authentication extension. + +The default AuthService is named `ambassador-edge-stack-auth` and is defined +as: + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: AuthService +metadata: + name: ambassador-edge-stack-auth + namespace: ambassador +spec: + auth_service: 127.0.0.1:8500 + proto: grpc + status_on_error: + code: 503 + allow_request_body: false +``` + +This configures Envoy to talk to the extension process running on port 8500 +using gRPC and trim the body from the request when doing so. The default error +code of 503 is usually overwritten by the Filter that is authenticating the +request. + +This default AuthService works for most use cases. If you need to +tune how Edge Stack connects to the authentication extension (like changing the +default timeout), you can find the full configuration options in the +[AuthService plugin docs](../../services/auth-service). + +## Authentication extension configuration + +Certain use cases may require some tuning of the authentication extension. +Configuration of this extension is managed via environment variables. +[The Ambassador container](../../environment) has a full list of environment +variables available for configuration, including the variables used by the +authentication extension. + +#### Redis + +The authentication extension uses Redis for caching the response from the +`token endpoint` when performing OAuth. + +Edge Stack shares the same Redis pool for all features that use Redis. More information is available for [tuning Redis](../../aes-redis) if needed. + +#### Timeout variables + +The `AES_AUTH_TIMEOUT` environment variable configures the default timeout in +the authentication extension. + +This timeout is necessary so that any error responses configured by Filters +that the extension runs make their way to the client. Otherwise they would be +overruled by the timeout from Envoy if a request takes longer than five seconds. 
+ +If you have a long chain of Filters or a Filter that takes five or more seconds to respond, +you can increase the timeout value to give your Filters enough time to run. + + +The timeout_ms of the ambassador-edge-stack-auth AuthService defaults +to a value of 5000 (five seconds). You will need to adjust this as well. +
+AES_AUTH_TIMEOUT should always be around one second shorter than the timeout_ms of the AuthService to ensure Filter error responses make it to the client. +
+The External Filter also has a timeout_ms field that must be set if a single Filter will take longer than five seconds.
+
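+For example, a minimal sketch for a Filter chain that needs roughly ten seconds end-to-end: raise
+`timeout_ms` on the AuthService and keep `AES_AUTH_TIMEOUT` about one second shorter, as described above.
+The specific values here are illustrative, not defaults.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: AuthService
+metadata:
+  name: ambassador-edge-stack-auth
+  namespace: ambassador
+spec:
+  auth_service: 127.0.0.1:8500
+  proto: grpc
+  status_on_error:
+    code: 503
+  allow_request_body: false
+  timeout_ms: 11000  # Envoy-level budget for the whole Filter chain (illustrative)
+```
+
+```yaml
+# In the $productName$ Deployment's environment. The duration-string format is an
+# assumption; check the Ambassador container docs for the exact accepted values.
+env:
+- name: AES_AUTH_TIMEOUT
+  value: "10s"  # about one second shorter than timeout_ms above
+```
+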
diff --git a/docs/edge-stack/latest/topics/running/aes-extensions/index.md b/docs/edge-stack/latest/topics/running/aes-extensions/index.md
new file mode 100644
index 000000000..df71fcad6
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/aes-extensions/index.md
@@ -0,0 +1,33 @@
+# Ambassador Edge Stack extensions
+
+The Ambassador Edge Stack contains a number of pre-built extensions that make
+running, deploying, and exposing your applications in Kubernetes easier.
+
+Use of AES extensions is implemented via Kubernetes Custom Resources.
+Documentation for how to use the various extensions can be found throughout the
+[Using AES for Developers](../../using/) section of the docs. This section
+is concerned with how to operate and tune deployment of these extensions in AES.
+
+## Redis
+
+Since AES does not use a database, Redis is used for caching state information
+when an extension requires it.
+
+The Ambassador Edge Stack shares the same Redis pool for all features that use
+Redis.
+
+The [Redis documentation](../aes-redis) contains detailed information on tuning
+how AES talks to Redis.
+
+## The Extension process
+
+The various extensions of the Ambassador Edge Stack run as a separate process
+from the Ambassador control plane and Envoy data plane.
+
+### `AES_LOG_LEVEL`
+
+The `AES_LOG_LEVEL` environment variable controls the logging of all of the extensions in AES.
+
+Log level names are case-insensitive. From least verbose to most
+verbose, valid log levels are `error`, `warn`/`warning`, `info`,
+`debug`, and `trace`.
diff --git a/docs/edge-stack/latest/topics/running/aes-extensions/ratelimit.md b/docs/edge-stack/latest/topics/running/aes-extensions/ratelimit.md
new file mode 100644
index 000000000..3a758a52f
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/aes-extensions/ratelimit.md
@@ -0,0 +1,84 @@
+# Rate limiting extension
+
+The Ambassador Edge Stack ships with a rate limiting service that is enabled
+to perform advanced rate limiting out of the box.
+
+Configuration of the `Mapping` and `RateLimit` resources that control **how**
+to rate limit requests can be found in the
+[Rate Limiting](../../../using/rate-limits) section of the documentation.
+
+This document focuses on how to deploy and manage the rate limiting extension.
+
+## Ambassador configuration
+
+Ambassador uses the [`RateLimitService` plugin](../../services/rate-limit-service)
+to connect to the rate limiting extension in the Ambassador Edge Stack.
+
+The default `RateLimitService` is named `ambassador-edge-stack-ratelimit` and is
+defined as:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: RateLimitService
+metadata:
+  name: ambassador-edge-stack-ratelimit
+  namespace: ambassador
+spec:
+  service: 127.0.0.1:8500
+```
+
+This configures Envoy to send requests that are labeled for rate limiting to the
+extension process running on port 8500. The rate limiting extension will then
+use that request to count against any `RateLimit` whose pattern matches the
+request labels.
+
+## Rate limiting extension configuration
+
+Certain use cases may require some tuning of the rate limiting extension.
+Configuration of this extension is managed via environment variables.
+[The Ambassador Container](../../environment) has a full list of environment
+variables available for configuration. This document highlights the ones used
+by the rate limiting extension.
+
+#### Redis
+
+The rate limiting extension relies heavily on Redis for writing and reading
+counters for the different `RateLimit` patterns.
+ +The Ambassador Edge Stack shares the same Redis pool for all features that use +Redis. + +See the [Redis documentation](../../aes-redis) for information on Redis tuning. + +### REDIS_PERSECOND + +If `REDIS_PERSECOND` is true, a second Redis connection pool is created (to a +potentially different Redis instance) that is only used for per-second +RateLimits; this second connection pool is configured by the `REDIS_PERSECOND_*` +variables rather than the usual `REDIS_*` variables. + +#### `AES_RATELIMIT_PREVIEW` + +Set `AES_RATELIMIT_PREVIEW` to `true` to access support for redis clustering, +local caching, and an upgraded redis client with improved scalability in +preview mode. + +#### `LOCAL_CACHE_SIZE_IN_BYTES` + +* Only available if `AES_RATELIMIT_PREVIEW: "true`. + +The AES rate limit extension can optionally cache over-the-limit keys so it does +not need to read the redis cache again for requests with labels that are already +over the limit. + +Setting `LOCAL_CACHE_SIZE_IN_BYTES` to a non-zero value with enable local +caching. + +#### `NEAR_LIMIT_RATIO` + +* Only available if `AES_RATELIMIT_PREVIEW: "true"` + +Adjusts the ratio used by the `near_limit` statistic for tracking requests that +are "near the limit". + +Defaults to `0.8` (80%) of the limit defined in the `RateLimit` rule. diff --git a/docs/edge-stack/latest/topics/running/aes-redis.md b/docs/edge-stack/latest/topics/running/aes-redis.md new file mode 100644 index 000000000..22c70125a --- /dev/null +++ b/docs/edge-stack/latest/topics/running/aes-redis.md @@ -0,0 +1,247 @@ +import Alert from '@material-ui/lab/Alert'; + +# Edge Stack and Redis + +The Ambassador Edge Stack make use of Redis for several purposes. By default, +all components of the Ambassador Edge Stack share a Redis connection pool. + +## Rate Limit Service + +The rate limiting service, can be configured to use different connection pools +for handling per-second rate limits or connecting to Redis clusters. + +### AES_RATELIMIT_PREVIEW + +Set `AES_RATELIMIT_PREVIEW` to `true` to access support for redis clustering, +local caching, and an upgraded redis client with improved scalability in +preview mode. + +### REDIS_PERSECOND + +If `REDIS_PERSECOND` is true, a second Redis connection pool is created (to a +potentially different Redis instance) that is only used for per-second +RateLimits; this second connection pool is configured by the `REDIS_PERSECOND_*` +variables rather than the usual `REDIS_*` variables. + +## Redis layer 4 connectivity (L4) + +#### `SOCKET_TYPE` + +The Go network to use to talk to Redis. see [Go net.Dial](https://golang.org/src/net/dial.go) + +Most users will leave this as the default of `tcp`. + +#### `URL` + +The URL to dial to talk to Redis + +This will be either a hostname:port pair or a comma separated list of +hostname:port pairs depending on the [TYPE](#redis-type) you are using. + +For `REDIS_URL` (but not `REDIS_PERSECOND_URL`), not setting a value disables +Ambassador Edge Stack features that require Redis. + +#### `TLS_ENABLED` + +Specifies whether to use TLS when talking to Redis. + +#### `TLS_INSECURE` + +Specifies whether to skip certificate verification when +using TLS to talk to Redis. + +Consider [installing the self-signed certificate for your Redis in to the +Ambassador Edge Stack container](../../using/filters/#installing-self-signed-certificates) +in order to leave certificate verification on. + +## Redis authentication (auth) + +**Default** + +Configure authentication to a redis pool using the default implementation. 
+ +#### `PASSWORD` + +If set, it is used to [AUTH](https://redis.io/commands/auth) to Redis immediately after the connection is +established. + +#### `USERNAME` + +If set, then that username is used with the password to log in as that user in +the [Redis 6 ACL](https://redis.io/docs/manual/security/acl/). It is invalid to set a username without setting a +password. It is invalid to set a username with Redis 5 or lower. + +The following YAML snippet is an example of configuring Redis authentication in the Ambassador deployment's environment variables. + +```yaml +env: +- name: REDIS_USERNAME: + value: "default" +- name: REDIS_PASSWORD: + valueFrom: + secretKeyRef: + key: password + name: ambassador-redis-password +``` + + + This example demonstrates getting the redis password from a secret called ambassador-redis-password instead + of providing the value directly. + + +**Rate Limit Preview** + +Configure authentication to a redis pool using the preview rate limiting +implementation + +#### `AUTH` + +Required for authentication with Rate Limit Preview. You must also configure `REDIS_USERNAME` +and `REDIS_PASSWORD` for the rest of Ambassador's Redis usage. + +If you configure `REDIS_AUTH`, then `REDIS_USERNAME` cannot be changed from the value `default`, and +`REDIS_PASSWORD` should contain the same value as `REDIS_AUTH`. + +`REDIS_USERNAME` and `REDIS_PASSWORD` handle all Redis authentication that is separate from Rate Limit Preview so +failing to set them when using `REDIS_AUTH` will result in Ambassador not being able to authenticate with Redis for +all of its other functionality. + +Adding `AUTH` to the example above for rate limit preview would look like the following snippet. + +```yaml +env: +- name: REDIS_USERNAME: + value: "default" +- name: REDIS_PASSWORD: + valueFrom: + secretKeyRef: + key: password + name: ambassador-redis-password +- name: REDIS_AUTH + valueFrom: + secretKeyRef: + key: password + name: ambassador-redis-password +``` + + + Setting AUTH without USERNAME and PASSWORD can result in various problems since AUTH does not + overwrite the basic Redis authentication behavior for systems outside of rate limit preview. + + +## Redis performance tuning (tune) + +#### `POOL_SIZE` + +The number of connections to keep around when idle. + +The total number of connections may go lower than this if there are errors. + +The total number of connections may go higher than this during a load surge. + +#### `PING_INTERVAL` + +The rate at which Ambassador will ping the idle connections in the normal pool +(not extra connections created for a load surge). + +Ambassador will `PING` one of them every `PING_INTERVAL÷POOL_SIZE` so +that each connection will on average be `PING`ed every `PING_INTERVAL`. + +#### `TIMEOUT` + +Sets 4 different timeouts: + +1. `(*net.Dialer).Timeout` for establishing connections +2. `(*redis.Client).ReadTimeout` for reading a single complete response +3. `(*redis.Client).WriteTimeout` for writing a single complete request +4. The timeout when waiting for a connection to become available from the + pool (not including the dial time, which is timed out separately) + +A value of "0" means "no timeout". + +#### `SURGE_LIMIT_INTERVAL` + +During a load surge, if the pool is depleted, then Ambassador may create new +connections to Redis in order to fulfill demand, at a maximum rate of one new +connection per `SURGE_LIMIT_INTERVAL`. + +A value of "0" (the default) means "allow new connections to be created as +fast as necessary. 
+

The total number of connections that Ambassador can surge to is unbounded.

#### `SURGE_LIMIT_AFTER`

The number of connections that can be created _after_ the normal pool is
depleted before `SURGE_LIMIT_INTERVAL` kicks in.

The first `POOL_SIZE+SURGE_LIMIT_AFTER` connections are allowed to
be created as fast as necessary.

This setting has no effect if `SURGE_LIMIT_INTERVAL` is 0.

#### `SURGE_POOL_SIZE`

Normally during a surge, excess connections beyond `POOL_SIZE` are
closed immediately after they are done being used, instead of being returned
to a pool.

`SURGE_POOL_SIZE` configures a "reserve" pool for excess connections
created during a surge.

Excess connections beyond `POOL_SIZE+SURGE_POOL_SIZE` will still
be closed immediately after use.

#### `SURGE_POOL_DRAIN_INTERVAL`

How quickly to drain connections from the surge pool after a surge is over.

Connections are closed at a rate of one connection per
`SURGE_POOL_DRAIN_INTERVAL`.

This setting has no effect if `SURGE_POOL_SIZE` is 0.

## Redis type

Redis currently supports three different deployment methods. Ambassador Edge
Stack can now support using a Redis deployed in any of these ways for rate
limiting when `AES_RATELIMIT_PREVIEW=true`.

#### `TYPE`

- `SINGLE`: Talk to a single instance of Redis, or a Redis proxy.

  Requires `REDIS_URL` or `REDIS_PERSECOND_URL` to be either a
  single hostname:port pair or a unix domain socket reference.

- `SENTINEL`: Talk to a Redis deployment with sentinel instances (see
  https://redis.io/topics/sentinel).

  Requires `REDIS_URL` or `REDIS_PERSECOND_URL` to be a comma
  separated list with the first string as the master name of the sentinel
  cluster, followed by hostname:port pairs. The list size should be >= 2.
  The first item is the name of the master and the rest are the sentinels.

- `CLUSTER`: Talk to a Redis in cluster mode (see
  https://redis.io/topics/cluster-spec).

  Requires `REDIS_URL` or `REDIS_PERSECOND_URL` to be either a
  single hostname:port pair of the read/write endpoint, or a comma separated
  list of hostname:port pairs with all the nodes in the cluster.

  `PIPELINE_WINDOW` must be set when `TYPE: CLUSTER`.

#### `PIPELINE_WINDOW`

The duration after which internal pipelines will be flushed.

If the window is zero, then implicit pipelining will be disabled.

> `150us` is recommended when using implicit pipelining in production.

#### `PIPELINE_LIMIT`

The maximum number of commands that can be pipelined before flushing.

If the limit is zero, then no limit will be used and pipelines will only be limited
by the specified time window.

diff --git a/docs/edge-stack/latest/topics/running/ambassador.md b/docs/edge-stack/latest/topics/running/ambassador.md
new file mode 100644
index 000000000..fb8a5fa4e
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/ambassador.md
@@ -0,0 +1,619 @@
import Alert from '@material-ui/lab/Alert';

# The **Ambassador** **Module** Resource
<div class="docs-article-toc">
<h3>Contents</h3>

* [Envoy](#envoy)
* [General](#general)
* [gRPC](#grpc)
* [Header behavior](#header-behavior)
* [Misc](#miscellaneous)
* [Observability](#observability)
* [Protocols](#protocols)
* [Security](#security)
* [Service health / timeouts](#service-health--timeouts)
* [Traffic management](#traffic-management)

</div>
+

If present, the `ambassador` `Module` defines system-wide configuration for $productName$. **You may very well not need this resource.** To use the `ambassador` `Module` to configure $productName$, it MUST be named `ambassador`, otherwise it will be ignored. To create multiple `ambassador` `Module`s in the same Kubernetes namespace, you will need to apply them as annotations with separate `ambassador_id`s: you will not be able to use multiple CRDs.

The defaults in the `ambassador` `Module` are:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  # Use ambassador_id only if you are using multiple instances of $productName$ in the same cluster.
  # See below for more information.
  ambassador_id: [ "" ]
  config:
    # Use the items below for config fields
```

Many configuration fields can be set on the `ambassador` `Module`. They are listed below with examples, grouped by category.

## Envoy

##### Content-Length headers

* `allow_chunked_length: true` tells Envoy to allow requests or responses with both `Content-Length` and `Transfer-Encoding` headers set.

By default, messages with both `Content-Length` and `Transfer-Encoding` are rejected. If `allow_chunked_length` is `true`, $productName$ will remove the `Content-Length` header and process the message. See the [Envoy documentation for more details](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto.html?highlight=allow_chunked_length#config-core-v3-http1protocoloptions).

##### Envoy access logs

* `envoy_log_path` defines the path of Envoy's access log. By default this is standard output.
* `envoy_log_type` defines the type of access log Envoy will use. Currently, only `json` or `text` are supported.
* `envoy_log_format` defines the Envoy access log line format.

These logs can be formatted using [Envoy operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators) to display specific information about an incoming request. The example below will show only the protocol and duration of a request:

```yaml
envoy_log_path: /dev/fd/1
envoy_log_type: json
envoy_log_format:
  {
    "protocol": "%PROTOCOL%",
    "duration": "%DURATION%"
  }
```

See the Envoy documentation for the [standard log format](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#default-format-string) and a [complete list of log format operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/access_log).

##### Envoy validation timeout

* `envoy_validation_timeout` defines the timeout, in seconds, for validating a new Envoy configuration.

The default is 10; a value of 0 disables Envoy configuration validation. Most installations will not need to use this setting.

For example:

```yaml
envoy_validation_timeout: 30
```

would allow 30 seconds to validate the generated Envoy configuration.

##### Error response overrides

* `error_response_overrides` permits changing the status code and body text for 4XX and 5XX response codes.

By default, $productName$ will pass through error responses without modification, and errors generated locally will use Envoy's default response body, if any.

See [using error response overrides](../custom-error-responses) for usage details.
For example, this configuration:

```yaml
error_response_overrides:
  - on_status_code: 404
    body:
      text_format: "File not found"
```

would explicitly modify the body of 404s to say "File not found".

##### Forwarding client cert details

Two attributes allow providing information about the client's TLS certificate to upstream services:

* `forward_client_cert_details: true` will tell Envoy to add the `X-Forwarded-Client-Cert` header to upstream
  requests.
* `set_current_client_cert_details` will tell Envoy what information to include in the
  `X-Forwarded-Client-Cert` header.

$productName$ will not forward information about a certificate that it cannot validate.

See the Envoy documentation on [X-Forwarded-Client-Cert](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers.html?highlight=xfcc#x-forwarded-client-cert) and [SetCurrentClientCertDetails](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto.html#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-setcurrentclientcertdetails) for more information.

```yaml
forward_client_cert_details: true
set_current_client_cert_details: SANITIZE
```

##### Server name

* `server_name` allows overriding the server name that Envoy sends with responses to clients.

By default, Envoy uses a server name of `envoy`.

##### Suppress Envoy headers

* `suppress_envoy_headers: true` will prevent $productName$ from emitting certain additional
  headers to HTTP requests and responses.

For the exact set of headers covered by this config, see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/router_filter#config-http-filters-router-headers-set).

---
## General

##### Ambassador ID

* `ambassador_id` allows using multiple instances of $productName$ in the same cluster.

We recommend _not_ setting `ambassador_id` if you are running only one instance of $productName$ in your cluster. For more information, see the [Running and Deployment documentation](../running/#ambassador_id).

If used, the `ambassador_id` value must be an array, for example:

```yaml
ambassador_id: [ "test_environment" ]
```

##### Defaults

* `defaults` provides a dictionary of default values that will be applied to various $productName$ resources.

See [Using `ambassador` `Module` Defaults](../../using/defaults) for more information.

---

## gRPC

##### Bridges

* `enable_grpc_http11_bridge: true` will enable the gRPC-HTTP/1.1 bridge.
* `enable_grpc_web: true` will enable the gRPC-Web bridge.

gRPC is a binary HTTP/2-based protocol. While this allows high performance, it can be problematic for clients that are unable to speak HTTP/2 (such as JavaScript in many browsers, or legacy clients in difficult-to-update environments).

The gRPC-HTTP/1.1 bridge can translate HTTP/1.1 calls with `Content-Type: application/grpc` into gRPC calls: $productName$ will perform buffering and translation as necessary. For more details on the translation process, see the [Envoy gRPC HTTP/1.1 bridge documentation](https://www.envoyproxy.io/docs/envoy/v1.11.2/configuration/http_filters/grpc_http1_bridge_filter.html).

Likewise, gRPC-Web is a JSON and HTTP-based protocol that allows browser-based clients to take advantage of gRPC protocols.
The gRPC-Web specification requires a server-side proxy to translate between gRPC-Web requests and gRPC backend services, and $productName$ can fill this role when the gRPC-Web bridge is enabled. For more details on the translation process, see the [Envoy gRPC HTTP/1.1 bridge documentation](https://www.envoyproxy.io/docs/envoy/v1.11.2/configuration/http_filters/grpc_http1_bridge_filter.html); for more details on gRPC-Web itself, see the [gRPC-Web client GitHub repo](https://github.com/grpc/grpc-web).

##### Statistics

* `grpc_stats` allows enabling telemetry for gRPC calls using Envoy's [gRPC Statistics Filter](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_stats_filter).

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    grpc_stats:
      upstream_stats: true
      services:
        - name: <package>.<service>
          method_names: [<method>]
```

Supported parameters:
* `all_methods`
* `services`
* `upstream_stats`

Available metrics:
* `envoy_cluster_grpc_<service>_<status>`
* `envoy_cluster_grpc_<service>_request_message_count`
* `envoy_cluster_grpc_<service>_response_message_count`
* `envoy_cluster_grpc_<service>_success`
* `envoy_cluster_grpc_<service>_total`
* `envoy_cluster_grpc_upstream_<stats>` - **only when `upstream_stats: true`**

Please note that `<service>` and `<method>` will only be present if `all_methods` is set or the service and the method are present under `services`. If `all_methods` is false or the method is not on the list, the available metrics will be in the format `envoy_cluster_grpc_<stats>`.

* `all_methods`: If set to true, emit stats for all service/method names.
If set to false, emit stats for all service/message types to the same stats without including the service/method in the name.
**This option is only safe if all clients are trusted. If this option is enabled with untrusted clients, the clients could cause unbounded growth in the number
of stats in Envoy, using unbounded memory and potentially slowing down stats pipelines.**

* `services`: If set, specifies an allow list of service/methods that will have individual stats emitted for them. Any call that does not match the allow list will be counted in a stat with no method specifier (generic metric).

  If both all_methods and services are present, all_methods will be ignored.

* `upstream_stats`: If true, the filter will gather a histogram for the request time of the upstream.

---

## Header behavior

##### Header case

* `proper_case: true` forces headers to have their "proper" case as shown in RFC7230.
* `header_case_overrides` allows forcing certain headers to have specific casing.

`proper_case` and `header_case_overrides` are mutually exclusive.

RFC7230 specifies that HTTP header names are case-insensitive, but always shows and refers to headers as starting with a capital letter, continuing in lowercase, then repeating the single capital letter after each non-alpha character. This has become an established convention when working with HTTP:

- `Host`, not `host` or `HOST`
- `Content-Type`, not `content-type`, `Content-type`, or `cOnTeNt-TyPe`

Internally, Envoy typically uses [all lowercase](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/header_casing) for header names. This is fully compliant with RFC7230, but some services and clients may require headers to follow the stricter casing rules implied by RFC7230 section headers: in that situation, setting `proper_case: true` will tell Envoy to force all headers to use the casing above.
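As a concrete illustration, here is a minimal sketch of a `Module` that turns this on, following the default `ambassador` `Module` layout shown earlier on this page:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    # Force "proper" RFC7230-style casing (e.g. Content-Type) on all header names.
    proper_case: true
```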
Alternately, it is also possible - although less common - for services or clients to require some other specific casing for specific headers. `header_case_overrides` specifies an array of header names: if a case-insensitive match for a header is found in the list, the matching header will be replaced with the one in the list. For example, the following configuration will force headers that match `X-MY-Header` and `X-EXPERIMENTAL` to use that exact casing, regardless of the original case used in flight:

```yaml
header_case_overrides:
- X-MY-Header
- X-EXPERIMENTAL
```

If the upstream service responds with `x-my-header: 1`, $productName$ will return `X-MY-Header: 1` to the client. Similarly, if the client includes `x-ExperiMENTAL: yes` in its request, the request to the upstream service will include `X-EXPERIMENTAL: yes`. Other headers will not be altered; $productName$ will use its default lowercase header casing.

Please see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto.html#config-core-v3-http1protocoloptions-headerkeyformat) for more information. Note that in general, we recommend updating clients and services rather than relying on `header_case_overrides`.

##### Linkerd interoperability

* `add_linkerd_headers: true` will force $productName$ to include the `l5d-dst-override` header for Linkerd.

When using older Linkerd installations, requests going to an upstream service may need to include the `l5d-dst-override` header to ensure that Linkerd will route them correctly. Setting `add_linkerd_headers` does this automatically. See the [Mapping](../../using/mappings#linkerd-interoperability-add_linkerd_headers) documentation for more details.

##### Max request headers size

* `max_request_headers_kb` sets the maximum allowed request header size in kilobytes. If not set, the default is 60 KB.

See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto.html) for more information.

##### Preserve external request ID

* `preserve_external_request_id: true` will preserve any `X-Request-Id` header presented by the client. The default is `false`, in which case Envoy will always generate a new `X-Request-Id` value.

##### Strip matching host port

* `strip_matching_host_port: true` will tell $productName$ to strip any port number from the host/authority header before processing and routing the request if that port number matches the port number of Envoy's listener. The default is `false`, which will preserve any port number.

In the default installation of $productName$, the public port is 443, which then maps internally to 8443, so this only works in custom installations where the public Service port and Envoy listener port match.

A common reason to try using this property is if you are using gRPC with TLS and your client library appends the port to the Host header (e.g. `myurl.com:443`). We have an alternative solution in our [gRPC guide](../../../../../emissary/pre-release/howtos/grpc#working-with-host-headers-that-include-the-port) that uses a [Lua script](#lua-scripts) to remove all ports from every Host header for that use case.

---

## Miscellaneous

##### Envoy's admin port

* `admin_port` specifies the port where $productName$'s Envoy will listen for low-level admin requests. The default is 8001; it should almost never need changing.
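Should you need to move it anyway, a sketch of the override might look like the following; note that the port value here is purely illustrative:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    # Illustrative only: relocate Envoy's admin interface from its default of 8001.
    admin_port: 8002
```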
+

##### Lua scripts

* `lua_scripts` allows defining a custom Lua script to run on every request.

This is useful for simple use cases that mutate requests or responses, for example to add a custom header:

```yaml
lua_scripts: |
  function envoy_on_response(response_handle)
    response_handle:headers():add("Lua-Scripts-Enabled", "Processed")
  end
```

For more details on the Lua API, see the [Envoy Lua filter documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/lua_filter.html).

Some caveats around the embedded scripts:

* They run in-process, so any bugs in your Lua script can break every request.
* They're run on every request/response to every URL.
* They're inlined in the $productName$ YAML; as such, we do not recommend using Lua scripts for long, complex logic.

If you need more flexible and configurable options, $AESproductName$ supports a [pluggable Filter system](../../using/filters/).

##### Merge slashes

* `merge_slashes: true` will cause $productName$ to merge adjacent slashes in incoming paths when doing route matching and request filtering: for example, a request for `//foo///bar` would be matched to a `Mapping` with prefix `/foo/bar`.

##### Modify Default Buffer Size

By default, the Envoy that ships with $productName$ uses a default soft limit of 1 MiB for an upstream service's read and write buffer limits. This setting allows you to configure that buffer limit. See the [Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/cluster.proto.html?highlight=per_connection_buffer_limit_bytes) for more information.

```yaml
buffer_limit_bytes: 5242880 # Sets the default buffer limit to 5 MiB
```

##### Use $productName$ namespace for service resolution

* `use_ambassador_namespace_for_service_resolution: true` tells $productName$ to assume that unqualified services are in the same namespace as $productName$.

By default, when $productName$ sees a service name without a namespace, it assumes that the namespace is the same as the resource referring to the service. For example, for this `Mapping`:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-1
  namespace: foo
spec:
  hostname: "*"
  prefix: /
  service: upstream
```

$productName$ would look for a Service named `upstream` in namespace `foo`.

However, if `use_ambassador_namespace_for_service_resolution` is `true`, this `Mapping` would look for a Service named `upstream` in the namespace in which $productName$ is installed instead.

---

## Observability

##### Diagnostics

* `diagnostics` controls access to the diagnostics interface.

By default, $productName$ creates a `Mapping` that allows access to the diagnostic interface at `/ambassador/v0/diag` from anywhere in the cluster. To disable the `Mapping` entirely, set `diagnostics.enabled` to `false`:

```yaml
diagnostics:
  enabled: false
```

With diagnostics disabled, `/ambassador/v0/diag` will respond with 404; however, the service itself is still running, and `/ambassador/v0/diag/` is reachable from inside the $productName$ Pod at `https://localhost:8877`.
You can use Kubernetes port forwarding to set up remote access to the diagnostics page temporarily:

```
kubectl port-forward -n ambassador deploy/ambassador 8877
```

Alternately, to leave the `Mapping` intact but restrict access to only the local Pod, set `diagnostics.allow_non_local` to `false`:

```yaml
diagnostics:
  allow_non_local: false
```

See [Protecting Access to the Diagnostics Interface](../../../howtos/protecting-diag-access) for more information.

---
## Protocols

##### Enable IPv4 and IPv6

* `enable_ipv4` determines whether IPv4 DNS lookups are enabled. The default is `true`.
* `enable_ipv6` determines whether IPv6 DNS lookups are enabled. The default is `false`.

If both IPv4 and IPv6 are enabled, $productName$ will prefer IPv6. This can have strange effects if $productName$ receives `AAAA` records from a DNS lookup, but the underlying network of the pod doesn't actually support IPv6 traffic. For this reason, the default is IPv4 only.

A [`Mapping`](../../using/mappings) can override both `enable_ipv4` and `enable_ipv6`, but if either is not stated explicitly in a `Mapping`, the values here are used. Most $productName$ installations will probably be able to avoid overriding these settings in Mappings.

##### HTTP/1.0 support

* `enable_http10: true` will enable handling incoming HTTP/1.0 and HTTP/0.9 requests. The default is `false`.

---
## Security

##### Cross origin resource sharing (CORS)

* `cors` sets the default CORS configuration for all mappings in the cluster. See the [CORS syntax](../../using/cors).

For example:

```yaml
cors:
  origins: http://foo.example,http://bar.example
  methods: POST, GET, OPTIONS
  ...
```

##### IP allow and deny

* `ip_allow` and `ip_deny` define HTTP source IP address ranges to allow or deny.

Only one of `ip_allow` and `ip_deny` may be specified; the default is to allow all traffic.

If `ip_allow` is specified, any traffic not matching a range to allow will be denied. If `ip_deny` is specified, any traffic not matching a range to deny will be allowed.

Both take a list of IP address ranges with a keyword specifying how to interpret the address, for example:

```yaml
ip_allow:
- peer: 127.0.0.1
- remote: 99.99.0.0/16
```

The keyword `peer` specifies that the match should happen using the IP address of the other end of the network connection carrying the request: `X-Forwarded-For` and the `PROXY` protocol are both ignored. Here, our example specifies that connections originating from the $productName$ pod itself should always be allowed.

The keyword `remote` specifies that the match should happen using the IP address of the HTTP client, taking into account `X-Forwarded-For` and the `PROXY` protocol if they are allowed (if they are not allowed, or not present, the peer address will be used instead). This permits matches to behave correctly when, for example, $productName$ is behind a layer 7 load balancer. Here, our example specifies that HTTP clients from the IP address range `99.99.0.0` - `99.99.255.255` will be allowed.

You may specify as many ranges for each kind of keyword as desired.

##### Rejecting Client Requests With Escaped Slashes

* `reject_requests_with_escaped_slashes: true` will tell $productName$ to reject requests containing escaped slashes.
When set to `true`, $productName$ will reject any client requests that contain escaped slashes (`%2F`, `%2f`, `%5C`, or `%5c`) in their URI path by returning HTTP 400. By default, $productName$ will forward these requests unmodified.

 - **Envoy and $productName$ behavior**

   Internally, Envoy treats escaped and unescaped slashes distinctly for matching purposes. This means that an $productName$ `Mapping`
   for path `/httpbin/status` will not be matched by a request for `/httpbin%2fstatus`.

   On the other hand, when using $productName$, escaped slashes will be treated like unescaped slashes when applying FilterPolicies. For example, a request to `/httpbin%2fstatus/200` will be matched against a FilterPolicy for `/httpbin/status/*`.

 - **Security Concern Example**

   With $productName$, this can become a security concern when combined with `bypass_auth` in the following scenario:

   - Use a `Mapping` for path `/prefix` with `bypass_auth` set to true. The intention here is to apply no FilterPolicies under this prefix, by default.

   - Use a `Mapping` for path `/prefix/secure/` without setting `bypass_auth` to true. The intention here is to selectively apply a FilterPolicy to this longer prefix.

   - Have an upstream service that receives both `/prefix` and `/prefix/secure/` traffic (from the Mappings above), but the upstream service treats escaped and unescaped slashes equivalently.

   In this scenario, when a client makes a request to `/prefix%2fsecure/secret.txt`, the underlying Envoy configuration will _not_ match the routing rule for `/prefix/secure/`, but will instead
   match the routing rule for `/prefix`, which has `bypass_auth` set to true. $productName$ FilterPolicies will _not_ be enforced in this case, and the upstream service will receive
   a request that it treats equivalently to `/prefix/secure/secret.txt`, potentially leaking information that was assumed protected by an $productName$ FilterPolicy.

   One way to avoid this particular scenario is to avoid using `bypass_auth` and instead use a FilterPolicy that applies no filters when no authorization behavior is desired.

   The other way to avoid this scenario is to reject client requests with escaped slashes altogether to eliminate this class of path confusion security concerns. This is recommended when there is no known, legitimate reason to accept client requests that contain escaped slashes. This is especially true if it is not known whether upstream services will treat escaped and unescaped slashes equivalently.

   This document is not intended to provide an exhaustive set of scenarios where path confusion can lead to security concerns. As part of good security practice, it is recommended to audit end-to-end request flow and the behavior of each component's escaped path handling to determine the best configuration for your use case.

 - **Summary**

   Envoy treats escaped and unescaped slashes _distinctly_ for matching purposes. Matching is the underlying behavior used by $productName$ Mappings.

   $productName$ treats escaped and unescaped slashes _equivalently_ when selecting FilterPolicies. FilterPolicies are applied by $productName$ after Envoy has performed route matching.

   Finally, whether upstream services treat escaped and unescaped slashes equivalently is entirely dependent on the upstream service, and therefore dependent on your use case. Configuration intended to implement security policies will require audit with respect to escaped slashes.
By setting `reject_requests_with_escaped_slashes`, this class of security concern can largely be eliminated.

##### Trust downstream client IP

* `use_remote_address: false` tells $productName$ that it cannot trust the remote address of incoming connections, and must instead rely exclusively on the `X-Forwarded-For` header.

When `true` (the default), $productName$ will append its own IP address to the `X-Forwarded-For` header so that upstream services of $productName$ can get the full set of IP addresses that have propagated a request. You may also need to set `externalTrafficPolicy: Local` on your `LoadBalancer` to propagate the original source IP address.

See the [Envoy documentation on the `X-Forwarded-For` header](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers) and the [Kubernetes documentation on preserving the client source IP](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) for more details.

##### `X-Forwarded-For` trusted hops

* `xff_num_trusted_hops` sets how many L7 proxies ahead of $productName$ should be trusted.

    This value is not dynamically configurable in Envoy. A restart is required when changing the value of xff_num_trusted_hops for Envoy to respect the change.

The value of `xff_num_trusted_hops` indicates the number of trusted proxies in front of $productName$. The default setting is 0, which tells Envoy to use the immediate downstream connection's IP address as the trusted client address. The trusted client address is used to populate the `remote_address` field used for rate limiting and can affect which IP address Envoy will set as `X-Envoy-External-Address`.

`xff_num_trusted_hops` behavior is determined by the value of `use_remote_address` (which is true by default).

* If `use_remote_address` is false and `xff_num_trusted_hops` is set to a value N that is greater than zero, the trusted client address is the (N+1)th address from the right end of XFF. (If the XFF contains fewer than N+1 addresses, Envoy falls back to using the immediate downstream connection's source address as a trusted client address.)

* If `use_remote_address` is true and `xff_num_trusted_hops` is set to a value N that is greater than zero, the trusted client address is the Nth address from the right end of XFF. (If the XFF contains fewer than N addresses, Envoy falls back to using the immediate downstream connection's source address as a trusted client address.)

Refer to [Envoy's documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers.html#x-forwarded-for) for some detailed examples of this interaction.

---

## Service health / timeouts

##### Incoming connection idle timeout

* `listener_idle_timeout_ms` sets the idle timeout for incoming connections.

If set, this specifies the length of time (in milliseconds) that an incoming connection is allowed to be idle before being dropped. This can be useful if you have proxies and/or firewalls in front of $productName$ and need to control how $productName$ initiates closing an idle TCP connection.

If not set, the default is no timeout, meaning that incoming connections may remain idle forever.

Please see the [Envoy documentation on HTTP protocol options](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto#config-core-v3-httpprotocoloptions) for more information.
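As an illustration, a sketch of a `Module` that drops incoming connections after eight minutes of idleness (the 480000 ms value is an assumed example for this sketch, not a recommendation) could look like this:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    # Example value: close incoming connections that stay idle for 8 minutes (480000 ms).
    listener_idle_timeout_ms: 480000
```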
+ +##### Keepalive + +* `keepalive` sets the global TCP keepalive settings. + +$productName$ will use these settings for all `Mapping`s unless overridden in a `Mapping`'s configuration. Without `keepalive`, $productName$ follows the operating system defaults. + +For example, the following configuration: + +```yaml +keepalive: + time: 2 + interval: 2 + probes: 100 +``` + +would enable keepalives every two seconds (`interval`), starting after two seconds of idleness (`time`), with the connection being dropped if 100 keepalives are sent with no response (`probes`). For more information, see the [Envoy keepalive documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto.html#config-core-v3-tcpkeepalive). + +##### Upstream idle timeout + +* `cluster_idle_timeout_ms` sets the default idle timeout for upstream connections (by default, one hour). + +If set, this specifies the timeout (in milliseconds) after which an idle connection upstream is closed. The idle timeout can be completely disabled by setting `cluster_idle_timeout_ms: 0`, which risks idle upstream connections never getting closed. + +If not set, the default idle timeout is one hour. + +You can override this setting with [`idle_timeout_ms` on a `Mapping`](../../using/timeouts/). + +##### Upstream max lifetime + +* `cluster_max_connection_lifetime_ms` sets the default maximum lifetime of an upstream connection. + +If set, this specifies the maximum amount of time (in milliseconds) after which an upstream connection is drained and closed, regardless of whether it is idle or not. Connection recreation incurs additional overhead when processing requests. The overhead tends to be nominal for plaintext (HTTP) connections within the same cluster, but may be more significant for secure HTTPS connections or upstreams with high latency. For this reason, it is generally recommended to set this value to at least 10000 ms to minimize the amortized cost of connection recreation while providing a reasonable bound for connection lifetime. + +If not set (or set to zero), then upstream connections may remain open for arbitrarily long. + +You can override this setting with [`cluster_max_connection_lifetime_ms` on a `Mapping`](../../using/timeouts/). + +##### Request timeout + +* `cluster_request_timeout_ms` sets the default end-to-end timeout for a single request. + +If set, this specifies the default end-to-end timeout for every request. + +If not set, the default is three seconds. + +You can override this setting with [`timeout_ms` on a `Mapping`](../../using/timeouts/). + +##### Readiness and liveness probes + +* `readiness_probe` sets whether `/ambassador/v0/check_ready` is automatically mapped +* `liveness_probe` sets whether `/ambassador/v0/check_alive` is automatically mapped + +By default, $productName$ creates `Mapping`s that support readiness and liveness checks at `/ambassador/v0/check_ready` and `/ambassador/v0/check_alive`. To disable the readiness `Mapping` entirely, set `readiness_probe.enabled` to `false`: + + +```yaml +readiness_probe: + enabled: false +``` + +Likewise, to disable the liveness `Mapping` entirely, set `liveness_probe.enabled` to `false`: + + +```yaml +liveness_probe: + enabled: false +``` + +A disabled probe endpoint will respond with 404; however, the service is still running, and will be accessible on localhost port 8877 from inside the $productName$ Pod. + +You can change these to route requests to some other service. 
For example, to have the readiness probe map to the `quote` application's health check:

```yaml
readiness_probe:
  enabled: true
  service: quote
  rewrite: /backend/health
```

The liveness and readiness probes both support `prefix` and `rewrite`, with the same meanings as for [Mappings](../../using/mappings).

##### Retry policy

This lets you add resilience to your services in case of request failures by performing automatic retries.

```yaml
retry_policy:
  retry_on: "5xx"
```

---

## Traffic management

##### Circuit breaking

* `circuit_breakers` sets the global circuit breaking configuration defaults.

You can override the circuit breaker settings for individual `Mapping`s. By default, $productName$ does not configure any circuit breakers. For more information, see the [circuit breaking reference](../../using/circuit-breakers).

##### Default label domain and labels

* `default_labels` sets default domains and labels to apply to every request.

For more on how to use the default labels, see the [Rate Limit reference](../../using/rate-limits/#attaching-labels-to-requests).

##### Default load balancer

* `load_balancer` sets the default load balancing type and policy.

For example, to set the default load balancer to `least_request`:

```yaml
load_balancer:
  policy: least_request
```

If not set, the default is to use round-robin load balancing. For more information, see the [load balancer reference](../load-balancer).

diff --git a/docs/edge-stack/latest/topics/running/debugging.md b/docs/edge-stack/latest/topics/running/debugging.md
new file mode 100644
index 000000000..8f62c7239
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/debugging.md
@@ -0,0 +1,203 @@
# Debugging

If you’re experiencing issues with $productName$ and cannot diagnose the issue through the `/ambassador/v0/diag/` diagnostics endpoint, this document covers various approaches and advanced use cases for debugging $productName$ issues.

The following sections assume that you already have a running $productName$ installation.

## A Note on TLS

[TLS] can appear intractable if you haven't set up [certificates] correctly. If you're
having trouble with TLS, always [check the logs] of your $productName$ Pods and look for
certificate errors.

[tls]: ../tls
[certificates]: ../tls#certificates-and-secrets
[check the logs]: #review-logs

## Check $productName$ status

1. First, check the $productName$ Deployment with the following: `kubectl get -n $productNamespace$ deployments`

   After a brief period, the terminal will print something similar to the following:

   ```
   $ kubectl get -n $productNamespace$ deployments
   NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   $productDeploymentName$ 3         3         3            3           1m
   $productDeploymentName$-apiext 3         3         3            3           1m
   ```

2. Check that the “desired” number of Pods matches the “current” and “available” number of Pods.

3. If they are **not** equal, check the status of the associated Pods with the following command: `kubectl get pods -n $productNamespace$`.
   The terminal should print something similar to the following:

   ```
   $ kubectl get pods -n $productNamespace$
   NAME                       READY     STATUS    RESTARTS   AGE
   $productDeploymentName$-85c4cf67b-4pfj2   1/1       Running   0          1m
   $productDeploymentName$-85c4cf67b-fqp9g   1/1       Running   0          1m
   $productDeploymentName$-85c4cf67b-vg6p5   1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-j34pf   1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-9gfpq   1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-p5wgx   1/1       Running   0          1m
   ```

   The actual names of the Pods will vary. All the Pods should indicate `Running`, and all should show 1/1 containers ready.

4. If the Pods do not seem reasonable, use the following command for details about the history of the Deployment: `kubectl describe -n $productNamespace$ deployment $productDeploymentName$`

   - Look for data in the “Replicas” field near the top of the output. For example:
     `Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable`

   - Look for data in the “Events” log field near the bottom of the output, which often displays data such as a failed image pull, RBAC issues, or a lack of cluster resources. For example:

     ```
     Events:
       Type    Reason             Age   From                    Message
       ----    ------             ----  ----                    -------
       Normal  ScalingReplicaSet  2m    deployment-controller   Scaled up replica set $productDeploymentName$-85c4cf67b to 3
     ```

5. Additionally, use the following command to “describe” the individual Pods: `kubectl describe pods -n $productNamespace$ <$productDeploymentName$-pod-name>`

   - Look for data in the “Status” field near the top of the output. For example, `Status: Running`

   - Look for data in the “Events” field near the bottom of the output, as it will often show issues such as image pull failures, volume mount issues, and container crash loops. For example:
     ```
     Events:
       Type    Reason                 Age   From                                                     Message
       ----    ------                 ----  ----                                                     -------
       Normal  Scheduled              4m    default-scheduler                                        Successfully assigned $productDeploymentName$-85c4cf67b-4pfj2 to gke-ambassador-demo-default-pool-912378e5-dkxc
       Normal  SuccessfulMountVolume  4m    kubelet, gke-ambassador-demo-default-pool-912378e5-dkxc  MountVolume.SetUp succeeded for volume "$productDeploymentName$-token-tmk94"
       Normal  Pulling                4m    kubelet, gke-ambassador-demo-default-pool-912378e5-dkxc  pulling image "docker.io/datawire/ambassador:0.40.0"
       Normal  Pulled                 4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc  Successfully pulled image "docker.io/datawire/ambassador:0.40.0"
       Normal  Created                4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc  Created container
       Normal  Started                4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc  Started container
     ```

In both the Deployment Pod and the individual Pods, take the necessary action to address any discovered issues.

## Review $productName$ logs
+

$productName$ logging can provide information on anything that might be abnormal or malfunctioning. While there may be a large amount of data to sort through, look for key errors such as the $productName$ process restarting unexpectedly, or a malformed Envoy configuration.

$productName$ has two major log mechanisms: $productName$ logging and Envoy logging. Both appear in the normal `kubectl logs` output, and both can have additional debug-level logging enabled.

  Enabling debug-level logging can produce a lot of log output — enough to
  potentially impact the performance of $productName$. We don't recommend running with debug
  logging enabled as a matter of course; it's usually better to enable it only when needed,
  then reset logging to normal once you're finished debugging.

### $productName$ debug logging

Much of $productName$'s logging is concerned with the business of noticing changes to
Kubernetes resources that specify the $productName$ configuration, and generating new
Envoy configuration in response to those changes. $productName$ also logs information
about its built-in authentication, rate limiting, developer portal, ACME, etc. There
are multiple environment variables controlling debug logging; which is required depends
on which aspect of the system you want to debug:

- Set `AES_LOG_LEVEL=debug` to debug the early boot sequence, $productName$'s interactions
  with the Kubernetes cluster (finding changed resources, etc.), and the built-in services
  (auth, rate limiting, etc.).
- Set `AMBASSADOR_DEBUG=diagd` to debug the process of generating an Envoy configuration from
  the input resources.

### $productName$ Envoy logging

Envoy logging is concerned with the actions Envoy is taking for incoming requests.
Typically, Envoy will only output access logs and certain errors, but enabling Envoy
debug logging will show very verbose information about the actions Envoy is actually
taking. It can be useful for understanding why connections are being closed, or whether
an error status is coming from Envoy or from the upstream service.

It is possible to enable Envoy logging at boot, but for the most part, it's safer to
enable it at runtime, right before sending a request that is known to have problems.
To enable Envoy debug logging, use `kubectl exec` to get a shell on the $productName$
pod, then:

   ```
   curl -XPOST http://localhost:8001/logging?level=trace && \
   sleep 10 && \
   curl -XPOST http://localhost:8001/logging?level=warning
   ```

This will turn on Envoy debug logging for ten seconds, then turn it off again.

#### Interpreting Response Codes

Envoy's default access log format includes `%RESPONSE_FLAGS%`, which provides additional information about the response or connection that can help with debugging issues.

For example, if a log line includes `UAEX`, then this indicates that an Edge Stack Filter has denied the request. This can occur because a user was not authenticated or because of an error. Therefore, this can indicate that further investigation of the logs is needed.

See Envoy's documentation for a full list of the supported [%RESPONSE_FLAGS%](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators).

### Viewing logs

To view the logs from $productName$:

1. Use the following command to target an individual $productName$ Pod: `kubectl get pods -n $productNamespace$`

   The terminal will print something similar to the following:

   ```
   $ kubectl get pods -n $productNamespace$
   NAME                       READY     STATUS    RESTARTS   AGE
   $productDeploymentName$-85c4cf67b-4pfj2   1/1       Running   0          3m
   ```

2. Then, run the following: `kubectl logs -n $productNamespace$ <$productDeploymentName$-pod-name>`

The terminal will print something similar to the following:

   ```
   $ kubectl logs -n $productNamespace$ $productDeploymentName$-85c4cf67b-4pfj2
   2018-10-10 12:26:50 kubewatch 0.40.0 INFO: generating config with gencount 1 (0 changes)
   /usr/lib/python3.6/site-packages/pkg_resources/__init__.py:1235: UserWarning: /ambassador is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
     warnings.warn(msg, UserWarning)
   2018-10-10 12:26:51 kubewatch 0.40.0 INFO: Scout reports {"latest_version": "0.40.0", "application": "ambassador", "notices": [], "cached": false, "timestamp": 1539606411.061929}

   2018-10-10 12:26:54 diagd 0.40.0 [P15TMainThread] INFO: thread count 3, listening on 0.0.0.0:8877
   [2018-10-10 12:26:54 +0000] [15] [INFO] Starting gunicorn 19.8.1
   [2018-10-10 12:26:54 +0000] [15] [INFO] Listening at: http://0.0.0.0:8877 (15)
   [2018-10-10 12:26:54 +0000] [15] [INFO] Using worker: threads
   [2018-10-10 12:26:54 +0000] [42] [INFO] Booting worker with pid: 42
   2018-10-10 12:26:54 diagd 0.40.0 [P42TMainThread] INFO: Starting periodic updates
   [2018-10-10 12:27:01.977][21][info][main] source/server/drain_manager_impl.cc:63] shutting down parent after drain
   ```

Note that many deployments will have multiple Pods, and the logs are independent for each Pod.

## Examine Pod and container contents

You can examine the contents of the $productName$ Pod to check for issues: for example, whether volume mounts are correct and TLS certificates are present in the required directory, whether the Pod has the latest $productName$ configuration, or whether the generated Envoy configuration is correct. In these instructions, we will look for problems related to the Envoy configuration.

1. To look into an $productName$ Pod, get a shell on the Pod using `kubectl exec`. For example,

   ```
   kubectl exec -it -n $productNamespace$ <$productDeploymentName$-pod-name> -- bash
   ```

2. Determine the latest configuration. If you haven't overridden the configuration directory, the latest configuration will be in `/ambassador/snapshots`. If you have overridden it, $productName$ saves configurations in `$AMBASSADOR_CONFIG_BASE_DIR/snapshots`.

   In the snapshots directory:

   - `snapshot.yaml` contains the full input configuration that $productName$ has found;
   - `aconf.json` contains the $productName$ configuration extracted from the snapshot;
   - `ir.json` contains the IR constructed from the $productName$ configuration; and
   - `econf.json` contains the Envoy configuration generated from the IR.

   In the snapshots directory, the current configuration will be stored in files with no digit suffix, and older configurations have increasing numbers. For example, `ir.json` is current, `ir-1.json` is the next oldest, then `ir-2.json`, etc.

3. If something is wrong with `snapshot` or `aconf`, there is an issue with your configuration.
If something is wrong with `ir` or `econf`, you should [open an issue on GitHub](https://github.com/emissary-ingress/emissary/issues/new/choose).

4. The actual input provided to Envoy is split into `$AMBASSADOR_CONFIG_BASE_DIR/bootstrap-ads.json` and `$AMBASSADOR_CONFIG_BASE_DIR/envoy/envoy.json`.

   - The `bootstrap-ads.json` file contains details about Envoy statistics, logging, authentication, etc.
   - The `envoy.json` file contains information about request routing.
   - You may generally find it simplest to just look at the `econf.json` files in the `snapshot`
     directory, which include both kinds of configuration.

diff --git a/docs/edge-stack/latest/topics/running/environment.md b/docs/edge-stack/latest/topics/running/environment.md
new file mode 100644
index 000000000..03c607c68
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/environment.md
@@ -0,0 +1,746 @@
# $productName$ Environment variables

Use the following variables for the environment of your $productName$ container:

| Variable | Default value | Value type |
|----------|---------------|------------|
| [`AMBASSADOR_ID`](#ambassador_id) | `[ "default" ]` | List of strings |
| [`AES_LOG_LEVEL`](#aes_log_level) | `warn` | Log level |
| [`AGENT_CONFIG_RESOURCE_NAME`](#agent_config_resource_name) | `ambassador-agent-cloud-token` | String |
| [`AMBASSADOR_AMBEX_NO_RATELIMIT`](#ambassador_ambex_no_ratelimit) | `false` | Boolean: `true`=true, any other value=false |
| [`AMBASSADOR_AMBEX_SNAPSHOT_COUNT`](#ambassador_ambex_snapshot_count) | `30` | Integer |
| [`AMBASSADOR_CLUSTER_ID`](#ambassador_cluster_id) | Empty | String |
| [`AMBASSADOR_CONFIG_BASE_DIR`](#ambassador_config_base_dir) | `/ambassador` | String |
| [`AMBASSADOR_DISABLE_FEATURES`](#ambassador_disable_features) | Empty | Any |
| [`AMBASSADOR_DRAIN_TIME`](#ambassador_drain_time) | `600` | Integer |
| [`AMBASSADOR_ENVOY_API_VERSION`](#ambassador_envoy_api_version) | `V3` | String Enum; `V3` or `V2` |
| [`AMBASSADOR_GRPC_METRICS_SINK`](#ambassador_grpc_metrics_sink) | Empty | String (address:port) |
| [`AMBASSADOR_HEALTHCHECK_BIND_ADDRESS`](#ambassador_healthcheck_bind_address) | `0.0.0.0` | String |
| [`AMBASSADOR_HEALTHCHECK_BIND_PORT`](#ambassador_healthcheck_bind_port) | `8877` | Integer |
| [`AMBASSADOR_HEALTHCHECK_IP_FAMILY`](#ambassador_healthcheck_ip_family) | `ANY` | String Enum; `IPV4_ONLY` or `IPV6_ONLY` |
| [`AMBASSADOR_ISTIO_SECRET_DIR`](#ambassador_istio_secret_dir) | `/etc/istio-certs` | String |
| [`AMBASSADOR_JSON_LOGGING`](#ambassador_json_logging) | `false` | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_READY_PORT`](#ambassador_ready_port) | `8006` | Integer |
| [`AMBASSADOR_READY_LOG`](#ambassador_ready_log) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`AMBASSADOR_LABEL_SELECTOR`](#ambassador_label_selector) | Empty | String (label=value) |
| [`AMBASSADOR_NAMESPACE`](#ambassador_namespace) | `default` ([^1]) | Kubernetes namespace |
| [`AMBASSADOR_RECONFIG_MAX_DELAY`](#ambassador_reconfig_max_delay) | `1` | Integer |
| [`AMBASSADOR_SINGLE_NAMESPACE`](#ambassador_single_namespace) | Empty | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_SNAPSHOT_COUNT`](#ambassador_snapshot_count) | `4` | Integer |
| [`AMBASSADOR_VERIFY_SSL_FALSE`](#ambassador_verify_ssl_false) | `false` | Boolean; `true`=true, any other value=false |
| [`DD_ENTITY_ID`](#dd_entity_id) | Empty | String |
| [`DOGSTATSD`](#dogstatsd) | `false` | Boolean; Python `value.lower() == "true"` |
| [`SCOUT_DISABLE`](#scout_disable) | `false` | Boolean; `false`=false, any other value=true |
| [`STATSD_ENABLED`](#statsd_enabled) | `false` | Boolean; Python `value.lower() == "true"` |
| [`STATSD_PORT`](#statsd_port) | `8125` | Integer |
| [`STATSD_HOST`](#statsd_host) | `statsd-sink` | String |
| [`STATSD_FLUSH_INTERVAL`](#statsd_flush_interval) | `1` | Integer |
| [`_AMBASSADOR_ID`](#_ambassador_id) | Empty | String |
| [`_AMBASSADOR_TLS_SECRET_NAME`](#_ambassador_tls_secret_name) | Empty | String |
| [`_AMBASSADOR_TLS_SECRET_NAMESPACE`](#_ambassador_tls_secret_namespace) | Empty | String |
| [`_CONSUL_HOST`](#_consul_host) | Empty | String |
| [`_CONSUL_PORT`](#_consul_port) | Empty | Integer |
| [`AMBASSADOR_DISABLE_SNAPSHOT_SERVER`](#ambassador_disable_snapshot_server) | `false` | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_ENVOY_BASE_ID`](#ambassador_envoy_base_id) | `0` | Integer |
| [`AES_RATELIMIT_PREVIEW`](#aes_ratelimit_preview) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`AES_AUTH_TIMEOUT`](#aes_auth_timeout) | `4s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_SOCKET_TYPE`](#redis_socket_type) | `tcp` | Go network such as `tcp` or `unix`; see [Go `net.Dial`][] |
| [`REDIS_URL`](#redis_url) | None, must be set explicitly | Go network address; for TCP this is a `host:port` pair; see [Go `net.Dial`][] |
| [`REDIS_TLS_ENABLED`](#redis_tls_enabled) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`REDIS_TLS_INSECURE`](#redis_tls_insecure) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`REDIS_USERNAME`](#redis_username) | Empty | Plain string |
| [`REDIS_PASSWORD`](#redis_password) | Empty | Plain string |
| [`REDIS_AUTH`](#redis_auth) | Empty | Requires AES_RATELIMIT_PREVIEW; Plain string |
| [`REDIS_POOL_SIZE`](#redis_pool_size) | `10` | Integer |
| [`REDIS_PING_INTERVAL`](#redis_ping_interval) | `10s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_TIMEOUT`](#redis_timeout) | `0s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_SURGE_LIMIT_INTERVAL`](#redis_surge_limit_interval) | `0s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_SURGE_LIMIT_AFTER`](#redis_surge_limit_after) | The value of `REDIS_POOL_SIZE` | Integer |
| [`REDIS_SURGE_POOL_SIZE`](#redis_surge_pool_size) | `0` | Integer |
| [`REDIS_SURGE_POOL_DRAIN_INTERVAL`](#redis_surge_pool_drain_interval) | `1m` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PIPELINE_WINDOW`](#redis_pipeline_window) | `0` | Requires AES_RATELIMIT_PREVIEW; Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PIPELINE_LIMIT`](#redis_pipeline_limit) | `0` | Requires AES_RATELIMIT_PREVIEW; Integer; [Go `strconv.ParseInt`][] |
| [`REDIS_TYPE`](#redis_type) | `SINGLE` | Requires AES_RATELIMIT_PREVIEW; String; SINGLE, SENTINEL, or CLUSTER |
| [`REDIS_PERSECOND`](#redis_persecond) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`REDIS_PERSECOND_SOCKET_TYPE`](#redis_persecond_socket_type) | None, must be set explicitly (if `REDIS_PERSECOND`) | Go network such as `tcp` or `unix`; see [Go `net.Dial`][] |
| [`REDIS_PERSECOND_URL`](#redis_persecond_url) | None, must be set explicitly (if `REDIS_PERSECOND`) | Go network address; for TCP this is a `host:port` pair; see [Go `net.Dial`][] |
| [`REDIS_PERSECOND_TLS_ENABLED`](#redis_persecond_tls_enabled) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`REDIS_PERSECOND_TLS_INSECURE`](#redis_persecond_tls_insecure) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`REDIS_PERSECOND_USERNAME`](#redis_persecond_username) | Empty | Plain string |
| [`REDIS_PERSECOND_PASSWORD`](#redis_persecond_password) | Empty | Plain string |
| [`REDIS_PERSECOND_AUTH`](#redis_persecond_auth) | Empty | Requires AES_RATELIMIT_PREVIEW; Plain string |
| [`REDIS_PERSECOND_POOL_SIZE`](#redis_persecond_pool_size) | `10` | Integer |
| [`REDIS_PERSECOND_PING_INTERVAL`](#redis_persecond_ping_interval) | `10s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PERSECOND_TIMEOUT`](#redis_persecond_timeout) | `0s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PERSECOND_SURGE_LIMIT_INTERVAL`](#redis_persecond_surge_limit_interval) | `0s` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PERSECOND_SURGE_LIMIT_AFTER`](#redis_persecond_surge_limit_after) | The value of `REDIS_PERSECOND_POOL_SIZE` | Integer |
| [`REDIS_PERSECOND_SURGE_POOL_SIZE`](#redis_persecond_surge_pool_size) | `0` | Integer |
| [`REDIS_PERSECOND_SURGE_POOL_DRAIN_INTERVAL`](#redis_persecond_surge_pool_drain_interval) | `1m` | Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PERSECOND_TYPE`](#redis_persecond_type) | `SINGLE` | Requires AES_RATELIMIT_PREVIEW; String; SINGLE, SENTINEL, or CLUSTER |
| [`REDIS_PERSECOND_PIPELINE_WINDOW`](#redis_persecond_pipeline_window) | `0` | Requires AES_RATELIMIT_PREVIEW; Duration; [Go `time.ParseDuration`][] |
| [`REDIS_PERSECOND_PIPELINE_LIMIT`](#redis_persecond_pipeline_limit) | `0` | Requires AES_RATELIMIT_PREVIEW; Integer |
| [`EXPIRATION_JITTER_MAX_SECONDS`](#expiration_jitter_max_seconds) | `300` | Integer |
| [`USE_STATSD`](#use_statsd) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`STATSD_HOST`](#statsd_host) | `localhost` | Hostname |
| [`STATSD_PORT`](#statsd_port) | `8125` | Integer |
| [`GOSTATS_FLUSH_INTERVAL_SECONDS`](#gostats_flush_interval_seconds) | `5` | Integer |
| [`LOCAL_CACHE_SIZE_IN_BYTES`](#local_cache_size_in_bytes) | `0` | Requires AES_RATELIMIT_PREVIEW; Integer |
| [`NEAR_LIMIT_RATIO`](#near_limit_ratio) | `0.8` | Requires AES_RATELIMIT_PREVIEW; Float; [Go `strconv.ParseFloat`][] |
| `AMBASSADOR_URL` | `https://api.example.com` | URL |
| [`DEVPORTAL_CONTENT_URL`](#devportal_content_url) | `https://github.com/datawire/devportal-content` | git-remote URL |
| [`DEVPORTAL_CONTENT_DIR`](#devportal_content_dir) | `/` | Rooted Git directory |
| [`DEVPORTAL_CONTENT_BRANCH`](#devportal_content_branch) | `master` | Git branch name |
| [`DEVPORTAL_DOCS_BASE_PATH`](#devportal_docs_base_path) | `/doc/` | Rooted path |
| [`POLL_EVERY_SECS`](#poll_every_secs) | `60` | Integer |
| [`AES_ACME_LEADER_DISABLE`](#aes_acme_leader_disable) | `false` | Boolean; [Go `strconv.ParseBool`][] |
| [`AES_REPORT_DIAGNOSTICS_TO_CLOUD`](#aes_report_diagnostics_to_cloud) | `true` | Boolean; [Go `strconv.ParseBool`][] |
| [`AES_SNAPSHOT_URL`](#aes_snapshot_url) | `http://emissary-ingress-admin.default:8005/snapshot-external` | URL |
| [`ENV_AES_SECRET_NAME`](#env_aes_secret_name) | `ambassador-edge-stack` | String |
| [`ENV_AES_SECRET_NAMESPACE`](#env_aes_secret_namespace) | $productName$'s Namespace | String |

## Feature Flag Environment Variables

| Variable | Default value | Value type |
|----------|---------------|------------|
| [`AMBASSADOR_EDS_BYPASS`](#ambassador_eds_bypass) | `false` | Boolean; Python `value.lower() == "true"` |
| [`AMBASSADOR_FORCE_SECRET_VALIDATION`](#ambassador_force_secret_validation) | `false` | Boolean: `true`=true, any other value=false |
| [`AMBASSADOR_KNATIVE_SUPPORT`](#ambassador_knative_support) | `false` | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_UPDATE_MAPPING_STATUS`](#ambassador_update_mapping_status) | `false` | Boolean; `true`=true, any other value=false |
| [`ENVOY_CONCURRENCY`](#envoy_concurrency) | Empty | Integer |
| [`DISABLE_STRICT_LABEL_SELECTORS`](#disable_strict_label_selectors) | `false` | Boolean: `true`=true, any other value=false |

### `AMBASSADOR_ID`

$productName$ supports running multiple installs in the same cluster without restricting a given instance of $productName$ to a single namespace.
The resources that are visible to an installation can be limited with the `AMBASSADOR_ID` environment variable.

[More information](../../running/running#ambassador_id)

### `AES_LOG_LEVEL`

Adjust the log level by setting the `AES_LOG_LEVEL` environment variable; from least verbose to most verbose, the valid values are `error`, `warn`/`warning`, `info`, `debug`, and `trace`. The default is `info`.
Log level names are case-insensitive.

[More information](../../running/running#log-levels-and-debugging)

### `AGENT_CONFIG_RESOURCE_NAME`

Allows overriding the default ConfigMap/Secret that is used for extracting the CloudToken for connecting with Ambassador Cloud. It allows all
components (and not only the Ambassador Agent) to authenticate requests to Ambassador Cloud.
If unset, it falls back to searching for a ConfigMap or Secret with the name `ambassador-agent-cloud-token`. Note: the Secret will take precedence if both a Secret and ConfigMap are set.

### `AMBASSADOR_AMBEX_NO_RATELIMIT`

Completely disables rate limiting of Envoy reconfiguration under memory pressure. This can help performance with the endpoint or Consul resolvers, but could make OOMkills more likely with large configurations.
The default is `false`, meaning that the rate limiter is active.

[More information](../../../topics/concepts/rate-limiting-at-the-edge/)

### `AMBASSADOR_AMBEX_SNAPSHOT_COUNT`

Envoy-configuration snapshots get saved (as `ambex-#.json`) in `/ambassador/snapshots`. The number of snapshots is controlled by the `AMBASSADOR_AMBEX_SNAPSHOT_COUNT` environment variable.
Set it to 0 to disable.

[More information](../../running/debugging#examine-pod-and-container-contents)

### `AMBASSADOR_CLUSTER_ID`

Each $productName$ installation generates a unique cluster ID based on the UID of its Kubernetes namespace and its $productName$ ID: the resulting cluster ID is a UUID that cannot be used
to reveal the namespace name or the $productName$ ID itself. $productName$ needs RBAC permission to get namespaces for this purpose, as shown in the default YAML files provided by Datawire;
if not granted this permission it will generate a UUID based only on the $productName$ ID. To disable cluster ID generation entirely, set the environment variable
`AMBASSADOR_CLUSTER_ID` to a UUID that will be used for the cluster ID.

[More information](../../running/running#ambassador-edge-stack-update-checks-scout)

### `AMBASSADOR_CONFIG_BASE_DIR`

Controls where $productName$ will store snapshots.
By default, the latest configuration will be in `/ambassador/snapshots`. If you have overridden it, $productName$ saves configurations in `$AMBASSADOR_CONFIG_BASE_DIR/snapshots`.
+
+[More information](../../running/debugging#examine-pod-and-container-contents)
+
+### `AMBASSADOR_DISABLE_FEATURES`
+
+To completely disable feature reporting, set the environment variable `AMBASSADOR_DISABLE_FEATURES` to any non-empty value.
+
+[More information](../../running/running/#ambassador-edge-stack-update-checks-scout)
+
+### `AMBASSADOR_DRAIN_TIME`
+
+At each reconfiguration, $productName$ keeps around the old version of its Envoy config for the duration of the configured drain time.
+The `AMBASSADOR_DRAIN_TIME` variable controls how much of a grace period $productName$ provides active clients when reconfiguration happens.
+Its unit is seconds and it defaults to 600 (10 minutes). This can impact memory usage because $productName$ needs to keep around old versions of its configuration
+for the duration of the drain time.
+
+[More information](../../running/scaling#ambassador_drain_time)
+
+### `AMBASSADOR_ENVOY_API_VERSION`
+
+By default, $productName$ will configure Envoy using the [V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api).
+In $productName$ 2.0, you were able to switch back to Envoy V2 by setting the `AMBASSADOR_ENVOY_API_VERSION` environment variable to "V2".
+$productName$ 3.0 has removed support for the V2 API and only the V3 API is used. While this variable cannot be set to another value in 3.0, it may
+be used when introducing new API versions that are not yet available in $productName$, such as V4.
+
+### `AMBASSADOR_GRPC_METRICS_SINK`
+
+Configures $productName$ (Envoy) to send metrics to the Agent, which are then relayed to the Cloud. If not set, Envoy is not configured to send metrics to the Agent. If set to an invalid `address:port`, an error message is logged. In either scenario, metrics simply are not sent to the Agent, which has no negative effect on general routing or $productName$ uptime.
+
+### `AMBASSADOR_HEALTHCHECK_BIND_ADDRESS`
+
+Configures $productName$ to bind its health check server to the provided address. If not set, $productName$ will bind to all addresses (`0.0.0.0`).
+
+### `AMBASSADOR_HEALTHCHECK_BIND_PORT`
+
+Configures $productName$ to bind its health check server to the provided port. If not set, $productName$ will listen on the admin port (`8877`).
+
+### `AMBASSADOR_HEALTHCHECK_IP_FAMILY`
+
+Allows the IP family used by the health check server to be overridden. By default, the health check server will listen for both IPv4 and IPv6 addresses. In some clusters you may want to force `IPV4_ONLY` or `IPV6_ONLY`.
+
+### `AMBASSADOR_ISTIO_SECRET_DIR`
+
+$productName$ will read the mTLS certificates from `/etc/istio-certs` (unless configured to use a different directory with the `AMBASSADOR_ISTIO_SECRET_DIR`
+environment variable) and create a secret named `istio-certs` from them.
+
+[More information](../../../howtos/istio#configure-an-mtls-tlscontext)
+
+### `AMBASSADOR_JSON_LOGGING`
+
+When `AMBASSADOR_JSON_LOGGING` is set to `true`, JSON format will be used for most of the control plane logs.
+Some (but few) logs from `gunicorn` and the Kubernetes `client-go` package will still be in text-only format.
+
+[More information](../../running/running#log-format)
+
+### `AMBASSADOR_READY_PORT`
+
+A dedicated Listener is created for non-blocking readiness checks. By default, the Listener will listen on the loopback address
+and port `8006`.
`8006` is part of the reserved ports dedicated to $productName$. If there is a conflict, setting
+`AMBASSADOR_READY_PORT` to a valid port will configure Envoy to listen on that port.
+
+### `AMBASSADOR_READY_LOG`
+
+When `AMBASSADOR_READY_LOG` is set to `true`, the Envoy `/ready` endpoint will be logged. It will honor the format
+provided in the `Module` resource or default to the standard log line format.
+
+### `AMBASSADOR_LABEL_SELECTOR`
+
+Restricts $productName$'s configuration to only the labelled resources. For example, you could apply a `version-two: true` label
+to all resources that should be visible to $productName$, then set `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment.
+Resources without the specified label will be ignored.
+
+### `AMBASSADOR_NAMESPACE`
+
+Controls namespace configuration for $productName$.
+
+[More information](../../running/running#namespaces)
+
+### `AMBASSADOR_RECONFIG_MAX_DELAY`
+
+Controls how long Ambassador will wait to receive changes before doing an Envoy reconfiguration. The unit is
+seconds and the value must be > 0.
+
+### `AMBASSADOR_SINGLE_NAMESPACE`
+
+When set, configures $productName$ to only work within a single namespace.
+
+[More information](../../running/running#namespaces)
+
+### `AMBASSADOR_SNAPSHOT_COUNT`
+
+The number of snapshots that $productName$ should save.
+
+### `AMBASSADOR_VERIFY_SSL_FALSE`
+
+By default, $productName$ will verify the TLS certificates provided by the Kubernetes API. In some situations, the cluster may be
+deployed with self-signed certificates. In this case, set `AMBASSADOR_VERIFY_SSL_FALSE` to `true` to disable verifying the TLS certificates.
+
+[More information](../../running/running#ambassador_verify_ssl_false)
+
+### `DD_ENTITY_ID`
+
+$productName$ supports setting the `dd.internal.entity_id` statistics tag using the `DD_ENTITY_ID` environment variable. If this value
+is set, statistics will be tagged with the value of the environment variable. Otherwise, this statistics tag will be omitted (the default).
+
+[More information](../../running/statistics/envoy-statsd#using-datadog-dogstatsd-as-the-statsd-sink)
+
+### `DOGSTATSD`
+
+If you are a user of the [Datadog](https://docs.datadoghq.com/) monitoring system, pulling in the Envoy statistics from $productName$ is very easy.
+Because the DogStatsD protocol is slightly different from the normal StatsD protocol, in addition to setting $productName$'s
+`STATSD_ENABLED=true` environment variable, you also need to set the `DOGSTATSD=true` environment variable.
+
+[More information](../../running/statistics/envoy-statsd#using-datadog-dogstatsd-as-the-statsd-sink)
+
+### `SCOUT_DISABLE`
+
+$productName$ integrates Scout, a service that periodically checks with Datawire servers to advise of available updates. Scout also sends anonymized usage
+data and the $productName$ version. This information is important to us as we prioritize test coverage, bug fixes, and feature development. Note that $productName$ will
+run regardless of the status of Scout.
+
+We do not recommend you disable Scout, since we use this mechanism to notify users of new releases (including critical fixes and security issues). This check can be disabled by setting
+the environment variable `SCOUT_DISABLE` to `1` in your $productName$ deployment.
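+
+As a minimal sketch (the container name below is an assumption, not prescriptive), this and the other variables on this page are set in the `env` block of the $productName$ container in its Deployment:
+
+```yaml
+# Illustrative fragment of the $productName$ Deployment spec; only the env entry matters here.
+containers:
+  - name: aes
+    env:
+      - name: SCOUT_DISABLE
+        value: "1"
+```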
+
+[More information](../../running/running#ambassador-edge-stack-update-checks-scout)
+
+### `STATSD_ENABLED`
+
+If enabled, $productName$ has Envoy expose metrics information via the ubiquitous and well-tested [StatsD](https://github.com/etsy/statsd)
+protocol. To enable this, set the environment variable `STATSD_ENABLED=true` in $productName$'s deployment YAML.
+
+[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)
+
+### `STATSD_HOST`
+
+By default, $productName$ sends statistics to a Kubernetes service named `statsd-sink` on UDP port 8125 (the usual
+port of the StatsD protocol). You may instead tell $productName$ to send the statistics to a different StatsD server by setting the
+`STATSD_HOST` environment variable. This can be useful if you have an existing StatsD sink available in your cluster.
+
+[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)
+
+### `STATSD_PORT`
+
+Allows for configuring StatsD on a port other than the default (8125).
+
+[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)
+
+### `STATSD_FLUSH_INTERVAL`
+
+How often, in seconds, to submit StatsD reports (if `STATSD_ENABLED` is set).
+
+[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)
+
+### `_AMBASSADOR_ID`
+
+Used with the Ambassador Consul connector. Sets the Ambassador ID so multiple instances of this integration can run per-cluster when there are multiple $productNamePlural$ (required if `AMBASSADOR_ID` is set in your $productName$ `Deployment`).
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `_AMBASSADOR_TLS_SECRET_NAME`
+
+Used with the Ambassador Consul connector. Sets the name of the Kubernetes `v1.Secret` created by this program that contains the Consul-generated TLS certificate.
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `_AMBASSADOR_TLS_SECRET_NAMESPACE`
+
+Used with the Ambassador Consul connector. Sets the namespace of the Kubernetes `v1.Secret` created by this program.
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `_CONSUL_HOST`
+
+Used with the Ambassador Consul connector. Sets the IP or DNS name of the target Consul HTTP API server.
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `_CONSUL_PORT`
+
+Used with the Ambassador Consul connector. Sets the port number of the target Consul HTTP API server.
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `AMBASSADOR_DISABLE_SNAPSHOT_SERVER`
+
+Disables the built-in snapshot server.
+
+### `AMBASSADOR_ENVOY_BASE_ID`
+
+Base ID of the Envoy process.
+
+### `AES_RATELIMIT_PREVIEW`
+
+Enables support for Redis clustering, local caching, and an upgraded Redis client with improved scalability in
+preview mode.
+
+[More information](../aes-redis/#aes_ratelimit_preview)
+
+### `AES_AUTH_TIMEOUT`
+
+Configures the default timeout in the authentication extension.
+
+[More information](../aes-extensions/authentication/#timeout-variables)
+
+### `REDIS_SOCKET_TYPE`
+
+Redis currently supports three different deployment methods. $productName$ can use a Redis deployed in any of these ways for rate
+limiting when `AES_RATELIMIT_PREVIEW=true`.
+
+[More information](../aes-redis#socket_type)
+
+### `REDIS_URL`
+
+The URL to dial to talk to Redis.
+
+This will be either a hostname:port pair or a comma-separated list of
+hostname:port pairs, depending on the [`REDIS_TYPE`](#redis_type) you are using.
+
+[More information](../aes-redis#url)
+
+### `REDIS_TLS_ENABLED`
+
+Specifies whether to use TLS when talking to Redis.
+
+[More information](../aes-redis#tls_enabled)
+
+### `REDIS_TLS_INSECURE`
+
+Specifies whether to skip certificate verification when using TLS to talk to Redis.
+
+[More information](../aes-redis#tls_insecure)
+
+### `REDIS_USERNAME`
+
+`REDIS_USERNAME` and `REDIS_PASSWORD` handle all Redis authentication that is separate from Rate Limit Preview, so failing to set them when using `REDIS_AUTH` will result in Ambassador not being able to authenticate with Redis for all of its other functionality.
+
+[More information](../aes-redis#username)
+
+### `REDIS_PASSWORD`
+
+`REDIS_USERNAME` and `REDIS_PASSWORD` handle all Redis authentication that is separate from Rate Limit Preview, so failing to set them when using `REDIS_AUTH` will result in Ambassador not being able to authenticate with Redis for all of its other functionality.
+
+[More information](../aes-redis#password)
+
+### `REDIS_AUTH`
+
+If you configure `REDIS_AUTH`, then `REDIS_USERNAME` cannot be changed from the value `default`, and
+`REDIS_PASSWORD` should contain the same value as `REDIS_AUTH`.
+
+[More information](../aes-redis#auth)
+
+### `REDIS_POOL_SIZE`
+
+The number of connections to keep around when idle. The total number of connections may go lower than this if there are errors.
+The total number of connections may go higher than this during a load surge.
+
+[More information](../aes-redis#pool_size)
+
+### `REDIS_PING_INTERVAL`
+
+The rate at which Ambassador will ping the idle connections in the normal pool
+(not extra connections created for a load surge).
+
+[More information](../aes-redis#ping_interval)
+
+### `REDIS_TIMEOUT`
+
+Sets 4 different timeouts:
+
+ 1. `(*net.Dialer).Timeout` for establishing connections
+ 2. `(*redis.Client).ReadTimeout` for reading a single complete response
+ 3. `(*redis.Client).WriteTimeout` for writing a single complete request
+ 4. The timeout when waiting for a connection to become available from the pool (not including the dial time, which is timed out separately)
+
+A value of "0" means "no timeout".
+
+[More information](../aes-redis#timeout)
+
+### `REDIS_SURGE_LIMIT_INTERVAL`
+
+During a load surge, if the pool is depleted, then Ambassador may create new
+connections to Redis in order to fulfill demand, at a maximum rate of one new
+connection per `REDIS_SURGE_LIMIT_INTERVAL`.
+
+[More information](../aes-redis#surge_limit_interval)
+
+### `REDIS_SURGE_LIMIT_AFTER`
+
+The number of connections that can be created _after_ the normal pool is
+depleted before `REDIS_SURGE_LIMIT_INTERVAL` kicks in.
+
+[More information](../aes-redis#surge_limit_after)
+
+### `REDIS_SURGE_POOL_SIZE`
+
+Normally during a surge, excess connections beyond `REDIS_POOL_SIZE` are
+closed immediately after they are done being used, instead of being returned
+to a pool.
+
+`REDIS_SURGE_POOL_SIZE` configures a "reserve" pool for excess connections
+created during a surge.
+
+[More information](../aes-redis#surge_pool_size)
+
+### `REDIS_SURGE_POOL_DRAIN_INTERVAL`
+
+How quickly to drain connections from the surge pool after a surge is over.
+
+[More information](../aes-redis#surge_pool_drain_interval)
+
+### `REDIS_PIPELINE_WINDOW`
+
+The duration after which internal pipelines will be flushed.
+
+[More information](../aes-redis#pipeline_window)
+
+### `REDIS_PIPELINE_LIMIT`
+
+The maximum number of commands that can be pipelined before flushing.
+
+[More information](../aes-redis#pipeline_limit)
+
+### `REDIS_TYPE`
+
+Redis currently supports three different deployment methods. $productName$ can use a Redis deployed in any of these ways for rate
+limiting when `AES_RATELIMIT_PREVIEW=true`.
+
+[More information](../aes-redis#type)
+
+### `REDIS_PERSECOND`
+
+If true, a second Redis connection pool is created (to a potentially different Redis instance) that is only used for per-second
+RateLimits; this second connection pool is configured by the `REDIS_PERSECOND_*` variables rather than the usual `REDIS_*` variables.
+
+[More information](../aes-redis#redis_persecond)
+
+### `REDIS_PERSECOND_SOCKET_TYPE`
+
+Configures the [REDIS_SOCKET_TYPE](#redis_socket_type) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#socket_type)
+
+### `REDIS_PERSECOND_URL`
+
+Configures the [REDIS_URL](#redis_url) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#url)
+
+### `REDIS_PERSECOND_TLS_ENABLED`
+
+Configures [REDIS_TLS_ENABLED](#redis_tls_enabled) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#tls_enabled)
+
+### `REDIS_PERSECOND_TLS_INSECURE`
+
+Configures [REDIS_TLS_INSECURE](#redis_tls_insecure) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#tls_insecure)
+
+### `REDIS_PERSECOND_USERNAME`
+
+Configures the [REDIS_USERNAME](#redis_username) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#username)
+
+### `REDIS_PERSECOND_PASSWORD`
+
+Configures the [REDIS_PASSWORD](#redis_password) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#password)
+
+### `REDIS_PERSECOND_AUTH`
+
+Configures [REDIS_AUTH](#redis_auth) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#auth)
+
+### `REDIS_PERSECOND_POOL_SIZE`
+
+Configures the [REDIS_POOL_SIZE](#redis_pool_size) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#pool_size)
+
+### `REDIS_PERSECOND_PING_INTERVAL`
+
+Configures the [REDIS_PING_INTERVAL](#redis_ping_interval) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#ping_interval)
+
+### `REDIS_PERSECOND_TIMEOUT`
+
+Configures the [REDIS_TIMEOUT](#redis_timeout) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#timeout)
+
+### `REDIS_PERSECOND_SURGE_LIMIT_INTERVAL`
+
+Configures the [REDIS_SURGE_LIMIT_INTERVAL](#redis_surge_limit_interval) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#surge_limit_interval)
+
+### `REDIS_PERSECOND_SURGE_LIMIT_AFTER`
+
+Configures [REDIS_SURGE_LIMIT_AFTER](#redis_surge_limit_after) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#surge_limit_after)
+
+### `REDIS_PERSECOND_SURGE_POOL_SIZE`
+
+Configures the [REDIS_SURGE_POOL_SIZE](#redis_surge_pool_size) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#surge_pool_size)
+
+### `REDIS_PERSECOND_SURGE_POOL_DRAIN_INTERVAL`
+
+Configures the [REDIS_SURGE_POOL_DRAIN_INTERVAL](#redis_surge_pool_drain_interval) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#surge_pool_drain_interval)
+
+### `REDIS_PERSECOND_TYPE`
+
+Configures the [REDIS_TYPE](#redis_type) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#type)
+
+### `REDIS_PERSECOND_PIPELINE_WINDOW`
+
+Configures the [REDIS_PIPELINE_WINDOW](#redis_pipeline_window) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#pipeline_window)
+
+### `REDIS_PERSECOND_PIPELINE_LIMIT`
+
+Configures the [REDIS_PIPELINE_LIMIT](#redis_pipeline_limit) for the second [REDIS_PERSECOND](#redis_persecond) connection pool.
+
+[More information](../aes-redis#pipeline_limit)
+
+### `EXPIRATION_JITTER_MAX_SECONDS`
+
+### `USE_STATSD`
+
+The `RateLimitService` reports to StatsD, and attempts to do so by default (`USE_STATSD`, `STATSD_HOST`, `STATSD_PORT`, `GOSTATS_FLUSH_INTERVAL_SECONDS`).
+
+### `GOSTATS_FLUSH_INTERVAL_SECONDS`
+
+Configures the flush interval in seconds for the Go statistics.
+
+### `LOCAL_CACHE_SIZE_IN_BYTES`
+
+Only available if `AES_RATELIMIT_PREVIEW: "true"`. The AES rate limit extension can optionally cache over-the-limit keys so it does
+not need to read the Redis cache again for requests with labels that are already over the limit.
+
+Setting `LOCAL_CACHE_SIZE_IN_BYTES` to a non-zero value will enable local caching.
+
+[More information](../aes-extensions/ratelimit#local_cache_size_in_bytes)
+
+### `NEAR_LIMIT_RATIO`
+
+Only available if `AES_RATELIMIT_PREVIEW: "true"`. Adjusts the ratio used by the `near_limit` statistic for tracking requests that
+are "near the limit". Defaults to `0.8` (80%) of the limit defined in the `RateLimit` rule.
+
+[More information](../aes-extensions/ratelimit#near_limit_ratio)
+
+### `DEVPORTAL_CONTENT_URL`
+
+Default URL to the repository hosting the content for the Portal.
+
+[More information](../../using/dev-portal)
+
+### `DEVPORTAL_CONTENT_DIR`
+
+Default subdirectory within the repository at `DEVPORTAL_CONTENT_URL` where the Dev Portal content is located (defaults to `/`).
+
+[More information](../../using/dev-portal)
+
+### `DEVPORTAL_CONTENT_BRANCH`
+
+Default branch of the repository at `DEVPORTAL_CONTENT_URL` to use for the Dev Portal content (defaults to `master`).
+
+[More information](../../using/dev-portal)
+
+### `DEVPORTAL_DOCS_BASE_PATH`
+
+Base path for each API doc (defaults to `/doc/`).
+
+[More information](../../using/dev-portal)
+
+### `POLL_EVERY_SECS`
+
+Interval for polling OpenAPI docs; default 60 seconds. Set to 0 to disable Dev Portal polling.
+
+[More information](../../using/dev-portal)
+
+### `AES_ACME_LEADER_DISABLE`
+
+This prevents $productName$ from trying to manage ACME. When enabled, `Host` resources will be unable to use ACME to manage their TLS secrets,
+regardless of the config on the `Host` resource.
+
+### `AES_REPORT_DIAGNOSTICS_TO_CLOUD`
+
+Setting `AES_REPORT_DIAGNOSTICS_TO_CLOUD` to `false` disables sending diagnostic information about your installation of $productName$
+to Ambassador Cloud. If this reporting is disabled, you will be unable to access cluster diagnostic information in the cloud.
+
+### `AES_SNAPSHOT_URL`
+
+Configures the default endpoint where config snapshots are stored and accessed.
+
+### `ENV_AES_SECRET_NAME`
+
+Use to override the name of the secret that $productName$ attempts to find licensing information in.
+
+### `ENV_AES_SECRET_NAMESPACE`
+
+Use to override the namespace of the secret that $productName$ attempts to find licensing information in.
+By default, $productName$ will look for the secret in the same namespace that $productName$ was installed in.
+
+### `AMBASSADOR_EDS_BYPASS`
+
+Bypasses EDS handling of endpoints and causes endpoints to be inserted into clusters manually. This can help resolve `503 UH`
+errors caused by certificate rotation when there is a delay between EDS and CDS updates.
+
+### `AMBASSADOR_FORCE_SECRET_VALIDATION`
+
+If you set the `AMBASSADOR_FORCE_SECRET_VALIDATION` environment variable, invalid Secrets will be rejected,
+and a `Host` or `TLSContext` resource attempting to use an invalid certificate will be disabled entirely.
+
+[More information](../../running/tls#certificates-and-secrets)
+
+### `AMBASSADOR_KNATIVE_SUPPORT`
+
+Enables support for Knative.
+
+### `AMBASSADOR_UPDATE_MAPPING_STATUS`
+
+If `AMBASSADOR_UPDATE_MAPPING_STATUS` is set to the string `true`, $productName$ will update the `status` of every `Mapping`
+CRD that it accepts for its configuration. This has no effect on the proper functioning of $productName$ itself, and can be a
+performance burden on installations with many `Mapping`s. It has no effect for `Mapping`s stored as annotations.
+
+The default is `false`. We recommend leaving `AMBASSADOR_UPDATE_MAPPING_STATUS` turned off unless required for external systems.
+
+[More information](../../running/running#ambassador_update_mapping_status)
+
+### `ENVOY_CONCURRENCY`
+
+Configures the optional [--concurrency](https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-concurrency) command line option when launching Envoy.
+This controls the number of worker threads used to serve requests and can be used to fine-tune system resource usage.
+
+### `DISABLE_STRICT_LABEL_SELECTORS`
+
+In $productName$ version `3.2`, a bug was fixed in how `Hosts` are associated with `Mappings` and how `Listeners` are associated with `Hosts`. The `mappingSelector`/`selector` fields in `Hosts` and `Listeners` were not
+properly being enforced in prior versions: if any single label from the selector matched, the resources would be associated with each other, instead
+of requiring all labels in the selector to be present. Additionally, if the `hostname` of a `Mapping` matched the `hostname` of a `Host`, they would be associated
+regardless of the configuration of `mappingSelector`/`selector`.
+
+In version `3.2` this bug was fixed, and resources that configure a selector will only be associated if **all** labels required by the selector are present.
+This brings the `mappingSelector` and `selector` fields in line with how label selectors are used throughout Kubernetes. To avoid unexpected behavior after the upgrade,
+add all labels configured in any `mappingSelector`/`selector` to the `Mappings` you want to associate with the `Host`, or to the `Hosts` you want to be associated with the `Listener`. You can opt out of this fix and return to the old
+association behavior by setting the environment variable `DISABLE_STRICT_LABEL_SELECTORS` to `"true"` (default: `"false"`). A future version of
+$productName$ may remove the ability to opt out of this bugfix.
+
+> **Note:** The `mappingSelector` field is only configurable on `v3alpha1` CRDs. In the `v2` CRDs the equivalent field is `selector`.
+Either `selector` or `mappingSelector` may be configured in the `v3alpha1` CRDs, but `selector` has been deprecated in favour of `mappingSelector`.
+
+See the [Host documentation](../../running/host-crd#controlling-association-with-mappings) for more information about `Host` / `Mapping` association.
+
+## Port assignments
+
+$productName$ uses the following ports to listen for HTTP/HTTPS traffic automatically via TCP:
+
+| Port | Process | Function |
+|------|----------|---------------------------------------------------------|
+| 8001 | envoy | Internal stats, logging, etc.; not exposed outside pod |
+| 8002 | watt | Internal watt snapshot access; not exposed outside pod |
+| 8003 | ambex | Internal ambex snapshot access; not exposed outside pod |
+| 8004 | diagd | Internal `diagd` access; not exposed outside pod |
+| 8005 | snapshot | Exposes a scrubbed $productName$ snapshot outside of the pod |
+| 8080 | envoy | Default HTTP service port |
+| 8443 | envoy | Default HTTPS service port |
+| 8877 | diagd | Direct access to diagnostics UI; provided by `busyambassador entrypoint` |
+
+[^1]: This may change in a future release to reflect the Pod's
+      namespace if deployed to a namespace other than `default`.
+      https://github.com/emissary-ingress/emissary/issues/1583
+
+[Go `net.Dial`]: https://golang.org/pkg/net/#Dial
+[Go `strconv.ParseBool`]: https://golang.org/pkg/strconv/#ParseBool
+[Go `strconv.ParseFloat`]: https://golang.org/pkg/strconv/#ParseFloat
+[Go `time.ParseDuration`]: https://pkg.go.dev/time#ParseDuration
+[Redis 6 ACL]: https://redis.io/topics/acl
diff --git a/docs/edge-stack/latest/topics/running/host-crd.md b/docs/edge-stack/latest/topics/running/host-crd.md
new file mode 100644
index 000000000..082a5f9d1
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/host-crd.md
@@ -0,0 +1,329 @@
+import Alert from '@material-ui/lab/Alert';
+
+# The **Host** CRD
+
+The custom `Host` resource defines how $productName$ will be
+visible to the outside world. It collects all the following information in a
+single configuration resource:
+
+- The hostname by which $productName$ will be reachable
+- How $productName$ should handle TLS certificates
+- How $productName$ should handle secure and insecure requests
+- Which `Mappings` should be associated with this `Host`
+
+
+  Remember that Listener resources are required for a functioning
+  $productName$ installation!
+ Learn more about Listener. +
+
+
+
+  Remember that $productName$ does not make sure that a wildcard Host exists! If the
+  wildcard behavior is needed, a Host with a hostname of "*"
+  must be defined by the user.
+
+
+A minimal `Host` resource, using Let’s Encrypt to handle TLS, would be:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: minimal-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    email: julian@example.com
+```
+
+This `Host` tells $productName$ to expect to be reached at `host.example.com`,
+and to manage TLS certificates using Let’s Encrypt, registering as
+`julian@example.com`. Since it doesn’t specify otherwise, requests using
+cleartext will be automatically redirected to use HTTPS, and $productName$ will
+not search for any specific further configuration resources related to this
+`Host`.
+
+Remember that a Listener will also be required for this example to
+be functional. Many examples of setting up `Host` and `Listener` are available in the
+[Configuring $productName$ to Communicate](../../../howtos/configure-communications) document.
+
+## Setting the `hostname`
+
+The `hostname` element tells $productName$ which hostnames to expect. `hostname` is a DNS glob,
+so all of the following are valid:
+
+- `host.example.com`
+- `*.example.com`
+- `host.example.*`
+
+The following are _not_ valid:
+
+- `host.*.com` -- Envoy supports only prefix and suffix globs
+- `*host.example.com` -- the wildcard must be its own element in the DNS name
+
+In all cases, the `hostname` is used to match the `:authority` header for HTTP routing.
+When TLS termination is active, the `hostname` is also used for SNI matching.
+
+## Controlling Association with `Mapping`s
+
+A `Mapping` will not be associated with a `Host` unless at least one of the following is true:
+
+- The `Mapping` specifies a `hostname` attribute that matches the `Host` in question.
+- The `Host` specifies a `mappingSelector` that matches the `Mapping`'s Kubernetes `label`s.
+
+> **Note:** The `mappingSelector` field is only configurable on `v3alpha1` CRDs. In the `v2` CRDs the equivalent field is `selector`.
+Either `selector` or `mappingSelector` may be configured in the `v3alpha1` CRDs, but `selector` has been deprecated in favour of `mappingSelector`.
+
+If neither of the above is true, the `Mapping` will not be associated with the `Host` in
+question. This is intended to help manage memory consumption with large numbers of `Host`s and large
+numbers of `Mapping`s.
+
+If the `Host` specifies `mappingSelector` _and_ the `Mapping` specifies `hostname`, both must match
+for the association to happen.
+
+The `mappingSelector` is a Kubernetes [label selector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#labelselector-v1-meta). For a `Mapping` to be associated with a `Host` that uses `mappingSelector`, **all** labels
+required by the `mappingSelector` must be present on the `Mapping` in order for it to be associated with the `Host`.
+A `Mapping` may have additional labels other than those required by the `mappingSelector` so long as the required labels are present.
+
+**In 2.0, only `matchLabels` is supported**; for example:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: minimal-host
+spec:
+  hostname: host.example.com
+  mappingSelector:
+    matchLabels:
+      examplehost: host
+```
+
+The above `Host` will associate with these `Mapping`s:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-label-match
+  labels:
+    examplehost: host # This matches the Host's mappingSelector.
+spec:
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-hostname-match
+spec:
+  hostname: host.example.com # This is an exact match of the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-hostname-glob-match
+spec:
+  hostname: '*.example.com' # This glob matches the Host's hostname too.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-both-matches
+  labels:
+    examplehost: host # This matches the Host's mappingSelector.
+spec:
+  hostname: '*.example.com' # This glob matches the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+```
+
+It will _not_ associate with any of these:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-wrong-label
+  labels:
+    examplehost: staging # This doesn't match the Host's mappingSelector.
+spec:
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-wrong-hostname
+spec:
+  hostname: 'bad.example.com' # This doesn't match the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-still-wrong
+  labels:
+    examplehost: staging # This doesn't match the Host's mappingSelector,
+spec:                    # and if the Host specifies mappingSelector AND the
+  hostname: host.example.com # Mapping specifies hostname, BOTH must match. So
+  prefix: /httpbin/      # the matching hostname isn't good enough.
+  service: http://httpbin.org
+```
+
+Future versions of $productName$ will support `matchExpressions` as well.
+
+> **Note:** In $productName$ version `3.2`, a bug with how `Hosts` are associated with `Mappings` was fixed. The `mappingSelector` field in `Hosts` was not
+properly being enforced in prior versions. If any single label from the selector was matched, the `Host` would be associated with the `Mapping` instead
+of requiring all labels in the selector to be present. Additionally, if the `hostname` of the `Mapping` matched the `hostname` of the `Host`, they would be associated
+regardless of the configuration of `mappingSelector`.
+In version `3.2` this bug was fixed and a `Host` will only be associated with a `Mapping` if **all** labels required by the selector are present.
+This brings the `mappingSelector` field in line with how label selectors are used throughout Kubernetes. To avoid unexpected behavior after the upgrade,
+add all labels that `Hosts` have in their `mappingSelector` to `Mappings` you want to associate with the `Host`. You can opt out of this fix and return to the old
+`Mapping`/`Host` association behavior by setting the environment variable `DISABLE_STRICT_LABEL_SELECTORS` to `"true"` (default: `"false"`).
A future version of
+$productName$ may remove the ability to opt out of this bugfix.
+
+## Secure and insecure requests
+
+A **secure** request arrives via HTTPS; an **insecure** request does not. By default, secure requests will be routed and insecure requests will be redirected (using an HTTP 301 response) to HTTPS. The behavior of insecure requests can be overridden using the `requestPolicy` element of a `Host`:
+
+```yaml
+requestPolicy:
+  insecure:
+    action: insecure-action
+    additionalPort: insecure-port
+```
+
+The `insecure-action` can be one of:
+
+- `Redirect` (the default): redirect to HTTPS
+- `Route`: go ahead and route as normal; this will allow handling HTTP requests normally
+- `Reject`: reject the request with a 400 response
+
+```yaml
+requestPolicy:
+  insecure:
+    additionalPort: -1 # This is how to disable the default redirection from 8080.
+```
+
+Some special cases to be aware of here:
+
+- **Case matters in the actions:** you must use e.g. `Reject`, not `reject`.
+- The `X-Forwarded-Proto` header is honored when determining whether a request is secure or insecure. For more information, see "Load Balancers, the `Host` Resource, and `X-Forwarded-Proto`" below.
+- ACME challenges with prefix `/.well-known/acme-challenge/` are always forced to be considered insecure, since they are not supposed to arrive over HTTPS.
+- $AESproductName$ provides native handling of ACME challenges. If you are using this support, $AESproductName$ will automatically arrange for insecure ACME challenges to be handled correctly. If you are handling ACME yourself - as you must when running $OSSproductName$ - you will need to supply appropriate `Host` resources and `Mapping`s to correctly direct ACME challenges to your ACME challenge handler.
+
+## TLS settings
+
+The `Host` is responsible for high-level TLS configuration in $productName$. There are
+several settings covering TLS:
+
+### ACME support
+
+$AESproductName$ comes with built-in support for automatic certificate
+management using the [ACME protocol](https://tools.ietf.org/html/rfc8555).
+
+It does this by using the `hostname` of a `Host` to request a certificate from
+the `acmeProvider.authority` using the `HTTP-01` challenge. After requesting a
+certificate, $AESproductName$ will then manage the renewal process automatically.
+
+The `acmeProvider` element of the `Host` configures the Certificate Authority
+$AESproductName$ will request the certificate from and the email address that the CA
+will use to notify about any lifecycle events of the certificate.
+
+```yaml
+acmeProvider:
+  authority: url-to-provider
+  email: email-of-registrant
+```
+
+**Notes on ACME Support:**
+
+- If the authority is not supplied, the Let’s Encrypt production environment is assumed.
+
+- In general, `email-of-registrant` is mandatory when using ACME: it should be
+  a valid email address that will reach someone responsible for certificate
+  management.
+
+- ACME stores certificates in Kubernetes secrets. The name of the secret can be
+  set using the `tlsSecret` element:
+
+  ```yaml
+  acmeProvider:
+    email: user@example.com
+  tlsSecret:
+    name: tls-cert
+  ```
+
+  If not supplied, a name will be automatically generated from the `hostname` and `email`.
+
+- $AESproductName$ uses the [`HTTP-01` challenge
+  ](https://letsencrypt.org/docs/challenge-types/) for ACME support:
+  - It does not require permission to edit DNS records.
+  - The `hostname` must be reachable from the internet so the CA can `POST`
+    to an endpoint in $AESproductName$.
+  - Wildcard domains are not supported.
+
+### `tlsSecret` enables TLS termination
+
+`tlsSecret` specifies a Kubernetes `Secret`, and is **required** for any TLS termination to occur. If ACME is enabled,
+it will set `tlsSecret`; in all other cases, TLS termination will not occur if `tlsSecret` is not specified.
+
+The following `Host` will configure $productName$ to read a `Secret` named
+`tls-cert` for a certificate to use when terminating TLS:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: none
+  tlsSecret:
+    name: tls-cert
+```
+
+### `tlsContext` links to a `TLSContext` for additional configuration
+
+`tlsContext` specifies a [`TLSContext`](#) to use for additional TLS information. Note that you **must** still
+define `tlsSecret` for TLS termination to happen. It is an error to supply both `tlsContext` and `tls`.
+
+See the [TLS discussion](../tls) for more details.
+
+### `tls` allows manually providing additional configuration
+
+`tls` allows specifying most of the things a `TLSContext` can, inline in the `Host`. Note that you **must** still
+define `tlsSecret` for TLS termination to happen. It is an error to supply both `tlsContext` and `tls`.
+
+See the [TLS discussion](../tls) for more details.
+
+## Load balancers, the `Host` resource, and `X-Forwarded-Proto`
+
+In a typical installation, $productName$ runs behind a load balancer. The
+configuration of the load balancer can affect how $productName$ sees requests
+arriving from the outside world, which can in turn affect whether $productName$
+considers the request secure or insecure. As such:
+
+- **We recommend layer 4 load balancers** unless your workload includes
+  long-lived connections with multiple requests arriving over the same
+  connection. For example, a workload with many requests carried over a small
+  number of long-lived gRPC connections.
+- **$productName$ fully supports TLS termination at the load balancer** with a single exception, listed below.
+- If you are using a layer 7 load balancer, **it is critical that the system be configured correctly**:
+  - The load balancer must correctly handle `X-Forwarded-For` and `X-Forwarded-Proto`.
+  - The `l7Depth` element in the [`Listener` CRD](../../running/listener) must be set to the number of layer 7 load balancers the request passes through to reach $productName$ (in the typical case, where the client speaks to the load balancer, which then speaks to $productName$, you would set `l7Depth` to 1). If `l7Depth` remains at its default of 0, the system might route correctly, but upstream services will see the load balancer's IP address instead of the actual client's IP address.
+
+It's important to realize that Envoy manages the `X-Forwarded-Proto` header such that it **always** reflects the most trustworthy information Envoy has about whether the request arrived encrypted or unencrypted. If no `X-Forwarded-Proto` is received from downstream, **or if it is considered untrustworthy**, Envoy will supply an `X-Forwarded-Proto` that reflects the protocol used for the connection to Envoy itself. The `l7Depth` element is also used when determining trust for `X-Forwarded-For`, and it is therefore important to set it correctly. Its default of 0 should always be correct when $productName$ is behind only layer 4 load balancers; it should need to be changed **only** when layer 7 load balancers are involved.
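+
+As a concrete sketch of the `l7Depth` setting (the resource name and port here are illustrative assumptions, not prescriptive), a `Listener` sitting behind a single layer 7 load balancer that terminates TLS could trust `X-Forwarded-Proto` via the `XFP` security model:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: https-listener
+spec:
+  port: 8443
+  protocol: HTTPS
+  securityModel: XFP   # secure vs. insecure is decided from X-Forwarded-Proto
+  l7Depth: 1           # exactly one trusted layer 7 load balancer in front
+  hostBinding:
+    namespace:
+      from: ALL        # consider Hosts from all namespaces
+```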
+
+### CRD specification
+
+The `Host` CRD is formally described by its protobuf specification. Developers who need access to the specification can find it [here](https://github.com/emissary-ingress/emissary/blob/release/v2.0/api/getambassador.io/v2/Host.proto).
diff --git a/docs/edge-stack/latest/topics/running/tls/cleartext-redirection.md b/docs/edge-stack/latest/topics/running/tls/cleartext-redirection.md
new file mode 100644
index 000000000..74fc88eeb
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/tls/cleartext-redirection.md
@@ -0,0 +1,91 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Cleartext support
+
+While most modern web applications choose to encrypt all traffic, there remain
+cases where supporting cleartext communications is important. $productName$ supports
+both forcing [automatic redirection to HTTPS](#http-https-redirection) and
+[serving cleartext](#cleartext-routing) traffic on a `Host`.
+
+
+  If no Hosts are defined, $productName$ enables HTTP->HTTPS redirection. You will
+  need to explicitly create a Host to enable cleartext communication at all.
+
+
+
+  The Listener and
+  Host CRDs work together to manage HTTP and HTTPS routing.
+  This document is meant as a quick reference to the Host resource: for a more complete
+  treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
+
+
+## Cleartext Routing
+
+To allow cleartext to be routed, set the `requestPolicy.insecure.action` of a `Host` to `Route`:
+
+```yaml
+requestPolicy:
+  insecure:
+    action: Route
+```
+
+This allows routing for both HTTP and HTTPS, or _only_ HTTP, depending on the `tlsSecret` configuration:
+
+- If the `Host` does not specify a `tlsSecret`, it will only route HTTP, not terminating TLS at all.
+- If the `Host` does specify a `tlsSecret`, it will route both HTTP and HTTPS.
+
+
+  If no Hosts are defined, $productName$ enables HTTP->HTTPS redirection. You will
+  need to explicitly create a Host to enable cleartext communication at all.
+
+
+
+  The Listener and
+  Host CRDs work together to manage HTTP and HTTPS routing.
+  This document is meant as a quick reference to the Host resource: for a more complete
+  treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
+
+
+## HTTP->HTTPS redirection
+
+Most websites that force HTTPS will also automatically redirect any
+requests that come in over HTTP:
+
+```
+Client                          $productName$
+|                               |
+| http://<hostname>/api         |
+| -------------------------->   |
+|                               |
+| 301: https://<hostname>/api   |
+| <--------------------------   |
+|                               |
+| https://<hostname>/api        |
+| -------------------------->   |
+|                               |
+```
+
+In $productName$, this is configured by setting the `insecure.action` in a `Host` to `Redirect`.
+
+```yaml
+requestPolicy:
+  insecure:
+    action: Redirect
+```
+
+$productName$ determines which requests are secure and which are insecure using the
+`securityModel` of the [`Listener`] that accepts the request.
+
+[`Listener`]: ../../listener
+
+
+  If no Hosts are defined, $productName$ enables HTTP->HTTPS redirection. You will
+  need to explicitly create a Host to enable cleartext communication at all.
+
+
+
+  The Listener and
+  Host CRDs work together to manage HTTP and HTTPS routing.
+  This document is meant as a quick reference to the Host resource: for a more complete
+  treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
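+
+Putting the pieces of this page together, a minimal sketch of a cleartext-only `Host` (the hostname is illustrative; `authority: none` disables ACME, and with no `tlsSecret` no TLS termination occurs) would be:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: cleartext-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: none    # no ACME; with no tlsSecret, this Host routes HTTP only
+  requestPolicy:
+    insecure:
+      action: Route    # route insecure requests rather than redirecting them
+```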
+
diff --git a/docs/edge-stack/latest/topics/running/tls/index.md b/docs/edge-stack/latest/topics/running/tls/index.md
new file mode 100644
index 000000000..b60dcb9fd
--- /dev/null
+++ b/docs/edge-stack/latest/topics/running/tls/index.md
@@ -0,0 +1,520 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Transport Layer Security (TLS)
+
+$productName$'s robust TLS support exposes configuration options
+for different TLS use cases including:
+
+- [Simultaneously Routing HTTP and HTTPS](cleartext-redirection#cleartext-routing)
+- [HTTP -> HTTPS Redirection](cleartext-redirection#http-https-redirection)
+- [Mutual TLS](mtls)
+- [Server Name Indication (SNI)](sni)
+- [TLS Origination](origination)
+
+## Certificates and Secrets
+
+Properly-functioning TLS requires the use of [TLS certificates] to prove that the
+various systems communicating are who they say they are. At minimum, $productName$
+must have a server certificate that identifies it to clients; when [mTLS](./mtls)
+or [client certificate authentication] are in use, additional certificates are needed.
+
+You supply certificates to $productName$ in Kubernetes [TLS Secrets]. These Secrets
+_must_ contain valid X.509 certificates with valid PKCS1, PKCS8, or Elliptic Curve private
+keys. If a Secret does not contain a valid certificate, an error message will be logged, for
+example:
+
+```
+tls-broken-cert.default.1 2 errors:; 1. K8sSecret secret tls-broken-cert.default tls.key cannot be parsed as PKCS1 or PKCS8: asn1: syntax error: data truncated; 2. K8sSecret secret tls-broken-cert.default tls.crt cannot be parsed as x.509: x509: malformed certificate
+```
+
+If you set the `AMBASSADOR_FORCE_SECRET_VALIDATION` environment variable, the invalid
+Secret will be rejected, and a `Host` or `TLSContext` resource attempting to use an invalid
+certificate will be disabled entirely. **Note** that in $productName$ $version$, this
+includes disabling cleartext communication for such a `Host`.
+
+[tls certificates]: https://protonmail.com/blog/tls-ssl-certificate/
+[client certificate authentication]: ../../../howtos/client-cert-validation/
+[tls secrets]: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets
+
+## `Host`
+
+A `Host` represents a domain in $productName$ and defines how the domain manages TLS. For more information on the Host resource, see [The Host CRD reference documentation](../host-crd).
+
+**If no `Host`s are present**, $productName$ synthesizes a `Host` that
+terminates TLS using a self-signed TLS certificate, and redirects cleartext
+traffic to HTTPS. You will need to explicitly define `Host`s to change this behavior
+(for example, to use a different certificate or to route cleartext).
+
+
+  The examples below do not define a requestPolicy; however, most real-world
+  usage of $productName$ will require defining the requestPolicy.
+
+ For more information, please refer to the Host documentation. +
+
+### Automatic TLS with ACME
+
+With $AESproductName$, you can configure the `Host` to manage TLS by
+requesting a certificate from a Certificate Authority using the
+[ACME HTTP-01 challenge](https://letsencrypt.org/docs/challenge-types/).
+
+After you create a DNS record, configure $AESproductName$ to get a certificate from the default CA, [Let's Encrypt](https://letsencrypt.org), by providing a hostname and your email for the certificate:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: https://acme-v02.api.letsencrypt.org/directory # Optional: The CA you want to get your certificate from. Defaults to Let's Encrypt
+    email: julian@example.com
+```
+
+$AESproductName$ will now request a certificate from the CA and store it in a Secret
+in the same namespace as the `Host`.
+
+**If you use ACME for multiple Hosts, add a wildcard Host too.**
+This is required to manage a known issue. This issue will be resolved in a future Ambassador Edge Stack release.
+
+### Bring your own certificate
+
+The `Host` can read a certificate from a Kubernetes Secret and use that certificate
+to terminate TLS on a domain.
+
+The following example shows the certificate contained in the Kubernetes Secret named
+`host-secret` configured to have $productName$ terminate TLS on the `host.example.com`
+domain:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  tlsSecret:
+    name: host-secret
+```
+
+By default, `tlsSecret` will only look for the named secret in the same namespace as the `Host`.
+In the above example, the secret `host-secret` will need to exist within the `default` namespace
+since that is the namespace of the `Host`.
+
+To reference a secret that is in a different namespace from the `Host`, the `namespace` field is required.
+The example below configures the `Host` to use the `host-secret` secret from the `example` namespace:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: none
+  tlsSecret:
+    name: host-secret
+    namespace: example
+```
+
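+For reference, a sketch of the shape of the `Secret` these examples name (certificate and key contents elided; any standard `kubernetes.io/tls` Secret works):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: host-secret
+  namespace: example
+type: kubernetes.io/tls
+data:
+  tls.crt: <base64-encoded certificate chain>
+  tls.key: <base64-encoded private key>
+```
+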
+ + + +The following fields are accepted in the `tls` field: + +```yaml +tls: + cert_chain_file: # string + private_key_file: # string + ca_secret: # string + cacert_chain_file: # string + alpn_protocols: # string + cert_required: # bool + min_tls_version: # string + max_tls_version: # string + cipher_suites: # array of strings + ecdh_curves: # array of strings + sni: # string + crl_secret: # string +``` + +These fields have the same function as in the [`TLSContext`](#tlscontext) resource, +as described below. + +### `Host` and `TLSContext` + +You can link a `Host` to a [`TLSContext`](#tlscontext) instead of defining `tls` +settings in the `Host` itself. This is primarily useful for sharing settings between +multiple `Host`s. + +#### Link a `TLSContext` to the `Host` + + + It is invalid to use both the tls setting and the tlsContext + setting on the same Host. The recommended setting is using the tls setting + unless you have multiple Hosts that need to share TLS configuration. + + +To link a [`TLSContext`](#tlscontext) with a `Host`, create a [`TLSContext`](#tlscontext) +with the desired configuration and link it to the `Host` by setting the `tlsContext.name` +field in the `Host`. For example, to enforce a minimum TLS version on the `Host` above, +create a `TLSContext` with any name with the following configuration: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: min-tls-context +spec: + hosts: + - host.example.com + secret: min-secret + min_tls_version: v1.2 +``` + +Next, link it to the `Host` via the `tlsContext` field as shown: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: example-host +spec: + hostname: host.example.com + tlsSecret: + name: min-secret + tlsContext: + name: min-tls-context +``` + + + + The `Host` and the `TLSContext` must name the same Kubernetes Secret; if not, + $productName$ will disable TLS for the `Host`. + + + + + + The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. + If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid + certificate, $productName$ will reject the Secret and completely disable the + `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above. + + + + + + The `Host`'s `hostname` and the `TLSContext`'s `hosts` must have compatible settings. If + they do not, requests may not be accepted. + + + +See [`TLSContext`](#tlscontext) below to read more on the description of these fields. + +#### Create a `TLSContext` with the name `{{AMBASSADORHOST}}-context` (DEPRECATED) + + + This implicit TLSContext linkage is deprecated and will be removed + in a future version of $productName$; it is not recommended for new + configurations. Any other TLS configuration in the Host will override + this implict TLSContext link. 
+ + +The `Host` will implicitly link to the `TLSContext` when a `TLSContext` exists with the following: + +- the name `{{NAME_OF_AMBASSADORHOST}}-context` +- `hosts` in the `TLSContext` set to the same value as `hostname` in the `Host`, and +- `secret` in the `TLSContext` set to the same value as `tlsSecret` in the `Host` + +**As noted above, this implicit linking is deprecated.** + +For example, another way to enforce a minimum TLS version on the `Host` above would +be to simply create the `TLSContext` with the name `example-host-context` and then not modify the `Host`: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: example-host-context +spec: + hosts: + - host.example.com + secret: host-secret + min_tls_version: v1.2 +``` + + + + The `Host` and the `TLSContext` must name the same Kubernetes Secret; if not, + $productName$ will disable TLS for the `Host`. + + + + + + The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. + If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid + certificate, $productName$ will reject the Secret and completely disable the + `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above. + + + + + + The `Host`'s `hostname` and the `TLSContext`'s `hosts` must have compatible settings. If + they do not, requests may not be accepted. + + + +Full reference for all options available to the `TLSContext` can be found [below](#tlscontext). + +## TLSContext + +The `TLSContext` is used to configure advanced TLS options in $productName$. +Remember, a `TLSContext` must always be paired with a `Host`. + +A full schema of the `TLSContext` can be found below with descriptions of the +different configuration options. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: example-host-context +spec: + # 'hosts' defines the hosts for which this TLSContext is relevant. + # It ties into SNI. A TLSContext without "hosts" is useful only for + # originating TLS. + # type: array of strings + # + # hosts: [] + + # 'sni' defines the SNI string to use on originated connections. + # type: string + # + # sni: None + + # 'secret' defines a Kubernetes Secret that contains the TLS certificate we + # use for origination or termination. If not specified, $productName$ will look + # at the value of cert_chain_file and private_key_file. + # type: string + # + # secret: None + + # 'ca_secret' defines a Kubernetes Secret that contains the TLS certificate we + # use for verifying incoming TLS client certificates. + # type: string + # + # ca_secret: None + + # Tells $productName$ whether to interpret a "." in the secret name as a "." or + # a namespace identifier. + # type: boolean + # + # secret_namespacing: true + + # 'cert_required' can be set to true to _require_ TLS client certificate + # authentication. + # type: boolean + # + # cert_required: false + + # 'alpn_protocols' is used to enable the TLS ALPN protocol. It is required + # if you want to do GRPC over TLS; typically it will be set to "h2" for that + # case. + # type: string (comma-separated list) + # + # alpn_protocols: None + + # 'min_tls_version' sets the minimum acceptable TLS version: v1.0, v1.1, + # v1.2, or v1.3. It defaults to v1.0. + # min_tls_version: v1.0 + + # 'max_tls_version' sets the maximum acceptable TLS version: v1.0, v1.1, + # v1.2, or v1.3. It defaults to v1.3. + # max_tls_version: v1.3 + + # Tells $productName$ to load TLS certificates from a file in its container. 
  # type: string
  #
  # cert_chain_file: None
  # private_key_file: None
  # cacert_chain_file: None
```

 
  `secret` and (if used) `ca_secret` must specify Kubernetes Secrets containing valid TLS
  certificates. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and either Secret contains
  an invalid certificate, $productName$ will reject the Secret, which will also completely
  disable any `Host` using the `TLSContext`; see [**Certificates and Secrets**](#certificates-and-secrets)
  above.
 

### ALPN protocols

The `alpn_protocols` setting configures the TLS ALPN protocol. To use gRPC over
TLS, set `alpn_protocols: h2`. If you need to support HTTP/2 upgrade from
HTTP/1, set `alpn_protocols: h2,http/1.1` in the configuration.

#### HTTP/2 support

The `alpn_protocols` setting is also required for HTTP/2 support.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: tls
spec:
  secret: ambassador-certs
  hosts: ['*']
  alpn_protocols: h2[, http/1.1]
```

Without setting `alpn_protocols` as shown above, HTTP/2 will not be available via
negotiation and will have to be explicitly requested by the client.

If you leave off `http/1.1`, only HTTP/2 connections will be supported.

### TLS parameters

The `min_tls_version` setting configures the minimum TLS protocol version that
$productName$ will use to establish a secure connection. When a client
using a lower version attempts to connect to the server, the handshake will
result in the following error: `tls: protocol version not supported`.

The `max_tls_version` setting configures the maximum TLS protocol version that
$productName$ will use to establish a secure connection. When a client
using a higher version attempts to connect to the server, the handshake will
result in the following error:
`tls: server selected unsupported protocol version`.

The `cipher_suites` setting configures the supported ciphers (listed below) using the
[configuration parameters for BoringSSL](https://commondatastorage.googleapis.com/chromium-boringssl-docs/ssl.h.html#Cipher-suite-configuration) when negotiating a TLS 1.0-1.2 connection.
This setting has no effect when negotiating a TLS 1.3 connection. When a client does not
support a matching cipher, a handshake error will result.

The `ecdh_curves` setting configures the supported ECDH curves when negotiating
a TLS connection. When a client does not support a matching ECDH curve, a handshake
error will result.

```
  - AES128-SHA
  - AES256-SHA
  - AES128-GCM-SHA256
  - AES256-GCM-SHA384
  - ECDHE-RSA-AES128-SHA
  - ECDHE-RSA-AES256-SHA
  - ECDHE-RSA-AES128-GCM-SHA256
  - ECDHE-RSA-AES256-GCM-SHA384
  - ECDHE-RSA-CHACHA20-POLY1305
  - ECDHE-ECDSA-AES128-SHA
  - ECDHE-ECDSA-AES256-SHA
  - ECDHE-ECDSA-AES128-GCM-SHA256
  - ECDHE-ECDSA-AES256-GCM-SHA384
  - ECDHE-ECDSA-CHACHA20-POLY1305
  - ECDHE-PSK-AES128-CBC-SHA
  - ECDHE-PSK-AES256-CBC-SHA
  - ECDHE-PSK-CHACHA20-POLY1305
  - PSK-AES128-CBC-SHA
  - PSK-AES256-CBC-SHA
  - DES-CBC3-SHA
```

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: tls
spec:
  hosts: ['*']
  secret: ambassador-certs
  min_tls_version: v1.0
  max_tls_version: v1.3
  cipher_suites:
  - '[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]'
  - '[ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]'
  ecdh_curves:
  - X25519
  - P-256
```

The `crl_secret` field allows you to reference a Kubernetes Secret that contains a certificate revocation list.
If specified, $productName$ will verify that the presented peer certificate has not been revoked by this CRL, even if it is otherwise valid. This provides a way to reject certificates before they expire, for example if they have been compromised.
The `crl_secret` field takes a PEM-formatted [Certificate Revocation List](https://en.wikipedia.org/wiki/Certificate_revocation_list) in a `crl.pem` entry.

Note that if a CRL is provided for any certificate authority in a trust chain, a CRL must be provided for all certificate authorities in that chain. Failure to do so will result in verification failure for both revoked and unrevoked certificates from that chain.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: tls-crl
spec:
  hosts: ["*"]
  secret: ambassador-certs
  min_tls_version: v1.0
  max_tls_version: v1.3
  crl_secret: 'ambassador-crl'
---
apiVersion: v1
kind: Secret
metadata:
  name: ambassador-crl
  namespace: ambassador
type: Opaque
data:
  crl.pem: |
    {BASE64 CRL CONTENTS}
---
```
diff --git a/docs/edge-stack/latest/topics/using/dev-portal.md b/docs/edge-stack/latest/topics/using/dev-portal.md new file mode 100644 index 000000000..4b405b0a8 --- /dev/null +++ b/docs/edge-stack/latest/topics/using/dev-portal.md @@ -0,0 +1,425 @@
> **Developer Portal API visualization is now available in Ambassador Cloud. These docs will remain as a historical reference for hosted Developer Portal installations. [Go to the quick start guide](/docs/cloud/latest/visualize-api/quick-start/).**

# Developer Portal

## Rendering API documentation

The _Dev Portal_ uses the `Mapping` resource to automatically discover services known by
the Ambassador Edge Stack.

For each `Mapping`, the _Dev Portal_ will attempt to fetch an OpenAPI V3 document
when a `docs` attribute is specified.

### `docs` attribute in `Mapping`s

This documentation endpoint is defined by the optional `docs` attribute in the `Mapping`.

```yaml
  docs:
    path: "string" # optional; default is ""
    url: "string" # optional; default is ""
    ignored: bool # optional; default is false
    display_name: "string" # optional; default is ""
```

where:

* `path`: path for the OpenAPI V3 document.
The Ambassador Edge Stack will append the value of `docs.path` to the `prefix`
in the `Mapping` so it will be able to use Envoy's routing capabilities for
fetching the documentation from the upstream service. You will need to update
your microservice to return a Swagger or OpenAPI document at this URL.
* `url`: absolute URL to an OpenAPI V3 document.
* `ignored`: ignore this `Mapping` for documenting services. Note that the service
will appear in the _Dev Portal_ anyway if another, non-ignored `Mapping` exists
for the same service.
* `display_name`: custom name to show for this service in the _Dev Portal_.

> Note:
>
> Previous versions of the _Dev Portal_ tried to obtain documentation automatically
> from `/.ambassador-internal/openapi-docs` by default, while the current version
> will not try to obtain documentation unless a `docs` attribute is specified.
> Users should set `docs.path` to `/.ambassador-internal/openapi-docs` in their `Mapping`s
> in order to keep the previous behavior.
>
> The `docs` field of Mappings was not introduced until `Ambassador Edge Stack` version 1.9, because Ambassador automatically searched for docs on `/.ambassador-internal/openapi-docs`.
> Make sure to update your CRDs with the following command if you are encountering problems after upgrading from an earlier version of Ambassador.

```
kubectl apply -f https://app.getambassador.io/yaml/edge-stack/$version$/aes-crds.yaml
```

> If you are on an earlier version of Ambassador, either upgrade to a newer version, or make your documentation available on `/.ambassador-internal/openapi-docs`.

Example:

With the `Mapping`s below, the _Dev Portal_ would fetch OpenAPI documentation
from `service-a:5000` at the path `/srv/openapi/` and from `httpbin` from an
external URL. `service-b` would have no documentation.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: service-a
spec:
  prefix: /service-a/
  rewrite: /srv/
  service: service-a:5000
  docs:
    path: /openapi/ ## docs will be obtained from `/srv/openapi/`
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: service-b
spec:
  prefix: /service-b/
  service: service-b ## no `docs` attribute, so service-b will not be documented
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: regular-httpbin
spec:
  hostname: "*"
  host_rewrite: httpbin.org
  prefix: /httpbin/
  service: httpbin.org
  docs:
    url: https://httpbin.org/spec.json
```

> Notes on access to documentation `path`s:
>
> By default, all the `path`s where documentation has been found will **NOT** be publicly
> exposed by the Ambassador Edge Stack. This is controlled by a special
> `FilterPolicy` installed internally.

> Limitations on Mappings with a `host` attribute:
>
> The Dev Portal will ignore `Mapping`s that contain `host`s that cannot be
> parsed as a valid hostname, or use a regular expression (when `host_regex: true`).

### Publishing the documentation

All rendered API documentation is published at the `/docs/` URL by default. Users can
achieve a higher level of customization by creating a `DevPortal` resource.
`DevPortal` resources allow the customization of:

- _what_ documentation is published
- _how_ it looks

Users can create a `DevPortal` resource for specifying the default configuration for
the _Dev Portal_, filtering `Mappings` and namespaces and specifying the content.

> Note: when several `DevPortal` resources exist, the Dev Portal will pick a random
> one and ignore the rest. A specific `DevPortal` can be used as the default configuration
> by setting the `default` attribute to `true`. Future versions will
> use other `DevPortals` for configuring alternative _views_ of the Dev Portal.
`DevPortal` resources have the following syntax:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: "string"
  namespace: "string"
spec:
  default: bool ## optional; default false
  docs: ## optional; default is []
  - service: "string" ## required
    url: "string" ## required
  content: ## optional
    url: "string" ## optional; see below
    branch: "string" ## optional; see below
    dir: "string" ## optional; see below
  selector: ## optional
    matchNamespaces: ## optional; default is []
    - "string"
    matchLabels: ## optional; default is {}
      "string": "string"
  naming_scheme: "string" ## optional; supported values [ "namespace.name", "name.prefix" ]; default "namespace.name"
  preserve_servers: bool ## optional; default false
  search:
    enabled: bool ## optional; default false
    type: "string" ## optional; supported values ["title-only", "all-content"]; default "title-only"
```

where:

* `default`: `true` when this is the default Dev Portal configuration.
* `content`: see [section below](#styling-the-devportal).
* `selector`: rules for filtering `Mapping`s:
  * `matchNamespaces`: list of namespaces, used for filtering the `Mapping`s that
    will be shown in the `DevPortal`. When multiple namespaces are provided, the `DevPortal`
    will consider `Mapping`s in **any** of those namespaces.
  * `matchLabels`: dictionary of labels, filtering the `Mapping`s that will
    be shown in the `DevPortal`. When multiple labels are provided, the `DevPortal`
    will only consider the `Mapping`s that match **all** the labels.
* `docs`: static list of _service_/_documentation_ pairs that will be shown
  in the _Dev Portal_. Only the documentation from this list will be shown in the _Dev Portal_
  (unless additional docs are included with a `selector`).
  * `service`: service name used for listing user-provided documentation.
  * `url`: a full URL to an OpenAPI document for this service. This document will be
    served _as it is_, with no extra processing from the _Dev Portal_ (besides replacing
    the _hostname_).
* `naming_scheme`: Configures how DevPortal docs are displayed and linked to in the UI.
  * "namespace.name" will display the docs with the namespace and name of the mapping.
    e.g. a Mapping named `quote` in namespace `default` will be displayed as `default.quote`
    and its docs will have the relative path of `/default/quote`
  * "name.prefix" will display the docs with the name and prefix of the mapping.
    e.g. a Mapping named `quote` with a prefix `backend` will be displayed as `quote.backend`
    and its docs will have the relative path of `/quote/backend`
* `preserve_servers`: Configures the DevPortal to no longer dynamically build server definitions
  for the "try it out" request builder by using the Edge Stack hostname. When set to `true`, the
  DevPortal will instead display the server definitions from the `servers` section of the Open API
  docs supplied to the DevPortal for the service.
* `search`: as of Edge Stack 1.13.0, the DevPortal content is searchable.
  * `enabled`: default `false`; set to `true` to enable search functionality.
    * When `enabled=false`, the DevPortal search endpoint (`/[DEVPORTAL_PATH]/api/search`) will return an empty response
  * `type`: Configure the items fed into search
    * `title-only` (default): only search over the names of DevPortal services and markdown pages
    * `all-content`: Search over openapi spec content and markdown page content.
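
For instance, a minimal sketch of a default `DevPortal` that enables full-content search (using the `ambassador` resource name, as in the examples that follow):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  search:
    enabled: true       ## turn on the /[DEVPORTAL_PATH]/api/search endpoint
    type: "all-content" ## index OpenAPI specs and markdown content, not just titles
```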
Example:

The scope of the default _Dev Portal_ can be restricted to
`Mappings` with the `public-api: true` and `documented: true` labels by creating
a `DevPortal` `ambassador` resource like this:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  content:
    url: https://github.com/datawire/devportal-content-v2.git
  selector:
    matchLabels:
      public-api: "true" ## labels for matching only some Mappings
      documented: "true" ## (note that "true" must be quoted)
```

Example:

The _Dev Portal_ can show a static list of OpenAPI docs. In this example, an `eks.aws-demo`
_service_ is shown with the documentation obtained from a URL. In addition,
the _Dev Portal_ will show documentation for all the services discovered in the
`aws-demo` namespace:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  docs:
  - service: eks.aws-demo
    url: https://api.swaggerhub.com/apis/kkrlogistics/amazon-elastic_kubernetes_service/2017-11-01/swagger.json
  selector:
    matchNamespaces:
    - aws-demo ## matches all the services in the `aws-demo` namespace
               ## (note that Mappings must contain a `docs` attribute)
```

> Note:
>
> The free and unlicensed versions of `Ambassador Edge Stack` only support documentation for five services in the `DevPortal`.
> When you start publishing documentation for more services to your `DevPortal`, keep in mind that you will not see more than five OpenAPI documents even if you have more than five services properly configured to report their OpenAPI specifications.
> For more information on extending the number of services in your `DevPortal` please contact sales via our [pricing information page](/editions/).


## Styling the `DevPortal`

The look and feel of a `DevPortal` can be fully customized for your particular
organization by specifying a different `content`, customizing not only _what_
is shown but _how_ it is shown, and making it possible to
add specific content to your API documentation (e.g., best practices,
usage tips, etc.) depending on where it has been published.

The default _Dev Portal_ content is loaded in order from:

- the `ambassador` `DevPortal` resource.
- the Git repo specified in the optional `DEVPORTAL_CONTENT_URL` environment variable.
- the default repository at [GitHub](https://github.com/datawire/devportal-content-v2.git).

To use your own styling, clone or copy the repository, create an `ambassador` `DevPortal`
and update the `content` attribute to point to the repository.
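For example, a sketch that points the default portal at your own copy of the content repository, pinning a branch and subdirectory (the repository URL, branch, and directory here are illustrative):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  content:
    url: https://github.com/example-org/my-devportal-content.git ## illustrative clone of the content repo
    branch: main ## defaults to "master" when omitted
    dir: /       ## subdirectory within the repo; defaults to "/"
```
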
If you wish to use a
private GitHub repository, create a [Personal Access Token](https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line)
and include it in the `content` following the example below:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  content:
    url: https://9cb034008ddfs819da268d9z13b7ecd26@github.com/datawire/private-devportal-repo.git
  selector:
    matchLabels:
      public-api: "true"
```

The `content` can have the following attributes:

```yaml
  content:
    url: "string" ## optional; default is the default repo
    branch: "string" ## optional; default is "master"
    dir: "string" ## optional; default is "/"
```

where:

* `url`: Git URL for the content
* `branch`: the Git branch
* `dir`: subdirectory in the Git repo

#### Iterating on _Dev Portal_ styling and content

**Local Development**

Check out a local copy of your content repo and, from within it, run the following docker image:

```
docker run -it --rm --volume $PWD:/content --entrypoint local-devportal --publish 1080:1080 \
  docker.io/datawire/aes:$version$ /content
```

and open `http://localhost:1080` in your browser. Any changes made locally to
devportal content will be reflected immediately on page refresh.

> Note:
>
> The docker command above will only work for AES versions 1.13.0+.

**Remote Ambassador**

After committing and pushing changes to your devportal content repo, set your DevPortal to fetch from your branch:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  content:
    url: $REPO_URL
    branch: $DEVELOPMENT_BRANCH
```

Then you can force a reload of DevPortal content by hitting a refresh endpoint on your remote ambassador:

```
# first, get your ambassador service
export AMBASSADOR_LB_ENDPOINT=$(kubectl -n ambassador get svc ambassador \
  -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")

# Then refresh the DevPortal content
curl -X POST -Lk ${AMBASSADOR_LB_ENDPOINT}/docs/api/refreshContent
```

> Note:
>
> The DevPortal does not share a cache between replicas, so the content refresh endpoint
> will only refresh the content on a single replica. It is suggested that you use this
> endpoint in a single replica Edge Stack setup.

#### Customizing documentation names and paths

The _Dev Portal_ displays the documentation's Mapping name and namespace by default,
but you can override this behavior.

To change the documentation naming scheme for the entire _Dev Portal_, you can set
`naming_scheme` in the `DevPortal` resource:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: DevPortal
metadata:
  name: ambassador
spec:
  default: true
  naming_scheme: "name.prefix"
```

With the above configuration, a mapping for `service-a`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: service-a
spec:
  prefix: /path/
  service: service-a:5000
  docs:
    path: /openapi/
```

Will be displayed in the _Dev Portal_ as `service-a.path`,
and the API documentation will be accessed at `$AMBASSADOR_URL/docs/doc/service-a/path`.

You can also override the display name of documentation on a per-mapping basis.
Per-mapping overrides will take precedence over the `DevPortal` `naming_scheme`.
A mapping for `service-b` with `display_name` set:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: service-b
spec:
  prefix: /otherpath/
  service: service-b:5000
  docs:
    path: /openapi/
    display_name: "Cat Service"
```

Will be displayed in the _Dev Portal_ as `Cat Service`, and the documentation will be
accessed at `$AMBASSADOR_URL/docs/doc/Cat%20Service`.


## Default configuration

The _Dev Portal_ supports default configuration through environment variables
(kept for backwards compatibility).

### Environment variables

The _Dev Portal_ can also obtain some default configuration from environment variables
defined in the AES `Deployment`. This configuration method is considered deprecated and
kept only for backwards compatibility: users should configure the default values with
the `ambassador` `DevPortal`.

| Setting | Description |
| ------------------------ | ------------------------------------------------------------------------------ |
| AMBASSADOR_URL | External URL of Ambassador Edge Stack; include the protocol (e.g., `https://`) |
| POLL_EVERY_SECS | Interval for polling OpenAPI docs; default 60 seconds |
| DEVPORTAL_CONTENT_URL | Default URL to the repository hosting the content for the Portal |
| DEVPORTAL_CONTENT_DIR | Default content subdir (defaults to `/`) |
| DEVPORTAL_CONTENT_BRANCH | Default content branch (defaults to `master`) |
| DEVPORTAL_DOCS_BASE_PATH | Base path for api docs (defaults to `/doc/`) |

## Visualize your API documentation in the cloud

If you haven't already done so, you may want to [connect your cluster to Ambassador Cloud](../../../tutorials/getting-started). Connected clusters will automatically report your `Mapping`s' OpenAPI documents, allowing you to host and visualize all of your services' API documentation on a shared, secure and authenticated platform.
diff --git a/docs/edge-stack/latest/topics/using/filters/apikeys.md b/docs/edge-stack/latest/topics/using/filters/apikeys.md new file mode 100644 index 000000000..b65bb9afe --- /dev/null +++ b/docs/edge-stack/latest/topics/using/filters/apikeys.md @@ -0,0 +1,35 @@
import Alert from '@material-ui/lab/Alert';

# API Keys Filter

The API Keys filter validates API keys presented in an HTTP request header. The list of authorized API keys is defined directly in a Secret.

## API Keys global arguments

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "example-apikeys-filter"
  namespace: "example-namespace"
spec:
  APIKey:
    httpHeader: "x-my-api-key-header" # optional; default is X-API-Key
    keys:
    - secretName: "my-secret-api-keys"
---
apiVersion: v1
kind: Secret
metadata:
  namespace: ambassador
  name: my-secret-api-keys
data:
  key-one: bXktZmlyc3QtYXBpLWtleQ==
  key-two: bXktc2Vjb25kLWFwaS1rZXk=
```

 - `httpHeader` is the HTTP header used for API key validation.

 - `keys`: a list of API keys that will be used for validation. Keys can be defined in a Secret (as above), or a standalone key can be defined directly in the Filter resource.
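
As with Edge Stack's other filter types, the Filter takes effect only when referenced from a `FilterPolicy`. A minimal sketch reusing the names from the example above (the policy name here is hypothetical):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "example-apikeys-policy" # hypothetical name
  namespace: "example-namespace"
spec:
  rules:
  - host: "*" # apply to every Host...
    path: "*" # ...and every path
    filters:
    - name: "example-apikeys-filter"
      namespace: "example-namespace"
```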
diff --git a/docs/edge-stack/latest/topics/using/filters/external.md b/docs/edge-stack/latest/topics/using/filters/external.md new file mode 100644 index 000000000..9d24229f6 --- /dev/null +++ b/docs/edge-stack/latest/topics/using/filters/external.md @@ -0,0 +1,246 @@
# External Filter

The `External` `Filter` calls out to an external service speaking the
[`ext_authz` protocol](../../../running/services/ext-authz), providing
a highly flexible interface to plug in your own authentication,
authorization, and filtering logic.

## Example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "my-filter"
  namespace: "my-namespace"
spec:
  External:
    auth_service: "https://example-auth:3000"
    proto: http
    timeout_ms: 5000
    include_body:
      max_bytes: 4096
      allow_partial: true
    status_on_error:
      code: 403
    failure_mode_allow: false

    # proto: http only
    path_prefix: "/path"
    allowed_request_headers:
    - "x-allowed-input-header"
    allowed_authorization_headers:
    - "x-allowed-output-header"
    add_linkerd_headers: false
```

The `External` spec is identical to the [`AuthService`
spec](../../../running/services/auth-service), with the following
exceptions:

* In an `AuthService`, the `tls` field must be a string referring to a
  `TLSContext`. In an `External` `Filter`, it may only be a Boolean;
  referring to a `TLSContext` is not supported.
* In an `AuthService`, the default value of the `add_linkerd_headers`
  field is based on the [`ambassador`
  `Module`](../../../running/ambassador). In an `External` `Filter`,
  the default value is always `false`.
* `External` `Filters` lack the `stats_name` and
  `add_auth_headers` fields that `AuthServices` have.

## Fields

`auth_service` is the only required field; all others are optional.

| Attribute | Default value | Description |
| --------- | ------------- | ----------- |
| `auth_service` | (none; a value is required) | Identifies the external auth service to talk to. The format of this field is `scheme://host:port` where `scheme://` and `:port` are optional. The scheme-part, if present, must be either `http://` or `https://`; if the scheme-part is not present, it behaves as if `http://` is given. The scheme-part influences the default value of the `tls` field and the default value of the port-part.
The host-part must be the [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services) of the service you want to use for authentication. |
| `tls` | `true` if `auth_service` starts with "https://" | Controls whether to use TLS or cleartext when speaking to the external auth service. The default is based on the scheme-part of the `auth_service`. |
| `tlsConfig` | none; optional | Configure TLS settings between $productName$ and the configured AuthService. See [`Configure TLS Settings`](#configuring-tls-settings). |
| `proto` | `http` | Specifies which variant of the [`ext_authz` protocol](../../../running/services/ext-authz) to use when communicating with the external auth service. Valid options are `http` or `grpc`. |
| `timeout_ms` | `5000` | The total maximum duration in milliseconds for the request to the external auth service, before triggering `status_on_error` or `failure_mode_allow`. |
| `include_body` | `null` | Controls how much to buffer the request body to pass to the external auth service, for use cases such as computing an HMAC or request signature. If `include_body` is `null` or unset, then the request body is not buffered at all, and an empty body is passed to the external auth service. If `include_body` is not `null`, the `max_bytes` and `allow_partial` subfields are required. Unfortunately, in order for `include_body` to function properly, the `AuthService` in [`aes.yaml`](https://app.getambassador.io/yaml/edge-stack/$version$/aes.yaml) must be edited to have `include_body` set with `max_bytes` greater than the largest `max_bytes` used by any `External` `Filter` (so if an `External` `Filter` has `max_bytes: 4096`, then the `AuthService` will need `max_bytes: 4097`), and `allow_partial: true`. |
| `include_body.max_bytes` | (none; a value is required if `include_body` is not `null`) | Controls the amount of body data that is passed to the external auth service. |
| `include_body.allow_partial` | (none; a value is required if `include_body` is not `null`) | Controls what happens to requests with bodies larger than `max_bytes`. If `allow_partial` is `true`, the first `max_bytes` of the body are sent to the external auth service. If `false`, the message is rejected with HTTP 413 ("Payload Too Large"). |
| `status_on_error.code` | `403` | Controls the status code returned when unable to communicate with the external auth service. This is ignored if `failure_mode_allow: true`. |
| `failure_mode_allow` | `false` | Controls whether to allow or reject requests when there is an error communicating with the external auth service; a value of `true` allows the request through to the upstream backend service, a value of `false` returns a `status_on_error.code` response to the client. |
| `protocol_version` | `v2` | Indicates the version of the transport protocol that the External Filter is using. This is only applicable to External Filters using `proto: grpc`. Allowed values are `v3` and `v2` (default). `protocol_version` was used in previous versions of $productName$ to note the protocol used by the gRPC service for the External Filter. $productName$ 3.x is running an updated version of Envoy that has dropped support for the `v2` protocol, so starting in 3.x, if `protocol_version` is not specified, the default value of `v2` will cause an error to be posted and a static response will be returned. Therefore, you must set it to `protocol_version: v3`.
If upgrading from a previous version, you will want to set it to `v3` and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for `protocol_version` remains `v2` in the `getambassador.io/v3alpha1` CRD specifications to avoid making breaking changes outside of a CRD version change. Future versions of the CRDs will deprecate it. |

The following fields are only used if `proto` is set to `http`. They
are ignored if `proto` is `grpc`.

| Attribute | Default value | Description |
| --------- | ------------- | ----------- |
| `path_prefix` | `""` | Prepends a string to the request path of the request when sending it to the external auth service. By default this is empty, and nothing is prepended. For example, if the client makes a request to `/foo`, and `path_prefix: /bar`, then the path in the request made to the external auth service will be `/foo/bar`. |
| `allowed_request_headers` | `[]` | Lists the headers (case-insensitive) that are copied from the incoming request to the request made to the external auth service. In addition to the headers listed in this field, the following headers are always included: `Authorization`, `Cookie`, `From`, `Proxy-Authorization`, `User-Agent`, `X-Forwarded-For`, `X-Forwarded-Host`, and `X-Forwarded-Proto`. |
| `allowed_authorization_headers` | `[]` | Lists the headers (case-insensitive) that are copied from the response from the external auth service to the request sent to the upstream backend service (if the external auth service indicates that the request to the upstream backend service should be allowed). In addition to the headers listed in this field, the following headers are always included: `Authorization`, `Location`, `Proxy-Authenticate`, `Set-cookie`, `WWW-Authenticate`. |
| `add_linkerd_headers` | `false` | When true, in the request to the external auth service, adds an `l5d-dst-override` HTTP header that is set to the hostname and port number of the external auth service. |


The following fields are only used if `proto` is set to `grpc`. They
are ignored if `proto` is `http`.


| Attribute | Default value | Description |
| --------- | ------------- | ----------- |
| `protocol_version` | `v2` | Indicates the version of the transport protocol that the External Filter is using. This is only applicable to External Filters using `proto: grpc`.
When left unset or set to `v2`, $productName$ will automatically convert between the `v2` protocol used by the External Filter and the `v3` protocol that is used by the `AuthService` that ships with $productName$. When this field is set to `v3`, no conversion between $productName$ and the `AuthService` takes place, since the External Filter can speak `v3` natively with $productName$'s `AuthService`. |

## Tracing Header Propagation

If $productName$ is configured to use a `TraceService`, Envoy will send tracing information as gRPC Metadata. Add the trace headers to the `allowed_request_headers` field to propagate them when using an External Filter configured with `proto: http`. For example, if using **Zipkin** with **B3 Propagation** headers you can configure your External Filter like this:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "my-ext-filter"
  namespace: "my-namespace"
spec:
  External:
    auth_service: "https://example-auth:3000"
    proto: http
    path_prefix: /check_request
    allowed_request_headers:
    - X-B3-Parentspanid
    - X-B3-Sampled
    - X-B3-Spanid
    - X-B3-Traceid
    - X-Envoy-Expected-Rq-Timeout-Ms
    - X-Envoy-Internal
    - X-Request-Id
```

## Configuring TLS Settings

When an `ExternalFilter` has the `auth_service` field configured with a URL that starts with `https://`, $productName$ will attempt to communicate with the AuthService over a TLS connection. The following configurations are supported:

1. Verify server certificate with host CA Certificates - *default when `tls: true`*
2. Verify server certificate with provided CA Certificate
3. Mutual TLS between client and server

Overall, these configuration options enhance the security of the communications between $productName$ and your `ExternalFilter` by providing a way to verify the server's certificate, allowing customization of the trust verification process, and enabling mutual TLS (mTLS) between $productName$ and the `ExternalFilter` service. By employing these security measures, users can have greater confidence in the authenticity, integrity, and confidentiality of their filter's actions, especially if it interacts with any sensitive information.

The following settings are provided for configuring the `tlsConfig`:

| Attribute | Sub-Field | Default Value | Description |
| --------- | --------- | ------------- | ----------- |
| `caCertificate` | | | Configures $productName$ to use the provided CA certificate to verify the server provided certificate. |
| | `fromSecret` | secret `namespace` defaults to Filter namespace if not set. | Provide the `name` and `namespace` (optional) of an `Opaque` [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types) that contains the `tls.crt` key with the CA Certificate. |
| `certificate` | | | Configures $productName$ to use the provided certificate to present to the server when connecting. |
| | `fromSecret` | secret `namespace` defaults to Filter namespace if not set.
| Provide the `name` and `namespace` (optional) of a `kubernetes.io/tls` [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types) that contains the private key and public certificate that will be presented to the AuthService. |

### Example - Verify Server with Custom CA Certificate

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "my-ext-filter"
  namespace: "my-namespace"
spec:
  External:
    auth_service: "https://example-auth:3000"
    proto: grpc
    tlsConfig:
      caCertificate:
        fromSecret:
          name: ca-cert-secret
          namespace: shared-certs
```

### Example - Mutual TLS (mTLS)

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "my-ext-filter"
  namespace: "my-namespace"
spec:
  External:
    auth_service: "https://example-auth:3000"
    proto: grpc
    tlsConfig:
      caCertificate:
        fromSecret:
          name: ca-cert-secret
          namespace: shared-certs
      certificate:
        fromSecret:
          name: client-cert-secret
```

## Metrics

As of $productName$ 3.4.0, the following metrics for External Filters are available via the [metrics endpoint](../../../running/statistics/8877-metrics).

| Metric | Type | Description |
| ------ | ---- | ----------- |
| `ambassador_edge_stack_external_filter_allowed` | Counter | Number of requests that were allowed by Ambassador Edge Stack External Filters. Includes requests that are allowed by failure_mode_allow when unable to connect to the External Filter. |
| `ambassador_edge_stack_external_filter_denied` | Counter | Number of requests that were denied by Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter or due to a Filter config error. |
| `ambassador_edge_stack_external_filter_error` | Counter | Number of errors returned directly from Ambassador Edge Stack External Filters and errors from an inability to connect to the External Filter. |
| `ambassador_edge_stack_external_handler_error` | Counter | Number of errors caused by Ambassador Edge Stack encountering invalid Filter config or an error while parsing the config. These errors will always result in an HTTP 500 response being returned to the client and do not count towards metrics that track response codes from external filters. |
| `ambassador_edge_stack_external_filter_rq_class` | Counter (with labels) | Aggregated counter of response code classes returned to downstream clients from Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter. |
| `ambassador_edge_stack_external_filter_rq_status` | Counter (with labels) | Counter of response codes returned to downstream clients from Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter. |


An example of what the metrics may look like can be seen below:

```
# HELP ambassador_edge_stack_external_filter_allowed Number of requests that were allowed by Ambassador Edge Stack External Filters. Includes requests that are allowed by failure_mode_allow when unable to connect to the External Filter.
# TYPE ambassador_edge_stack_external_filter_allowed counter
ambassador_edge_stack_external_filter_allowed 2

# HELP ambassador_edge_stack_external_filter_denied Number of requests that were denied by Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter or due to a Filter config error.
# TYPE ambassador_edge_stack_external_filter_denied counter
ambassador_edge_stack_external_filter_denied 12

# HELP ambassador_edge_stack_external_filter_error Number of errors returned directly from Ambassador Edge Stack External Filters and errors from an inability to connect to the External Filter
# TYPE ambassador_edge_stack_external_filter_error counter
ambassador_edge_stack_external_filter_error 2

# HELP ambassador_edge_stack_external_filter_rq_class Aggregated counter of response code classes returned to downstream clients from Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter.
# TYPE ambassador_edge_stack_external_filter_rq_class counter
ambassador_edge_stack_external_filter_rq_class{class="2xx"} 2
ambassador_edge_stack_external_filter_rq_class{class="4xx"} 5
ambassador_edge_stack_external_filter_rq_class{class="5xx"} 7

# HELP ambassador_edge_stack_external_filter_rq_status Counter of response codes returned to downstream clients from Ambassador Edge Stack External Filters. Includes requests that are denied by an inability to connect to the External Filter.
# TYPE ambassador_edge_stack_external_filter_rq_status counter
ambassador_edge_stack_external_filter_rq_status{status="200"} 2
ambassador_edge_stack_external_filter_rq_status{status="401"} 3
ambassador_edge_stack_external_filter_rq_status{status="403"} 2
ambassador_edge_stack_external_filter_rq_status{status="500"} 2
ambassador_edge_stack_external_filter_rq_status{status="501"} 5

# HELP ambassador_edge_stack_external_handler_error Number of errors caused by Ambassador Edge Stack encountering invalid Filter config or an error while parsing the config. \nThese errors will always result in a HTTP 500 response being returned to the client and do not count towards metrics that track response codes from external filters.
# TYPE ambassador_edge_stack_external_handler_error counter
ambassador_edge_stack_external_handler_error 0
```


## Transport Protocol Migration

> **Note:** The following information is only applicable to External Filters using `proto: grpc`.

As of $productName$ version 2.3, the `v2` transport protocol is deprecated and any External Filters making use
of it should migrate to `v3` before support for `v2` is removed in a future release.

The following imports simply need to be updated to migrate an External Filter:

`v2` Imports:
```
	envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
	envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
	envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
```

`v3` Imports:
```
	envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
	envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
	envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
```

In the [datawire/sample-external-service repository](https://github.com/datawire/Sample-External-Service), you can find examples of an External Filter using both the
`v2` transport protocol as well as `v3`, along with deployment instructions for reference.
The External Filter in this repo does not perform any authorization and is instead meant to serve as a reference for the operations that an External Filter can make use of.
diff --git a/docs/edge-stack/latest/topics/using/filters/index.md b/docs/edge-stack/latest/topics/using/filters/index.md new file mode 100644 index 000000000..4bf92f5c4 --- /dev/null +++ b/docs/edge-stack/latest/topics/using/filters/index.md @@ -0,0 +1,188 @@
import Alert from '@material-ui/lab/Alert';

# Filters and authentication

Filters are used to extend the Ambassador Edge Stack to modify or intercept a request before it is sent to your backend service. The most common use case for Filters is authentication, and Edge Stack includes a number of built-in filters for this purpose. Edge Stack also supports developing custom filters.

Filters are managed using a FilterPolicy resource. The FilterPolicy resource specifies a particular host or URL to match, along with a set of filters to run when a request matches the host/URL.

## Filter types

Edge Stack supports the following filter types:

* [JWT](jwt) - validates JSON Web Tokens
* [OAuth2](oauth2) - performs OAuth2 authorization against an identity provider implementing [OIDC Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html).
* [Plugin](plugin) - allows users to write custom Filters in Go that run as part of the Edge Stack container
* [External](external) - allows users to call out to other services for request processing. This can include both custom services (in any language) or third party services.
* [API Keys](apikeys) - validates API Keys present in a custom HTTP header

## Managing Filters

Filters are created with the Filter resource type, which contains global arguments to that filter. Which Filter(s) to use for which HTTP requests is then configured in FilterPolicy resources, which may contain path-specific arguments to the filter.

### Filter definition

Filters are created as Filter resources. The body of the resource spec depends on the filter type:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "string" # required; this is how to refer to the Filter in a FilterPolicy
  namespace: "string" # optional; default is the usual `kubectl apply` default namespace
spec:
  ambassador_id: # optional; default is ["default"]
  - "string"
  ambassador_id: "string" # no need for a list if there's only one value
  FILTER_TYPE:
    GLOBAL_FILTER_ARGUMENTS
```

### FilterPolicy definition

FilterPolicy resources specify which filters (if any) to apply to
which HTTP requests.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "example-filter-policy"
  namespace: "example-namespace"
spec:
  rules:
  - host: "glob-string"
    path: "glob-string"
    filters: # optional; omit or set to `null` or `[]` to apply no filters to this request
    - name: "string" # required
      namespace: "string" # optional; default is the same namespace as the FilterPolicy
      ifRequestHeader: # optional; default is to apply this filter to all requests matching the host & path
        name: "string" # required
        negate: bool # optional; default is false
        # It is invalid to specify both "value" and "valueRegex".
        value: "string" # optional; default is any non-empty string
        valueRegex: "regex-string" # optional; default is any non-empty string
      onDeny: "enum-string" # optional; default is "break"
      onAllow: "enum-string" # optional; default is "continue"
      arguments: DEPENDS # optional
```

Rule configuration values include:

| Value | Example | Description |
| ----- | ------- | ----------- |
| `host` | `*`, `foo.com` | The Host that a given rule should match |
| `path` | `/foo/url/` | The URL path that a given rule should match to |
| `filters` | `name: keycloak` | The name of a given filter to be applied |

The wildcard `*` is supported for both `path` and `host`.

The type of the `arguments` property is dependent on which Filter type is being referred to; see the "Path-Specific Arguments" documentation for each Filter type.

When multiple Filters are specified in a rule:

 * The filters are run in order
 * Each filter may either:
   * return a direct HTTP *response*, intended to be sent back to the requesting HTTP client (normally *denying* the request from being forwarded to the upstream service) OR
   * return a modification to make to the HTTP *request* before sending it to other filters or the upstream service (normally *allowing* the request to be forwarded to the upstream service with modifications).
 * If a filter has an `ifRequestHeader` setting, the filter is skipped
   unless the request (including any modifications made by earlier
   filters) has the HTTP header field `name`
   set to (or not set to if `negate: true`):
   * a non-empty string if neither `value` nor `valueRegex` are set
   * the exact string `value` (case-sensitive) (if `value` is set)
   * a string that matches the regular expression `valueRegex` (if
     `valueRegex` is set). This uses [RE2][] syntax (always, not
     obeying [`regex_type`][] in the Ambassador module) but does not
     support the `\C` escape sequence.
 * `onDeny` identifies what to do when the filter returns an "HTTP response":
   * `"break"`: End processing, and return the response directly to
     the requesting HTTP client. Later filters are not called. The request is not forwarded to the upstream service.
   * `"continue"`: Continue processing. The request is passed to the
     next filter listed; or if at the end of the list, it is forwarded to the upstream service. The HTTP response returned from the filter is discarded.
 * `onAllow` identifies what to do when the filter returns a
   "modification to the HTTP request":
   * `"break"`: Apply the modification to the request, then end filter processing, and forward the modified request to the upstream service. Later filters are not called.
   * `"continue"`: Continue processing. Apply the request modification, then pass the modified request to the next filter
     listed; or if at the end of the list, forward it to the upstream service.
 * Modifications to the request are cumulative; later filters have access to _all_ headers inserted by earlier filters.

#### FilterPolicy example

In the example below, the `param-filter` Filter Plugin is loaded and configured to run on requests to `/httpbin/`.
```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: param-filter # This is the name used in FilterPolicy
  namespace: standalone
spec:
  Plugin:
    name: param-filter # The plugin's `.so` file's base name

---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: httpbin-policy
spec:
  rules:
  # Don't apply any filters to requests for /httpbin/ip
  - host: "*"
    path: /httpbin/ip
    filters: null
  # Apply param-filter and auth0 to requests for /httpbin/
  - host: "*"
    path: /httpbin/*
    filters:
    - name: param-filter
    - name: auth0
  # Default to authorizing all requests with auth0
  - host: "*"
    path: "*"
    filters:
    - name: auth0
```

 
  Edge Stack will choose the first FilterPolicy rule that matches the incoming request. As in the above example, you must list your rules in the order of least to most generic.
 

#### Multiple domains

In this example, the `foo-keycloak` filter is used for requests to `foo.bar.com`, while the `example-auth0` filter is used for requests to `example.com`. This configuration is useful if you are hosting multiple domains in the same cluster.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: multi-domain-policy
spec:
  rules:
  - host: foo.bar.com
    path: "*"
    filters:
    - name: foo-keycloak
  - host: example.com
    path: "*"
    filters:
    - name: example-auth0
```

## Installing self-signed certificates

The JWT and OAuth2 filters speak to other services over HTTP or HTTPS. If those services are configured to speak HTTPS using a self-signed certificate, attempting to talk to them will result in an error mentioning `ERR x509: certificate signed by unknown authority`. You can fix this by installing that self-signed certificate into the AES container by copying the certificate to `/usr/local/share/ca-certificates/` and then running `update-ca-certificates`. Note that the `aes` image sets `USER 1000` but `update-ca-certificates` needs to be run as root.

The following Dockerfile will accomplish this procedure for you. When deploying Edge Stack, refer to that custom Docker image rather than to `docker.io/datawire/aes:$version$`.

```Dockerfile
FROM docker.io/datawire/aes:$version$
USER root
COPY ./my-certificate.pem /usr/local/share/ca-certificates/my-certificate.crt
RUN update-ca-certificates
USER 1000
```
diff --git a/docs/edge-stack/latest/topics/using/filters/jwt.md b/docs/edge-stack/latest/topics/using/filters/jwt.md new file mode 100644 index 000000000..562314d34 --- /dev/null +++ b/docs/edge-stack/latest/topics/using/filters/jwt.md @@ -0,0 +1,277 @@
import Alert from '@material-ui/lab/Alert';

# JWT Filter

The JWT filter type performs JWT validation on a [bearer token](https://tools.ietf.org/html/rfc6750) present in the HTTP header. If the bearer token JWT doesn't validate, or has insufficient scope, an RFC 6750-compliant error response with a `WWW-Authenticate` header is returned. The list of acceptable signing keys is loaded from a JWK Set that is loaded over HTTP, as specified in `jwksURI`. Only RSA and `none` algorithms are supported.
## JWT global arguments

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "example-jwt-filter"
  namespace: "example-namespace"
spec:
  JWT:
    jwksURI: "url-string" # required, unless the only validAlgorithm is "none"
    insecureTLS: bool # optional; default is false
    renegotiateTLS: "enum-string" # optional; default is "never"
    validAlgorithms: # optional; default is "all supported algos except for 'none'"
    - "RS256"
    - "RS384"
    - "RS512"
    - "none"

    audience: "string" # optional, unless `requireAudience: true`
    requireAudience: bool # optional; default is false

    issuer: "url-string" # optional, unless `requireIssuer: true`
    requireIssuer: bool # optional; default is false

    requireExpiresAt: bool # optional; default is false
    leewayForExpiresAt: "duration" # optional; default is "0"

    requireNotBefore: bool # optional; default is false
    leewayForNotBefore: "duration" # optional; default is "0"

    requireIssuedAt: bool # optional; default is false
    leewayForIssuedAt: "duration" # optional; default is "0"

    maxStale: "duration" # optional; default is "0"

    injectRequestHeaders: # optional; default is []
    - name: "header-name-string" # required
      value: "go-template-string" # required

    errorResponse: # optional
      contentType: "string" # deprecated; use 'headers' instead
      realm: "string" # optional; default is "{{.metadata.name}}.{{.metadata.namespace}}"
      headers: # optional; default is [{name: "Content-Type", value: "application/json"}]
      - name: "header-name-string" # required
        value: "go-template-string" # required
      bodyTemplate: "string" # optional; default is `{{ . | json "" }}`
```

 - `insecureTLS` disables TLS verification for the cases when `jwksURI` begins with `https://`. This is discouraged in favor of either using plain `http://` or [installing a self-signed certificate](../#installing-self-signed-certificates).
 - `renegotiateTLS` allows a remote server to request TLS renegotiation. Accepted values are "never", "onceAsClient", and "freelyAsClient".
 - `leewayForExpiresAt` allows tokens expired by this much to be used,
   to account for clock skew and network latency between the HTTP
   client and the Ambassador Edge Stack.
 - `leewayForNotBefore` allows tokens that shouldn't be used until
   this much in the future to be used, to account for clock skew
   between the HTTP client and the Ambassador Edge Stack.
 - `leewayForIssuedAt` allows tokens issued this much in the future to
   be used, to account for clock skew between the HTTP client and
   the Ambassador Edge Stack.
 - `maxStale` sets how long to keep stale cached OIDC replies for. This sets the `max-stale` Cache-Control directive on requests, and also ignores the `no-store` and `no-cache` Cache-Control directives on responses. This is useful for maintaining good performance when working with identity providers with misconfigured Cache-Control. Note that if you are reusing the same `authorizationURL` and `jwksURI` across different OAuth and JWT filters respectively, then you MUST set `maxStale` as a consistent value on each filter to get predictable caching behavior.
 - `injectRequestHeaders` injects HTTP header fields into the request before sending it to the upstream service, where the header value can be set based on the JWT value.
The value is specified as a [Go `text/template`][] string, with the following data made available to it:

   * `.token.Raw` → `string` the raw JWT
   * `.token.Header` → `map[string]interface{}` the JWT header, as parsed JSON
   * `.token.Claims` → `map[string]interface{}` the JWT claims, as parsed JSON
   * `.token.Signature` → `string` the token signature
   * `.httpRequestHeader` → [`http.Header`][] a copy of the header of the incoming HTTP request. Any changes to `.httpRequestHeader` (such as by using `.httpRequestHeader.Set`) have no effect. It is recommended to use `.httpRequestHeader.Get` instead of treating it as a map, in order to handle capitalization correctly.

   Also available to the template are the [standard functions available
   to Go `text/template`s][Go `text/template` functions], as well as:

   * a `hasKey` function that takes a string-indexed map as arg1,
     and returns whether it contains the key arg2. (This is the same
     as the [Sprig function of the same name][Sprig `hasKey`].)

   * a `doNotSet` function that causes the result of the template to
     be discarded, and the header field to not be adjusted. This is
     useful for only conditionally setting a header field, rather
     than setting it to an empty string (`""`). Note that
     this does _not_ unset an existing header field of the same name;
     in order to prevent the untrusted client from being able to
     spoof these headers, use a [Lua script][Lua Scripts] to remove
     the client-supplied value before the Filter runs. See below for
     an example. Not sanitizing the headers first is a potential
     security vulnerability.

   Any headers listed will override (not append to) the original request header with that name.
 - `errorResponse` allows templating the error response, overriding the default json error format. Make sure you validate and test your template so as not to generate server-side errors on top of client errors.
   * `contentType` is deprecated, and is equivalent to including a
     `name: "Content-Type"` item in `headers`.
   * `realm` allows specifying the realm to report in the `WWW-Authenticate` response header.
   * `headers` sets extra HTTP header fields in the error response. The value is specified as a [Go `text/template`][] string, with the same data made available to it as `bodyTemplate` (below). It does not have access to the `json` function.
   * `bodyTemplate` specifies the body of the error; specified as a [Go `text/template`][] string, with the following data made available to it:

     * `.status_code` → `integer` the HTTP status code to be returned
     * `.httpStatus` → `integer` an alias for `.status_code` (hidden from `{{ . | json "" }}`)
     * `.message` → `string` the error message string
     * `.error` → `error` the raw Go `error` object that generated `.message` (hidden from `{{ . | json "" }}`)
     * `.error.ValidationError` → [`jwt.ValidationError`][] the JWT validation error, will be `nil` if the error is not purely JWT validation (insufficient scope, malformed or missing `Authorization` header)
     * `.request_id` → `string` the Envoy request ID, for correlation (hidden from `{{ . | json "" }}` unless `.status_code` is in the 5XX range)
     * `.requestId` → `string` an alias for `.request_id` (hidden from `{{ . | json "" }}`)

     Also available to the template are the [standard functions
     available to Go `text/template`s][Go `text/template` functions],
     as well as:

     * a `json` function that formats arg2 as JSON, using the arg1
       string as the starting indentation.
For example, the + template `{{ json "indent>" "value" }}` would yield the + string `indent>"value"`. + +`"duration"` strings are parsed as a sequence of decimal numbers, each +with optional fraction and a unit suffix, such as "300ms", "-1.5h" or +"2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", +"h". See [Go `time.ParseDuration`][]. + + + If you are using a templating system for your YAML that also makes use of Go templating, then you will need to escape the template strings meant to be interpreted by Edge Stack. + + +[Go `time.ParseDuration`]: https://golang.org/pkg/time/#ParseDuration +[Go `text/template`]: https://golang.org/pkg/text/template/ +[Go `text/template` functions]: https://golang.org/pkg/text/template/#hdr-Functions +[`http.Header`]: https://golang.org/pkg/net/http/#Header +[`jwt.ValidationError`]: https://godoc.org/github.com/dgrijalva/jwt-go#ValidationError +[Lua Scripts]: ../../../running/ambassador/#lua-scripts +[Sprig `hasKey`]: https://masterminds.github.io/sprig/dicts.html#haskey + +## JWT path-specific arguments + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: FilterPolicy +metadata: + name: "example-filter-policy" + namespace: "example-namespace" +spec: + rules: + - host: "*" + path: "*" + filters: + - name: "example-jwt-filter" + arguments: + scope: # optional; default is [] + - "scope-value-1" + - "scope-value-2" +``` + +`scope` is a list of OAuth scope values that Edge Stack will require to be listed in the [`scope` claim](https://tools.ietf.org/html/draft-ietf-oauth-token-exchange-19#section-4.2). In addition to the normal values of the `scope` claim (a JSON string containing a space-separated list of values), the JWT Filter also accepts a JSON array of values. + +## Example configuration + +```yaml +# Example results are for the JWT: +# +# eyJhbGciOiJub25lIiwidHlwIjoiSldUIiwiZXh0cmEiOiJzbyBtdWNoIn0.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ. +# +# To save you some time decoding that JWT: +# +# header = { +# "alg": "none", +# "typ": "JWT", +# "extra": "so much" +# } +# claims = { +# "sub": "1234567890", +# "name": "John Doe", +# "iat": 1516239022 +# } +--- +apiVersion: getambassador.io/v3alpha1 +kind: Filter +metadata: + name: example-jwt-filter + namespace: example-namespace +spec: + JWT: + jwksURI: "https://getambassador-demo.auth0.com/.well-known/jwks.json" + validAlgorithms: + - "none" + audience: "myapp" + requireAudience: false + injectRequestHeaders: + - name: "X-Fixed-String" + value: "Fixed String" + # result will be "Fixed String" + - name: "X-Token-String" + value: "{{ .token.Raw }}" + # result will be "eyJhbGciOiJub25lIiwidHlwIjoiSldUIiwiZXh0cmEiOiJzbyBtdWNoIn0.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ." + - name: "X-Token-H-Alg" + value: "{{ .token.Header.alg }}" + # result will be "none" + - name: "X-Token-H-Typ" + value: "{{ .token.Header.typ }}" + # result will be "JWT" + - name: "X-Token-H-Extra" + value: "{{ .token.Header.extra }}" + # result will be "so much" + - name: "X-Token-C-Sub" + value: "{{ .token.Claims.sub }}" + # result will be "1234567890" + - name: "X-Token-C-Name" + value: "{{ .token.Claims.name }}" + # result will be "John Doe" + - name: "X-Token-C-Optional-Empty" + value: "{{ .token.Claims.optional }}" + # result will be ""; the header field will be set + # even if the "optional" claim is not set in the JWT. 
+ - name: "X-Token-C-Optional-Unset" + value: "{{ if hasKey .token.Claims \"optional\" | not }}{{ doNotSet }}{{ end }}{{ .token.Claims.optional }}" + # Similar to "X-Token-C-Optional-Empty" above, but if the + # "optional" claim is not set in the JWT, then the header + # field won't be set either. + # + # Note that this does NOT remove/overwrite a client-supplied + # header of the same name. In order to distrust + # client-supplied headers, you MUST use a Lua script to + # remove the field before the Filter runs (see below). + - name: "X-Token-C-Iat" + value: "{{ .token.Claims.iat }}" + # result will be "1.516239022e+09" (don't expect JSON numbers + # to always be formatted the same as input; if you care about + # that, specify the formatting; see the next example) + - name: "X-Token-C-Iat-Decimal" + value: "{{ printf \"%.0f\" .token.Claims.iat }}" + # result will be "1516239022" + - name: "X-Token-S" + value: "{{ .token.Signature }}" + # result will be "" (since "alg: none" was used in this example JWT) + - name: "X-Authorization" + value: "Authenticated {{ .token.Header.typ }}; sub={{ .token.Claims.sub }}; name={{ printf \"%q\" .token.Claims.name }}" + # result will be: "Authenticated JWT; sub=1234567890; name="John Doe"" + - name: "X-UA" + value: "{{ .httpRequestHeader.Get \"User-Agent\" }}" + # result will be: "curl/7.66.0" or + # "Mozilla/5.0 (X11; Linux x86_64; rv:69.0) Gecko/20100101 Firefox/69.0" + # or whatever the requesting HTTP client is + errorResponse: + headers: + - name: "Content-Type" + value: "application/json" + - name: "X-Correlation-ID" + value: "{{ .httpRequestHeader.Get \"X-Correlation-ID\" }}" + # Regarding the "altErrorMessage" below: + # ValidationErrorExpired = 1<<4 = 16 + # https://godoc.org/github.com/dgrijalva/jwt-go#StandardClaims + bodyTemplate: |- + { + "errorMessage": {{ .message | json " " }}, + {{- if .error.ValidationError }} + "altErrorMessage": {{ if eq .error.ValidationError.Errors 16 }}"expired"{{ else }}"invalid"{{ end }}, + "errorCode": {{ .error.ValidationError.Errors | json " "}}, + {{- end }} + "httpStatus": "{{ .status_code }}", + "requestId": {{ .request_id | json " " }} + } +--- +apiVersion: getambassador.io/v3alpha1 +kind: Module +metadata: + name: ambassador +spec: + config: + lua_scripts: | + function envoy_on_request(request_handle) + request_handle:headers():remove("x-token-c-optional-unset") + end +``` diff --git a/docs/edge-stack/latest/topics/using/filters/oauth2.md b/docs/edge-stack/latest/topics/using/filters/oauth2.md new file mode 100644 index 000000000..06d15ad3c --- /dev/null +++ b/docs/edge-stack/latest/topics/using/filters/oauth2.md @@ -0,0 +1,626 @@ +import Alert from '@material-ui/lab/Alert'; + +# The OAuth2 Filter + +The OAuth2 filter type performs OAuth2 authorization against an identity provider implementing [OIDC Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html). The filter is both: + +* An OAuth Client, which fetches resources from the Resource Server on the user's behalf. +* Half of a Resource Server, validating the Access Token before allowing the request through to the upstream service, which implements the other half of the Resource Server. + +This is different from most OAuth implementations where the Authorization Server and the Resource Server are in the same security domain. With Ambassador Edge Stack, the Client and the Resource Server are in the same security domain, and there is an independent Authorization Server. 
## The Ambassador authentication flow

This is what the authentication process looks like at a high level when using Ambassador Edge Stack with an external identity provider. The use case is an end-user accessing a secured app service.

![Ambassador Authentication OAuth/OIDC](../../../images/ambassador_oidc_flow.jpg)

### Some basic authentication terms

For those unfamiliar with authentication, here is a basic set of definitions.

* OpenID: an [open standard](https://openid.net/) and [decentralized authentication protocol](https://en.wikipedia.org/wiki/OpenID). OpenID allows users to be authenticated by co-operating sites, referred to as "relying parties" (RPs), using a third-party authentication service. End-users can create accounts by selecting an OpenID identity provider (such as Auth0, Okta, etc.), and then use those accounts to sign onto any website that accepts OpenID authentication.
* Open Authorization (OAuth): an open standard for [token-based authentication and authorization](https://oauth.net/) on the Internet. OAuth provides clients with "secure delegated access" to server or application resources on behalf of an owner, which means that although you won't manage a user's authentication credentials, you can specify what they can access within your application once they have been successfully authenticated. The latest version of this standard is OAuth 2.0.
* Identity Provider (IdP): an entity that [creates, maintains, and manages identity information](https://en.wikipedia.org/wiki/Identity_provider) for user accounts (also referred to as "principals") while providing authentication services to external applications (referred to as "relying parties") within a distributed network, such as the web.
* OpenID Connect (OIDC): an [authentication layer that is built on top of OAuth 2.0](https://openid.net/connect/), which allows applications to verify the identity of an end-user based on the authentication performed by an IdP, using a well-specified RESTful HTTP API with JSON as a data format. Typically an OIDC implementation will allow you to obtain basic profile information for a user that successfully authenticates, which in turn can be used for implementing additional security measures like Role-based Access Control (RBAC).
* JSON Web Token (JWT): a [JSON-based open standard for creating access tokens](https://jwt.io/), such as those generated from an OAuth authentication. JWTs are compact, web-safe (or URL-safe), and are often used in the context of implementing single sign-on (SSO) within federated applications and organizations. Additional profile information, claims, or role-based information can be added to a JWT, and the token can be passed from the edge of an application right through the application's service call stack.

If you look back at the authentication process diagram, the function of the entities involved should now be much clearer.

### Using an identity hub

Using an identity hub or broker allows you to support many IdPs without having to code individual integrations with them. For example, [Auth0](https://auth0.com/docs/identityproviders) and [Keycloak](https://www.keycloak.org/docs/latest/server_admin/index.html#social-identity-providers) both offer support for using Google and GitHub as an IdP.
An identity hub sits between your application and the IdP that authenticates your users. This not only adds a level of abstraction so that your application (and Ambassador Edge Stack) is isolated from any changes to each provider's implementation, but also allows your users to choose which provider they use to authenticate (and you can set a default, or restrict these options).

The Auth0 docs provide a guide for adding social IdP "[connections](https://auth0.com/docs/identityproviders)" to your Auth0 account, and the Keycloak docs provide a guide for adding social identity "[brokers](https://www.keycloak.org/docs/latest/server_admin/index.html#social-identity-providers)".


## OAuth2 global arguments

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "example-oauth2-filter"
  namespace: "example-namespace"
spec:
  OAuth2:
    authorizationURL: "url" # required

    ############################################################################
    # OAuth Client settings                                                    #
    ############################################################################

    expirationSafetyMargin: "duration" # optional; default is "0"

    # Which settings exist depends on the grantType; supported grantTypes
    # are "AuthorizationCode", "Password", and "ClientCredentials".
    grantType: "enum" # optional; default is "AuthorizationCode"

    # How should Ambassador authenticate itself to the identity provider?
    clientAuthentication: # optional
      method: "enum" # optional; default is "HeaderPassword"
      jwtAssertion: # optional if method=="JWTAssertion"; forbidden otherwise
        setClientID: bool # optional; default is false
        # the following members of jwtAssertion only apply when the
        # grantType is NOT "ClientCredentials".
        audience: "string" # optional; default is to use the token endpoint from the authorization URL
        signingMethod: "enum" # optional; default is "RS256"
        lifetime: "duration" # optional; default is "1m"
        setNBF: bool # optional; default is false
        nbfSafetyMargin: "duration" # optional; default is 0s
        setIAT: bool # optional; default is false
        otherClaims: # optional; default is {}
          "string": anything
        otherHeaderParameters: # optional; default is {}
          "string": anything

    ## OAuth Client settings: grantType=="AuthorizationCode" ###################
    clientURL: "string" # deprecated; use 'protectedOrigins' instead
    protectedOrigins: # required; must have at least 1 item
    - origin: "url" # required
      internalOrigin: "url" # optional; default is to just use the 'origin' field
      includeSubdomains: bool # optional; default is false
      useSessionCookies: # optional; default is { value: false }
        value: bool # optional; default is true
        ifRequestHeader: # optional; default is to apply "useSessionCookies.value" to all requests
          name: "string" # required
          negate: bool # optional; default is false
          # It is invalid to specify both "value" and "valueRegex".
+ value: "string" # optional; default is any non-empty string + valueRegex: "regex" # optional; default is any non-empty string + clientSessionMaxIdle: "duration" # optional; default is to use the access token lifetime or 14 days if a refresh token is present + extraAuthorizationParameters: # optional; default is {} + "string": "string" + postLogoutRedirectURI: "url" # optional; default is empty string + + ## OAuth Client settings: grantType=="AuthorizationCode" or "Password" ##### + clientID: "string" # required + # The client secret can be specified by including the raw secret as a + # string in "secret", or by referencing Kubernetes secret with + # "secretName" (and "secretNamespace"). It is invalid to specify both + # "secret" and "secretName". + secret: "string" # required (unless secretName is set) + secretName: "string" # required (unless secret is set) + secretNamespace: "string" # optional; default is the same namespace as the Filter + + ## OAuth Client settings (grantType=="ClientCredentials") ################## + # + # (there are no additional client settings for + # grantType=="ClientCredentials") + + ############################################################################ + # OAuth Resource Server settings # + ############################################################################ + + allowMalformedAccessToken: bool # optional; default is false + accessTokenValidation: "enum" # optional; default is "auto" + accessTokenJWTFilter: # optional; default is null + name: "string" # required + namespace: "string" # optional; default is the same namespace as the Filter + inheritScopeArgument: bool # optional; default is false + stripInheritedScope: bool # optional; default is false + arguments: JWT-Filter-Arguments # optional + injectRequestHeaders: # optional; default is [] + - name: "header-name-string" # required + value: "go-template-string" # required + + ############################################################################ + # HTTP client settings for talking with the identity provider # + ############################################################################ + + insecureTLS: bool # optional; default is false + renegotiateTLS: "enum" # optional; default is "never" + maxStale: "duration" # optional; default is "0" +``` + +### General settings + + - `authorizationURL`: Identifies where to look for the `/.well-known/openid-configuration` descriptor to figure out how to talk to the OAuth2 provider + +### OAuth client settings + +These settings configure the OAuth Client part of the filter. + + - `grantType`: Which type of OAuth 2.0 authorization grant to request from the identity provider. Currently supported are: + * `"AuthorizationCode"`: Authenticate by redirecting to a login page served by the identity provider. + + * `"ClientCredentials"`: Authenticate by requiring that the + incoming HTTP request include as headers the credentials for + Ambassador to use to authenticate to the identity provider. + + The type of credentials needing to be submitted depends on the + `clientAuthentication.method` (below): + + For `"HeaderPassword"` and `"BodyPassword"`, the headers + `X-Ambassador-Client-ID` and `X-Ambassador-Client-Secret` must + be set. + + For `"JWTAssertion"`, the `X-Ambassador-Client-Assertion` + header must be set to a JWT that is signed by your client + secret, and conforms with the requirements in RFC 7521 section + 5.2 and RFC 7523 section 3, as well as any additional specified + by your identity provider. 
   * `"Password"`: Authenticate by requiring `X-Ambassador-Username` and `X-Ambassador-Password` on all
     incoming requests, and use them to authenticate with the identity provider using the OAuth2
     `Resource Owner Password Credentials` grant type.

 - `expirationSafetyMargin`: Check that access tokens will not expire for
   at least this much longer; otherwise, consider them to be already
   expired. This provides a safety margin of time for your
   application to send the token to an upstream Resource Server that grants
   insufficient leeway to account for clock skew and
   network/application latency.

 - `clientAuthentication`: Configures how Ambassador uses the
   `clientID` and `secret` to authenticate itself to the identity
   provider:
   * `method`: Which method Ambassador should use to authenticate
     itself to the identity provider. Currently supported are:
     + `"HeaderPassword"`: Treat the client secret (below) as a
       password, and pack that into an HTTP header for HTTP Basic
       authentication.
     + `"BodyPassword"`: Treat the client secret (below) as a
       password, and put that in the HTTP request bodies submitted to
       the identity provider. This is NOT RECOMMENDED by RFC 6749,
       and should only be used when using HeaderPassword isn't
       possible.
     + `"JWTAssertion"`: Construct a JWT, sign it using the client
       secret (as interpreted by the `jwtAssertion.signingMethod`
       setting below), and submit that signed JWT to the identity
       provider as a client assertion, per RFC 7521 and RFC 7523.
       (When `grantType` is `"ClientCredentials"`, the signed JWT is
       instead supplied by the client in the
       `X-Ambassador-Client-Assertion` header, as described above.)
   * `jwtAssertion`: Settings to use when `method: "JWTAssertion"`.
     + `setClientID`: Whether to set the Client ID as an HTTP
       parameter; setting it as an HTTP parameter is optional (per RFC
       7521 §4.2) because the Client ID is also contained in the JWT
       itself, but some identity providers document that they require
       it to also be set as an HTTP parameter anyway.
     + `audience` (only when `grantType` is not
       `"ClientCredentials"`): The audience value that your identity
       provider requires.
     + `signingMethod` (only when `grantType` is not
       `"ClientCredentials"`): The method to use to sign the JWT; how
       to interpret the `secret` (below). Supported values are:
       - RSA: `"RS256"`, `"RS384"`, `"RS512"`: The secret must be a
         PEM-encoded RSA private key.
       - RSA-PSS: `"PS256"`, `"PS384"`, `"PS512"`: The secret must be
         a PEM-encoded RSA private key.
       - ECDSA: `"ES256"`, `"ES384"`, `"ES512"`: The secret must be a
         PEM-encoded Elliptic Curve private key.
       - HMAC-SHA: `"HS256"`, `"HS384"`, `"HS512"`: The secret is a
         raw string of bytes; it can contain anything.
     + `lifetime` (only when `grantType` is not
       `"ClientCredentials"`): The lifetime of the generated JWT; just
       enough time for the request to the identity provider to
       complete (plus possibly an extra allowance for clock skew).
     + `setNBF` (only when `grantType` is not `"ClientCredentials"`):
       Whether to set the optional "nbf" ("Not Before") claim in the
       generated JWT.
     + `nbfSafetyMargin` (only when `setNBF` is true): The safety margin to
       build in to the "nbf" claim, to allow for clock skew between
       Ambassador and the identity provider.
     + `setIAT` (only when `grantType` is not `"ClientCredentials"`):
       Whether to set the optional "iat" ("Issued At") claim in the
       generated JWT.
     + `otherClaims` (only when `grantType` is not
       `"ClientCredentials"`): Any extra non-standard claims to
       include in the generated JWT.
     + `otherHeaderParameters` (only when `grantType` is not
       `"ClientCredentials"`): Any extra JWT header parameters to
       include in the generated JWT; only the "typ" and "alg" header
       parameters are set by default.

Depending on which `grantType` is used, different settings exist.

Settings that are only valid when `grantType: "AuthorizationCode"` or `grantType: "Password"`:

 - `clientID`: The Client ID you get from your identity provider.
 - The client secret you get from your identity provider can be specified in 2 different ways:
   * As a string, in the `secret` field.
   * As a Kubernetes `generic` Secret, named by `secretName`/`secretNamespace`. The Kubernetes secret must be of
     the `generic` type, with the value stored under the key `oauth2-client-secret`. If `secretNamespace` is not given, it defaults to the namespace of the Filter resource.
   * **Note**: It is invalid to set both `secret` and `secretName`.

Settings that are only valid when `grantType: "AuthorizationCode"`:

 - `protectedOrigins`: (You determine these, and must register them
   with your identity provider.) Identifies hostnames that can
   appropriately set cookies for the application. Only the scheme
   (`https://`) and authority (`example.com:1234`) parts are used; the
   path part of the URL is ignored.

   You will need to register each origin in `protectedOrigins` as an
   authorized callback endpoint with your identity provider. The URL
   will look like
   `{{ORIGIN}}/.ambassador/oauth2/redirection-endpoint`.

   If you provide more than one `protectedOrigin`, all share the same
   authentication system, so that logging into one origin logs you
   into all origins; to have multiple domains that have separate
   logins, use separate `Filter`s.

   + `internalOrigin`: This sub-field of `protectedOrigins[i]` allows
     you to tell Ambassador that there is another gateway in front of
     Ambassador that rewrites the `Host` header, so that on the
     internal network between that gateway and Ambassador, the origin
     appears to be `internalOrigin` instead of `origin`. As a
     special case, the scheme and/or authority of the `internalOrigin`
     may be `*`, which matches any scheme or any domain respectively.
     The `*` is most useful in configurations with exactly one
     protected origin; in such a configuration, Ambassador doesn't
     need to know what the origin looks like on the internal network,
     just that a gateway in front of Ambassador is rewriting it. It
     is invalid to use `*` with `includeSubdomains: true`.

     For example, if you have a gateway in front of Ambassador
     handling traffic for `myservice.example.com`, terminating TLS
     and routing that traffic to Ambassador with the name
     `ambassador.internal`, you might write:

     ```yaml
     protectedOrigins:
     - origin: https://myservice.example.com
       internalOrigin: http://ambassador.internal
     ```

     or, to avoid being fragile to renaming `ambassador.internal` to
     something else, since there are not multiple origins that the
     Filter must distinguish between, you could instead write:

     ```yaml
     protectedOrigins:
     - origin: https://myservice.example.com
       internalOrigin: "*://*"
     ```

 - `clientURL` is deprecated, and is equivalent to setting

   ```yaml
   protectedOrigins:
   - origin: clientURL-value
     internalOrigin: "*://*"
   ```

 - `postLogoutRedirectURI`: Set this field to a valid URL to have $productName$ redirect there upon a successful logout.
   You must register the following endpoint with your IDP as the Post Logout Redirect: `{{ORIGIN}}/.ambassador/oauth2/post-logout-redirect`. This instructs your IDP to redirect back to $productName$ once it has cleared its session data. Once the IDP has redirected back to $productName$, $productName$ clears its local session information before redirecting to the destination specified by the `postLogoutRedirectURI` value.
   * If the Post Logout Redirect is configured in your IDP as `{{ORIGIN}}/.ambassador/oauth2/post-logout-redirect`, then, after a successful logout, a redirect is issued to the URL configured in `postLogoutRedirectURI`.
   * If `{{ORIGIN}}/.ambassador/oauth2/post-logout-redirect` is configured as the Post Logout Redirect in your IDP, but `postLogoutRedirectURI` is not configured in $productName$, then your IDP will error out, as it will be expecting specific instructions for the post-logout behavior.

   Refer to your IDP's documentation to verify whether it supports Post Logout Redirects.
   For more information on `post_logout_redirect_uri` functionality, refer to the [OpenID Connect RP-Initiated Logout 1.0 specs](https://openid.net/specs/openid-connect-rpinitiated-1_0.html).

 - `extraAuthorizationParameters`: Extra (non-standard or extension) OAuth authorization parameters to use. It is not valid to specify a parameter used by OAuth itself ("response_type", "client_id", "redirect_uri", "scope", or "state").

 - `clientSessionMaxIdle`: Controls how long the session held by Ambassador Edge Stack's OAuth client will last until it is automatically expired.
   * Ambassador Edge Stack creates a new session when submitting requests to the upstream backend server, and sets a cookie containing the sessionID. When a user makes a request to a backend service protected by the OAuth2 Filter, the OAuth Client in Ambassador Edge Stack will use the sessionID contained in the cookie to fetch the access token (and optional refresh token) for the current session so that it can be used when submitting a request to the upstream backend service. This session has a limited lifetime before it expires or is extended, prompting the user to log back in.
   * Setting a `clientSessionMaxIdle` duration is useful when your IdP is configured to return a refresh token along with an access token from your IdP's authorization server. `clientSessionMaxIdle` can be set to match the Ambassador Edge Stack OAuth client's session lifetime to the lifetime of the refresh token configured within the IdP.
   * If this is not set, then the OAuth client's session lifetime is tied to the lifetime of the access token received from the IdP's authorization server when no refresh token is provided. If there is a refresh token, then by default the session lifetime is set to 14 days.

 - By default, any cookies set by the Ambassador Edge Stack will be
   set to expire when the session expires naturally. The
   `useSessionCookies` setting may be used to cause session cookies to
   be used instead.

   * Normally cookies are set to be deleted at a specific time;
     session cookies are deleted whenever the user closes their web
     browser. This may mean that the cookies are deleted sooner than
     normal if the user closes their web browser; conversely, it may
     mean that cookies persist for longer than normal if the user does
     not close their browser.
   * The cookies being deleted sooner may or may not affect
     user-perceived behavior, depending on the behavior of the
     identity provider.
   * Any cookies persisting longer will not affect behavior of the
     system; Ambassador Edge Stack validates whether the session
     is expired when considering the cookie.

   If `useSessionCookies` is non-`null`, then:

   * By default it will have the cookies for all requests be
     session cookies or not according to the
     `useSessionCookies.value` sub-argument.

   * Setting the `useSessionCookies.ifRequestHeader` sub-argument
     tells it to use `useSessionCookies.value` for requests that
     match the condition, and `!useSessionCookies.value` for
     requests that don't match.

     When determining if a request matches, it looks at the HTTP
     header field named by `useSessionCookies.ifRequestHeader.name`
     (case-insensitive), and checks if it is either set to (if
     `useSessionCookies.ifRequestHeader.negate: false`) or not set
     to (if `useSessionCookies.ifRequestHeader.negate: true`)...

     + a non-empty string (if neither
       `useSessionCookies.ifRequestHeader.value` nor
       `useSessionCookies.ifRequestHeader.valueRegex` are set)
     + the exact string `value` (case-sensitive) (if
       `useSessionCookies.ifRequestHeader.value` is set)
     + a string that matches the regular expression
       `useSessionCookies.ifRequestHeader.valueRegex` (if
       `valueRegex` is set). This uses [RE2][] syntax (always, not
       obeying [`regex_type` in the `ambassador Module`][]) but does
       not support the `\C` escape sequence.

     (It is invalid to have both `value` and `valueRegex` set.)

### OAuth resource server settings

 - `allowMalformedAccessToken`: Allow any access token, even if it is not RFC 6750-compliant.
 - `injectRequestHeaders` injects HTTP header fields into the request before sending it to the upstream service, where the header value can be computed from the JWT.
   If an OAuth2 filter is chained with a JWT filter with `injectRequestHeaders` configured, both sets of headers will be injected.
   If the same header is injected in both filters, the OAuth2 filter will populate the value.
   The value is specified as a [Go `text/template`][] string, with the following data made available to it:

   * `.token.Raw` → `string` the raw access token JWT
   * `.token.Header` → `map[string]interface{}` the access token JWT header, as parsed JSON
   * `.token.Claims` → `map[string]interface{}` the access token JWT claims, as parsed JSON
   * `.token.Signature` → `string` the access token signature
   * `.idToken.Raw` → `string` the raw id token JWT
   * `.idToken.Header` → `map[string]interface{}` the id token JWT header, as parsed JSON
   * `.idToken.Claims` → `map[string]interface{}` the id token JWT claims, as parsed JSON
   * `.idToken.Signature` → `string` the id token signature
   * `.httpRequestHeader` → [`http.Header`][] a copy of the header of the incoming HTTP request. Any changes to `.httpRequestHeader` (such as by using `.httpRequestHeader.Set`) have no effect. It is recommended to use `.httpRequestHeader.Get` instead of treating it as a map, in order to handle capitalization correctly.

 - `accessTokenValidation`: How to verify the liveness and scope of Access Tokens issued by the identity provider. Valid values are either `"auto"`, `"jwt"`, or `"userinfo"`. Empty or unset is equivalent to `"auto"`.
   * `"jwt"`: Validates the Access Token as a JWT.

     By default: It accepts the RS256, RS384, or RS512 signature
     algorithms, and validates the signature against the JWKS from
     OIDC Discovery.
     It then validates the `exp`, `iat`, `nbf`,
     `iss` (with the Issuer from OIDC Discovery), and `scope`
     claims; if present, none of the scope values are required to be
     present. This relies on the identity provider using
     non-encrypted signed JWTs as Access Tokens, and configuring the
     signing appropriately.

     This behavior can be modified by delegating to a [`JWT`
     Filter](../jwt/) with `accessTokenJWTFilter`:
     - `name` and `namespace` are used to identify which JWT Filter
       to use. It is an error to point at a Filter that is not a
       JWT Filter.
     - `arguments` is the same as the `arguments` field when
       referring to a JWT Filter from a FilterPolicy.
     - `inheritScopeArgument` sets whether to inherit the `scope`
       argument from the FilterPolicy rule that triggered the OAuth2
       Filter (similarly special-casing the `offline_access` scope
       value); if the `arguments` field also specifies a `scope`
       argument, then the union of the two is used.
     - `stripInheritedScope` modifies the behavior of
       `inheritScopeArgument`. Some identity providers use scope
       values that are URIs when speaking OAuth, but when encoding
       those scope values into a JWT the provider strips the
       leading path of the value, removing everything up to and
       including the last "/" in the value. Setting
       `stripInheritedScope` mimics this when passing the required
       scope to the JWT Filter. It is meaningless to set
       `stripInheritedScope` if `inheritScopeArgument` is not set.
   * `"userinfo"`: Validates the access token by polling the OIDC UserInfo Endpoint. This means that the Ambassador Edge Stack must initiate an HTTP request to the identity provider for each authorized request to a protected resource. This performs poorly, but functions properly with a wider range of identity providers. It is not valid to set `accessTokenJWTFilter` if `accessTokenValidation: userinfo`.
   * `"auto"` attempts to do `"jwt"` validation if any of these
     conditions are true:

     + `accessTokenJWTFilter` is set, or
     + `grantType` is `"ClientCredentials"`, or
     + the Access Token parses as a JWT and the signature is valid,

     and otherwise falls back to `"userinfo"` validation.

[RE2]: https://github.com/google/re2/wiki/Syntax
[`regex_type` in the `ambassador Module`]: ../../../running/ambassador/

### HTTP client

These HTTP client settings are used for talking to the identity
provider:

 - `maxStale`: How long to keep stale cached OIDC replies. This sets the `max-stale` Cache-Control directive on requests, and also **ignores the `no-store` and `no-cache` Cache-Control directives on responses**. This is useful for maintaining good performance when working with identity providers with misconfigured Cache-Control. Setting this to 0 falls back to the identity provider's cache settings as specified by the Cache-Control directives on its responses, which may mean no caching at all if the identity provider sets the `no-cache` and `no-store` directives. Note that if you are reusing the same `authorizationURL` and `jwksURI` across different OAuth and JWT filters respectively, then you MUST set `maxStale` to a consistent value on each filter to get predictable caching behavior. The default is 0.
 - `insecureTLS` disables TLS verification when speaking to an identity provider with an `https://` `authorizationURL`. This is discouraged in favor of either using plain `http://` or [installing a self-signed certificate](../#installing-self-signed-certificates).
 - `renegotiateTLS` allows a remote server to request TLS renegotiation. Accepted values are "never", "onceAsClient", and "freelyAsClient".

`"duration"` strings are parsed as a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". See [Go `time.ParseDuration`](https://golang.org/pkg/time/#ParseDuration).

## OAuth2 path-specific arguments

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "example-filter-policy"
  namespace: "example-namespace"
spec:
  rules:
  - host: "*"
    path: "*"
    filters:
    - name: "example-oauth2-filter"
      arguments:
        scope: # optional; default is ["openid"] for `grantType=="AuthorizationCode"`; [] for `grantType=="ClientCredentials"` and `grantType=="Password"`
        - "scopevalue1"
        - "scopevalue2"
        scopes: # deprecated; use 'scope' instead
        insteadOfRedirect: # optional for "AuthorizationCode"; default is to do a redirect to the identity provider
          ifRequestHeader: # optional; default is to return httpStatusCode for all requests that would redirect-to-identity-provider
            name: "string" # required
            negate: bool # optional; default is false
            # It is invalid to specify both "value" and "valueRegex".
            value: "string" # optional; default is any non-empty string
            valueRegex: "regex" # optional; default is any non-empty string
          # option 1:
          httpStatusCode: integer # optional; default is 403 (unless `filters` is set)
          # option 2 (deprecated; will be removed in a future version):
          filters: # optional; default is to use `httpStatusCode` instead
          - name: "string" # required
            namespace: "string" # optional; default is the same namespace as the FilterPolicy
            ifRequestHeader: # optional; default is to apply this filter to all requests matching the host & path
              name: "string" # required
              negate: bool # optional; default is false
              # It is invalid to specify both "value" and "valueRegex".
              value: "string" # optional; default is any non-empty string
              valueRegex: "regex" # optional; default is any non-empty string
            onDeny: "enum" # optional; default is "break"
            onAllow: "enum" # optional; default is "continue"
            arguments: DEPENDS # optional
        sameSite: "enum" # optional; the SameSite attribute to set on cookies created by this filter. valid values include: "lax", "strict", "none". by default, no SameSite attribute is set, which typically allows the browser to decide the value.
```

 - `scope`: A list of OAuth scope values to include in the scope of the authorization request. If one of the scope values for a path is not granted, then access to that resource is forbidden; if the `scope` argument lists `foo`, but the authorization response from the provider does not include `foo` in the scope, then it will be taken to mean that the authorization server forbade access to this path, as the authenticated user does not have the `foo` resource scope.

   If `grantType: "AuthorizationCode"`, then the `openid` scope value is always included in the requested scope, even if it is not listed in the `FilterPolicy` argument.

   If `grantType: "ClientCredentials"` or `grantType: "Password"`, then the default scope is empty. If your identity provider does not have a default scope, then you will need to configure one here.

   As a special case, if the `offline_access` scope value is requested but not included in the response, then access is not forbidden.
   With many identity providers, requesting the `offline_access` scope is necessary to receive a Refresh Token.

   The ordering of scope values does not matter, and is ignored.

 - `scopes` is deprecated, and is equivalent to setting `scope`.

 - `insteadOfRedirect`: An action to perform instead of redirecting
   the User-Agent to the identity provider, when using `grantType: "AuthorizationCode"`.
   By default, if the User-Agent does not have a currently-authenticated session,
   then the Ambassador Edge Stack will redirect the User-Agent to the
   identity provider. Setting `insteadOfRedirect` allows you to modify
   this behavior. `insteadOfRedirect` does nothing when `grantType:
   "ClientCredentials"`, because the Ambassador Edge Stack will never
   redirect the User-Agent to the identity provider for the client
   credentials grant type.
   * If `insteadOfRedirect` is non-`null`, then by default it will
     apply to all requests that would cause the redirect; setting the
     `ifRequestHeader` sub-argument causes it to only apply to
     requests that have the HTTP header field
     `name` (case-insensitive) either set to (if `negate: false`) or
     not set to (if `negate: true`)
     + a non-empty string if neither `value` nor `valueRegex` are set
     + the exact string `value` (case-sensitive) (if `value` is set)
     + a string that matches the regular expression `valueRegex` (if
       `valueRegex` is set). This uses [RE2][] syntax (always, not
       obeying [`regex_type` in the `ambassador Module`][]) but does
       not support the `\C` escape sequence.
   * By default, it serves an authorization-denied error page; by default HTTP 403 ("Forbidden"), but this can be configured by the `httpStatusCode` sub-argument.
   * __DEPRECATED__ Instead of serving that simple error page, it can instead be configured to call out to a list of other Filters, by setting the `filters` list. The syntax and semantics of this list are the same as `.spec.rules[].filters` in a [`FilterPolicy`](../#filterpolicy-definition). Be aware that if one of these filters modifies the request rather than returning a response, then the request will be allowed through to the backend service, even though the `OAuth2` Filter denied it.
   * It is invalid to specify both `httpStatusCode` and `filters`.

## XSRF protection

The `ambassador_xsrf.NAME.NAMESPACE` cookie is an opaque string that should be used as an XSRF token. Applications wishing to leverage the Ambassador Edge Stack in their XSRF attack protection should take two extra steps:

 1. When generating an HTML form, the server should read the cookie, and include a `<input type="hidden" name="_xsrf" value="COOKIE_VALUE">` element in the form, carrying the cookie's value.
 2. When handling submitted form data, the server should verify that the form value and the cookie value match. If they do not match, it should refuse to handle the request, and return an HTTP 4XX response.

Applications using request submission formats other than HTML forms should perform analogous steps of ensuring that the value is present in the request, duplicated in the cookie and also in either the request body or a secure header field. A secure header field is one that is not `Cookie`, is not "[simple](https://www.w3.org/TR/cors/#simple-header)", and is not explicitly allowed by the CORS policy.

Prior versions of the Ambassador Edge Stack did not have an ambassador_xsrf.NAME.NAMESPACE cookie, and instead required you to use the ambassador_session.NAME.NAMESPACE cookie. The ambassador_session.NAME.NAMESPACE cookie should no longer be used for XSRF-protection purposes.
## RP-initiated logout

When a logout occurs, it is often not enough to delete the Ambassador
Edge Stack's session cookie or session data; after this happens, and the web
browser is redirected to the Identity Provider to re-log-in, the
Identity Provider may remember the previous login, and immediately
re-authorize the user; it would be like the logout never even
happened.

To solve this, the Ambassador Edge Stack can use [OpenID Connect Session
Management](https://openid.net/specs/openid-connect-session-1_0.html)
to perform an "RP-Initiated Logout", where Edge Stack
(the OpenID Connect "Relying Party" or "RP")
communicates directly with Identity Providers that support OpenID
Connect Session Management, to properly log out the user.
Unfortunately, many Identity Providers do not support OpenID
Connect Session Management.

This is done by having your application direct the web browser to `POST`
*and navigate* to `/.ambassador/oauth2/logout`. There are 2
form-encoded values that you need to include:

 1. `realm`: The `name.namespace` of the `Filter` that you want to log
    out of. This may be submitted as part of the POST body, or may be set as a URL query parameter.
 2. `_xsrf`: The value of the `ambassador_xsrf.{{realm}}` cookie
    (where `{{realm}}` is as described above). This must be set in the POST body; the URL query part will not be checked.

### Example configurations

```html
<form method="POST" action="/.ambassador/oauth2/logout?realm=example-oauth2-filter.example-namespace">
  <input type="hidden" name="_xsrf" value="{{ value of the ambassador_xsrf.example-oauth2-filter.example-namespace cookie }}" />
  <input type="submit" value="Log out" />
</form>
```

Or, submitting the `realm` in the POST body instead of the URL query:

```html
<form method="POST" action="/.ambassador/oauth2/logout">
  <input type="hidden" name="realm" value="example-oauth2-filter.example-namespace" />
  <input type="hidden" name="_xsrf" value="{{ value of the ambassador_xsrf.example-oauth2-filter.example-namespace cookie }}" />
  <input type="submit" value="Log out" />
</form>
```

Using JavaScript:

```js
function getCookie(name) {
    var prefix = name + "=";
    var cookies = document.cookie.split(';');
    for (var i = 0; i < cookies.length; i++) {
        var cookie = cookies[i].trimStart();
        if (cookie.indexOf(prefix) == 0) {
            return cookie.slice(prefix.length);
        }
    }
    return "";
}

function logout(realm) {
    var form = document.createElement('form');
    form.method = 'post';
    form.action = '/.ambassador/oauth2/logout?realm='+realm;
    //form.target = '_blank'; // uncomment to open the identity provider's page in a new tab

    var xsrfInput = document.createElement('input');
    xsrfInput.type = 'hidden';
    xsrfInput.name = '_xsrf';
    xsrfInput.value = getCookie("ambassador_xsrf."+realm);
    form.appendChild(xsrfInput);

    document.body.appendChild(form);
    form.submit();
}
```

## Redis

The Ambassador Edge Stack relies on Redis to store short-lived authentication credentials and rate limiting information. If the Redis data store is lost, users will need to log back in, and all existing rate limits will be reset.

## Further reading

In this architecture, Ambassador Edge Stack is functioning as an Identity Aware Proxy in a Zero Trust Network. For more about this security architecture, read the [BeyondCorp security architecture whitepaper](https://ai.google/research/pubs/pub43231) by Google.

The ["How-to" section](../../../../howtos/) has detailed tutorials on integrating Ambassador with a number of Identity Providers.
diff --git a/docs/edge-stack/latest/topics/using/filters/plugin.md b/docs/edge-stack/latest/topics/using/filters/plugin.md
new file mode 100644
index 000000000..4f6cbeb91
--- /dev/null
+++ b/docs/edge-stack/latest/topics/using/filters/plugin.md
@@ -0,0 +1,46 @@
# Plugin Filter

The Plugin filter type allows you to plug in your own custom code. This code is compiled to a `.so` file, which you load into the Edge Stack container at `/etc/ambassador-plugins/${NAME}.so`. The [Filter Development Guide](../../../../howtos/filter-dev-guide) contains a tutorial on developing filters.

## The Plugin interface

This code is written in the Go programming language and must be compiled with the exact same compiler settings as Edge Stack (any overlapping libraries used must have their versions match exactly). This information is documented in the `/ambassador/aes-abi.txt` file in the AES docker image.

Plugins are compiled with `go build -buildmode=plugin -trimpath`, and must have a `main.PluginMain` function with the signature `PluginMain(w http.ResponseWriter, r *http.Request)`:

```go
package main

import (
	"net/http"
)

func PluginMain(w http.ResponseWriter, r *http.Request) { … }
```

`*http.Request` is the incoming HTTP request, which can be mutated or intercepted; interception is done via the `http.ResponseWriter`.

Headers can be mutated by calling `w.Header().Set(HEADERNAME, VALUE)`.
Finalize changes by calling `w.WriteHeader(http.StatusOK)`.

If you call `w.WriteHeader()` with any status other than 200 (`http.StatusOK`), the plugin takes over the request instead of modifying it, and the request will not be sent to your backend service. You can call `w.Write()` to write the body of an error page.
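Once compiled and loaded into the container, the plugin is exposed as a `Filter` resource (described in the next section) and attached to routes with a `FilterPolicy`. As a minimal sketch — the filter and policy names here are illustrative placeholders:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "example-plugin-policy"
  namespace: "example-namespace"
spec:
  rules:
  - host: "*"
    path: "*"
    filters:
    - name: "example-plugin-filter"  # a Plugin Filter, defined as in the next section
```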
## Plugin global arguments

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Filter
metadata:
  name: "example-plugin-filter"
  namespace: "example-namespace"
spec:
  Plugin:
    name: "string" # required; this tells it where to look for the compiled plugin file: "/etc/ambassador-plugins/${NAME}.so"
```

## Plugin path-specific arguments

Path-specific arguments are not supported for Plugin filters at this time.
diff --git a/docs/edge-stack/latest/topics/using/licenses.md b/docs/edge-stack/latest/topics/using/licenses.md
new file mode 100644
index 000000000..648f4deaf
--- /dev/null
+++ b/docs/edge-stack/latest/topics/using/licenses.md
@@ -0,0 +1,54 @@
# $productName$ Licenses

$productName$ requires a valid Enterprise license or Community license to start up. The Community license allows you to use $productName$ for free with certain restrictions; the Enterprise license lifts those restrictions and enables premium features.

For more details on the different licenses, please visit the [editions page](/editions).

## Enterprise License
To obtain an Enterprise license, you can [reach out to our sales team][] for more information.

If you have any questions regarding your Enterprise license, or require an air-gapped license, please reach out to [support][].

## Applying a License
The process for applying a license is the same, regardless of which plan you choose:

* Enterprise License: If you have already purchased an Enterprise plan, you can follow the steps below to connect your clusters to Ambassador Cloud. Your Enterprise license will automatically apply to all clusters that you connect. If you believe you have an Enterprise license, but this is not reflected in Ambassador Cloud after connecting your clusters, please reach out to [support][].

* Community License: If you wish to utilize a free Community license for your Edge Stack clusters, you can follow the steps below to connect your clusters to Ambassador Cloud, and the Community license will be automatically applied.


1. Install the cloud connect token

   You can follow the instructions in [the quickstart guide][] to get signed into [Ambassador Cloud][] and obtain a cloud connect token for your installation of $productName$ if you don't already have one.
   This will let $productName$ request and renew your license from Ambassador Cloud.

   The Cloud Connect Token is a `ConfigMap` that you will install in your Kubernetes cluster, and looks like this:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: edge-stack-agent-cloud-token
     namespace: ambassador
   data:
     CLOUD_CONNECT_TOKEN: 
   ```

2. Install the Cloud Connect Token

   If you are using Helm, you can apply the token as part of your Helm installation.

   ```bash
   helm install edge-stack --namespace ambassador datawire/edge-stack --set emissary-ingress.createDefaultListeners=true --set emissary-ingress.agent.cloudConnectToken=
   ```

   If you do not want to use Helm, then you can apply the Cloud Connect Token with raw yaml instead.
+ + ```bash + kubectl create configmap --namespace ambassador edge-stack-agent-cloud-token --from-literal=CLOUD_CONNECT_TOKEN= + ``` + +[reach out to our sales team]: /contact-us/ +[the quickstart guide]: ../../../tutorials/getting-started +[Ambassador Cloud]: https://app.getambassador.io/cloud/ +[support]: https://support.datawire.io diff --git a/docs/edge-stack/latest/topics/using/rate-limits/index.md b/docs/edge-stack/latest/topics/using/rate-limits/index.md new file mode 100644 index 000000000..b6e90faa7 --- /dev/null +++ b/docs/edge-stack/latest/topics/using/rate-limits/index.md @@ -0,0 +1,193 @@ +import Alert from '@material-ui/lab/Alert'; + +# Basic rate limiting + +Rate limiting in $productName$ is composed of two parts: + +* The [`RateLimitService`] resource tells $productName$ what external service + to use for rate limiting. + + If $productName$ cannot contact the rate limit service, it will allow the request to be processed as if there were no rate limit service configuration. + +* _Labels_ that get attached to requests. A label is basic metadata that + is used by the `RateLimitService` to decide which limits to apply to + the request. + + + These labels require Mapping resources with apiVersion + getambassador.io/v2 or newer — if you're updating an old installation, check the + apiVersion! + + +Labels are grouped according to _domain_ and _group_: + +```yaml +labels: + "domain1": + - "group1": + - "my_label_specifier_1" + - "my_label_specifier_2" + - "group2": + - "my_label_specifier_3" + - "my_label_specifier_4" + "domain2": + - ... +``` + +## Attaching labels to requests + +There are two ways of setting labels on a request: + +1. You can set labels on an individual [`Mapping`](../mappings). These labels + will only apply to requests that use that `Mapping`. + + ```yaml + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + metadata: + name: foo-mapping + spec: + hostname: "*" + prefix: /foo/ + service: foo + labels: + "domain1": + - "group1": + - "my_label_specifier_1" + - "my_label_specifier_2" + - "group2": + - "my_label_specifier_3" + - "my_label_specifier_4" + "domain2": + - ... + ``` + +2. You can set global labels in the [`ambassador` `Module`](../../running/ambassador). + These labels will apply to _every_ request that goes through $productName$. + + ```yaml + --- + apiVersion: getambassador.io/v3alpha1 + kind: Module + metadata: + name: ambassador + spec: + config: + default_labels: + "domain1": + defaults: + - "my_label_specifier_a" + - "my_label_specifier_b" + "domain2": + defaults: + - "my_label_specifier_c" + - "my_label_specifier_d" + ``` + + If a `Mapping` and the defaults both give label groups for the same domain, the + default labels are prepended to each label group from the `Mapping`. If the `Mapping` + does not give any labels for that domain, the global labels are placed into a new + label group named "default" for that domain. + +Each label group is a list of labels; each label is a key/value pair. Since the label +group is a list rather than a map: +- it is possible to have multiple labels with the same key, and +- the order of labels matters. 
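To illustrate how the defaults combine with a `Mapping`'s labels — this is a hypothetical walk-through based on the `foo-mapping` and `Module` examples above — a request routed by `foo-mapping` would carry label groups for `domain1` equivalent to:

```yaml
# Hypothetical merged result for "domain1", combining the Module's
# default_labels with foo-mapping's labels from the examples above:
"domain1":
- "group1":
  - "my_label_specifier_a"  # prepended from default_labels
  - "my_label_specifier_b"  # prepended from default_labels
  - "my_label_specifier_1"  # from the Mapping
  - "my_label_specifier_2"  # from the Mapping
- "group2":
  - "my_label_specifier_a"  # prepended from default_labels
  - "my_label_specifier_b"  # prepended from default_labels
  - "my_label_specifier_3"  # from the Mapping
  - "my_label_specifier_4"  # from the Mapping
```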
> Note: The terminology used by the Envoy documentation differs from
> the terminology used by $productName$:
>
> | $productName$   | Envoy             |
> |-----------------|-------------------|
> | label group     | descriptor        |
> | label           | descriptor entry  |
> | label specifier | rate limit action |

The `Mapping`s' listing of the groups of specifiers has names for the
groups; the group names are useful for humans dealing with the YAML,
but are ignored by $productName$; all $productName$ cares about is the
*contents* of the groupings of label specifiers.

There are 5 types of label specifiers in $productName$:

1. `source_cluster`

   ```yaml
   source_cluster:
     key: source_cluster
   ```

   Sets the label `source_cluster=«Envoy source cluster name»`. The Envoy
   source cluster name is the name of the Envoy cluster that the request came
   in on.

   The syntax of this label currently _requires_ `source_cluster: {}`.

2. `destination_cluster`

   ```yaml
   destination_cluster:
     key: destination_cluster
   ```

   Sets the label `destination_cluster=«Envoy destination cluster name»`. The Envoy
   destination cluster name is the name of the Envoy cluster to which the `Mapping`
   routes the request. You can get the name for a cluster from the
   [diagnostics service](../../running/diagnostics/).

   The syntax of this label currently _requires_ `destination_cluster: {}`.

3. `remote_address`

   ```yaml
   remote_address:
     key: remote_address
   ```

   Sets the label `remote_address=«IP address of the client»`. The IP address of
   the client will be taken from the `X-Forwarded-For` header, to correctly manage
   situations with L7 proxies. This requires that $productName$ be correctly
   [configured to communicate](../../../howtos/configure-communications).

   The syntax of this label currently _requires_ `remote_address: {}`.

4. `request_headers`

   ```yaml
   request_headers:
     header_name: "header-name"
     key: mykey
   ```

   If a header named `header-name` is present, set the label `mykey=«value of the header»`.
   If no header named `header-name` is present, **the entire label group is dropped**.

5. `generic_key`

   ```yaml
   generic_key:
     key: mykey
     value: myvalue
   ```

   Sets the label `«mykey»=«myvalue»`. Note that supplying a `key` is supported only
   with the Envoy V3 API.

## Rate limiting requests based on their labels

This is determined by your `RateLimitService` implementation.

$AESproductName$ provides a `RateLimitService` implementation that is
configured by a `RateLimit` custom resource.

See the [$AESproductName$ RateLimit Reference](./rate-limits) for information on how
to configure `RateLimit`s in $AESproductName$.

See the [Basic Rate Limiting tutorial](../../../howtos/rate-limiting-tutorial) for an
example `RateLimitService` implementation for $OSSproductName$.
diff --git a/docs/edge-stack/latest/topics/using/rate-limits/rate-limits.md b/docs/edge-stack/latest/topics/using/rate-limits/rate-limits.md
new file mode 100644
index 000000000..f764000bc
--- /dev/null
+++ b/docs/edge-stack/latest/topics/using/rate-limits/rate-limits.md
@@ -0,0 +1,534 @@
# Rate limiting reference

Rate limiting in $productName$ is composed of two parts:

* Labels that get attached to requests; a label is basic metadata that
  is used by the `RateLimitService` to decide which limits to apply to
  the request.
* `RateLimit`s configure $productName$'s built-in
  `RateLimitService`, and set limits based on the labels on the
  request.
> This page covers using `RateLimit` resources to configure $productName$
> to rate limit requests. See the [Basic Rate Limiting article](../) for
> information on adding labels to requests.

## Rate limiting requests based on their labels

A `RateLimit` resource defines a list of limits that apply to
different requests.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: example-limits
spec:
  domain: "my_domain"
  limits:
  - name: per-minute-limit # optional; default is `$name.$namespace-$idx`, where name is the name of the CRD and idx is the index into the limits array
    action: Enforce # optional; default is "Enforce". valid values are "Enforce" and "LogOnly", case insensitive.
    pattern:
    - "my_key1": "my_value1"
      "my_key2": "my_value2"
    - "my_key3": "my_value3"
    rate: 5
    unit: "minute"
    injectRequestHeaders: # optional
    - name: "header-name-string-1" # required
      value: "go-template-string" # required
    - name: "header-name-string-2" # required
      value: "go-template-string" # required
    injectResponseHeaders: # optional
    - name: "header-name-string-1" # required
      value: "go-template-string" # required
    errorResponse: # optional
      headers: # optional; default is [], adding no additional headers
      - name: "header-name-string" # required
        value: "go-template-string" # required
      bodyTemplate: "string" # optional; default is "", returning no response body
  - name: per-second-limit
    action: Enforce
    pattern:
    - "my_key4": "" # check the key but not the value
    - "my_key5": "*" # check the key but not the value
    rate: 5
    unit: "second"
  ...
```

It makes no difference whether limits are defined together in one
`RateLimit` resource or are defined separately in many `RateLimit`
resources.

 - `name`: The symbolic name for this ratelimit. Used to set dynamic metadata that can be referenced in the Envoy access log.

 - `action`: Each limit has an *action* that it will take when it is exceeded. Actions include:

   * `Enforce` - enforce this limit on the client by returning HTTP 429. This is the default action.
   * `LogOnly` - do not enforce this limit on the client, and allow the client request upstream if no other limit applies.

 - `pattern`: Each limit has a *pattern* that matches against a label
   group on a request to decide if that limit should apply to that
   request. For a pattern to match, the request's label group must
   start with exactly the labels specified in the pattern, in order.
   If a label in a pattern has an empty string or `"*"` as the value,
   then it only checks the key of that label on the request, not the
   value. If a list item in the pattern has multiple key/value pairs,
   the item matches a label if any of those pairs match it.

   For example, the pattern

   ```yaml
   pattern:
   - "key1": "foo"
     "key1": "bar"
   - "key2": ""
   ```

   matches the label group

   ```yaml
   - key1: foo
   - key2: baz
   - otherkey: knob
   ```

   and

   ```yaml
   - key1: bar
   - key2: baz
   - otherkey: knob
   ```

   but not the label group

   ```yaml
   - key0: frob
   - key1: foo
   - key2: baz
   ```

   If a label group is matched by multiple patterns, the pattern with
   the longest list of items wins.

   If a request has multiple label groups, then multiple limits may apply
   to that request; if *any* of the limits are being hit, then Ambassador
   will reject the request as an HTTP 429.

 - `rate`, `unit`: The limit itself is specified as an integer number
   of requests per a unit of time. Valid units of time are `second`,
   `minute`, `hour`, or `day` (all case-insensitive).

   So for example

   ```yaml
   rate: 5
   unit: minute
   ```

   would allow 5 requests per minute, and any requests in excess of
   that would result in HTTP 429 errors. Note that the limit is
   tracked in terms of wall clock minutes and not a sliding
   window. For example, if 5 requests happen 59 seconds into the
   current wall clock minute, then clients only need to wait a second
   in order to make another 5 requests.

 - `burstFactor`: The optional `burstFactor` field changes enforcement
   of rate limits in two ways:

   * A `burstFactor` of `N` will allow unused requests from a window
     of `N` time units to be rolled over and included in the current
     request limit. This will effectively result in two separate
     rate limits being applied depending on the dynamic behavior of
     clients. Clients that only make occasional bursts will end up
     with an effective rate limit of `burstFactor` * `rate`, whereas
     clients that make requests continually will be limited to just
     `rate`. For example:

     ```yaml
     rate: 5
     unit: minute
     burstFactor: 5
     ```

     would allow bursts of up to 25 requests per minute, but only
     permit continual usage of 5 requests per minute.

   * A `burstFactor` of `1` is logically very similar to no
     `burstFactor`, with one key difference. When `burstFactor` is
     specified, requests are tracked with a sliding window rather than
     in terms of wall clock minutes. For example:

     ```yaml
     rate: 5
     unit: minute
     burstFactor: 1
     ```

     With*out* the `burstFactor` of 1, the above limit would permit up
     to 5 requests within any wall clock minute. *With* the
     `burstFactor` of 1, no more than 5 requests are
     permitted within any 1-minute sliding window.

   Note that the `burstFactor` field only works when the
   `AES_RATELIMIT_PREVIEW` environment variable is set to `true`.

 - `injectRequestHeaders`, `injectResponseHeaders`: If this limit's
   pattern matches the request, then `injectRequestHeaders` injects
   HTTP header fields into the request before sending it to the
   upstream service (assuming the limit even allows the request to go
   to the upstream service), and `injectResponseHeaders` injects
   headers into the response sent back to the client (whether the
   response came from the upstream service or is an HTTP 429 response
   because it got rate limited). This is very similar to
   `injectRequestHeaders` in a [`JWT` Filter][]. The header value is
   specified as a [Go `text/template`][] string, with the following
   data made available to it:

   * `.RateLimitResponse.OverallCode` → `int` : `1` for OK, `2` for
     OVER_LIMIT.
   * `.RateLimitResponse.Statuses` →
     `[]*v2.RateLimitResponse_DescriptorStatus` :
     the itemized status codes for each limit that was selected for
     this request.
   * `.RetryAfter` → `time.Duration` : the amount of time until all of
     the limits would allow access again (0 if they all currently
     allow access).

   Also available to the template are the [standard functions available
   to Go `text/template`s][Go `text/template` functions], as well as:

   * a `hasKey` function that takes a string-indexed map as arg1,
     and returns whether it contains the key arg2. (This is the same
     as the [Sprig function of the same name][Sprig `hasKey`].)

   * a `doNotSet` function that causes the result of the template to
     be discarded, and the header field to not be adjusted. This is
     useful for conditionally setting a header field, rather
     than setting it to an empty string. Note that
     this does _not_ unset an existing header field of the same name.

 - `errorResponse` allows templating the error response, overriding the default JSON error format. Make sure you validate and test your template so that it does not generate server-side errors on top of client errors.
   * `headers` sets extra HTTP header fields in the error response. The value is specified as a [Go `text/template`][] string, with the same data made available to it as `bodyTemplate` (below). It does not have access to the `json` function.
   * `bodyTemplate` specifies the body of the error as a [Go `text/template`][] string, with the following data made available to it:

     * `.status_code` → `integer` : the HTTP status code to be returned
     * `.message` → `string` : the error message string
     * `.request_id` → `string` : the Envoy request ID, for correlation (hidden from `{{ . | json "" }}` unless `.status_code` is in the 5XX range)
     * `.RateLimitResponse.OverallCode` → `int` : `1` for OK, `2` for
       OVER_LIMIT.
     * `.RateLimitResponse.Statuses` →
       `[]*v3.RateLimitResponse_DescriptorStatus` :
       the itemized status codes for each limit that was selected for
       this request.
     * `.RetryAfter` → `time.Duration` : the amount of time until all of
       the limits would allow access again (0 if they all currently
       allow access).

     Also available to the template are the [standard functions
     available to Go `text/template`s][Go `text/template` functions],
     as well as:

     * a `json` function that formats arg2 as JSON, using the arg1
       string as the starting indentation. For example, the
       template `{{ json "indent>" "value" }}` would yield the
       string `indent>"value"`.

[`JWT` Filter]: ../../filters/jwt
[Go `text/template`]: https://golang.org/pkg/text/template/
[Go `text/template` functions]: https://golang.org/pkg/text/template/#hdr-Functions
[Sprig `hasKey`]: https://masterminds.github.io/sprig/dicts.html#haskey

## Logging RateLimits

It is often desirable to know which RateLimit, if any, is applied to a client's request. This can be achieved by leveraging dynamic metadata available to Envoy's access log.

The following dynamic metadata keys are available under the `envoy.filters.http.ratelimit` namespace. See the [Envoy access log documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage) for more on Envoy's access log format.

* `aes.ratelimit.name` - The symbolic `name` of the `Limit` on a `RateLimit` object that triggered the ratelimit action.
* `aes.ratelimit.action` - The action that the `Limit` took. Possible values include `Enforce` and `LogOnly`. When the action is `Enforce`, the client was ratelimited with HTTP 429. When the action is `LogOnly`, the ratelimit was not enforced and the client's request was allowed upstream.
* `aes.ratelimit.retry_after` - The time in seconds until the `Limit` resets. Equivalent to the value of the `Retry-After` header returned to the client if the limit was enforced.

If a `Limit` with a `LogOnly` action is exceeded and no non-`LogOnly` `Limit`s were exceeded, the request will be allowed upstream and that `Limit` will be available as dynamic metadata as described above.
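
For instance, here is a hedged sketch of pairing a `LogOnly` limit with an `Enforce` limit so that a stricter candidate limit can be observed in the access log before it is enforced; the resource name, rates, and the `example-app` label are hypothetical:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: observed-limits      # hypothetical name
spec:
  domain: ambassador
  limits:
  - name: enforced-limit     # exceeding this returns HTTP 429 today
    action: Enforce
    pattern:
    - generic_key: "example-app"
    rate: 100
    unit: minute
  - name: candidate-limit    # exceeding this is only recorded in dynamic metadata
    action: LogOnly
    pattern:
    - generic_key: "example-app"
    rate: 20
    unit: minute
```

With a configuration like this, `aes.ratelimit.name` and `aes.ratelimit.action` in the access log show when the stricter candidate limit would have triggered, without returning HTTP 429 for it.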

Note that if multiple `Limit`s were exceeded by a request, only the `Limit` with the longest time until reset (i.e., its Retry-After value) will be available as dynamic metadata above. The only exception is if the `Limit` with the longest time until reset is `LogOnly` and there exists another non-`LogOnly` limit that was exceeded. In that case, the non-`LogOnly` `Limit` will be available as dynamic metadata. This ensures that `LogOnly` `Limit`s will never prevent non-`LogOnly` `Limit`s from enforcing or from being observable in the Envoy access log.

### An example access log specification for RateLimit dynamic metadata

Module:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    envoy_log_format: 'ratelimit %DYNAMIC_METADATA(envoy.filters.http.ratelimit:aes.ratelimit.name)% took action %DYNAMIC_METADATA(envoy.filters.http.ratelimit:aes.ratelimit.action)%'
```

## RateLimit examples

### An example service-level rate limit

The following `Mapping` resource will add a
`my_default_generic_key_label` `generic_key` label to every request to
the `foo-app` service:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: foo-app
spec:
  hostname: "*"
  prefix: /foo/
  service: foo
  labels:
    ambassador:
    - label_group:
      - generic_key:
          value: my_default_generic_key_label
```

You can then create a default RateLimit for every request that matches
this label:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: default-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern:
    - generic_key: "my_default_generic_key_label"
    rate: 10
    unit: minute
```

> Tip: For testing purposes, it is helpful to configure per-minute
> rate limits before switching them to per-second or per-hour
> limits.

### An example with multiple labels

Mappings can have multiple `labels` which annotate a given request.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: catalog
spec:
  hostname: "*"
  prefix: /catalog/
  service: catalog
  labels:
    ambassador:                  # the label domain
    - string_request_label:      # the label group name -- useful for humans, ignored by $productName$
      - generic_key:             # this is a generic_key label
          value: catalog         # annotate the request with `generic_key=catalog`
    - header_request_label:      # another label group name
      - request_headers:         # this is a label using request headers
          key: headerkey         # annotate the request with `headerkey=the specific HTTP method used`
          header_name: ":method" # if the :method header is somehow unset, the whole group will be dropped.
    - multi_request_label_group:
      - request_headers:
          key: authorityheader
          header_name: ":authority"
      - request_headers:
          key: xuserheader
          header_name: "x-user"  # again, if x-user is not present, the _whole group_ is dropped
```

Let's digest the above example:

* Request labels must be part of the "ambassador" label domain. More
  precisely, the domain must match your `RateLimitService.spec.domain`,
  which defaults to `Module.spec.default_label_domain`, which in turn
  defaults to `ambassador`; normally you should accept the defaults and
  set the domain on your Mappings to "ambassador".
* Each label group must have a name, e.g., `string_request_label`.
* The `string_request_label` group simply adds the string `catalog` to every
  incoming request to the given mapping.
  The string is referenced with the key `generic_key`.
* The `header_request_label` group labels the request with the value of a
  specific HTTP header, in this case the method. Note that HTTP/2 request
  header names must be used here (e.g., the `host` header needs to be
  specified as the `:authority` header).
* Multiple labels can be part of a single named group, e.g.,
  `multi_request_label_group` specifies two different headers to be added.
* When an HTTP header is not present, the entire named label group is
  omitted. The optional `omit_if_not_present: true` is an explicit notation
  to remind end-users of this limitation; `false` is *not* a supported
  value.

### An example with multiple limits

Labels can be grouped. This allows for a single request to count
against multiple different `RateLimit` resources. For example,
imagine the following scenario:

1. Users should be limited on the total number of requests that can be
   sent to a set of endpoints
2. On a specific service, stricter limits are desirable

The following `Mapping` resources could be configured:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: foo-app
spec:
  hostname: "*"
  prefix: /foo/
  service: foo
  labels:
    ambassador:
    - foo-app_label_group:
      - generic_key:
          value: foo-app
    - total_requests_group:
      - remote_address: {}   # the empty value is _required_ at present
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: bar-app
spec:
  hostname: "*"
  prefix: /bar/
  service: bar
  labels:
    ambassador:
    - bar-app_label_group:
      - generic_key:
          value: bar-app
    - total_requests_group:
      - remote_address: {}   # the empty value is _required_ at present
```

Now requests to the `foo-app` and the `bar-app` would be labeled with

```yaml
- "generic_key": "foo-app"
- "remote_address": "10.10.11.12"
```

and

```yaml
- "generic_key": "bar-app"
- "remote_address": "10.10.11.12"
```

respectively. `RateLimit`s on these two services could be created as
follows:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: foo-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: "foo-app"}]
    rate: 10
    unit: second
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: bar-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: "bar-app"}]
    rate: 20
    unit: second
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: user-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{remote_address: "*"}]
    rate: 100
    unit: minute
```

### An example with global labels and groups

Global labels are prepended to every single label group.
In the above
example, if the following global label was added in the `ambassador`
Module:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    default_label_domain: ambassador
    default_labels:
      ambassador:
        defaults:
        - generic_key:
            value: "my_default_label"
```

The labels metadata would change

 - from
   ```yaml
   - "generic_key": "foo-app"
   - "remote_address": "10.10.11.12"
   ```
   to
   ```yaml
   - "generic_key": "my_default_label"
   - "generic_key": "foo-app"
   - "remote_address": "10.10.11.12"
   ```

and

 - from
   ```yaml
   - "generic_key": "bar-app"
   - "remote_address": "10.10.11.12"
   ```
   to
   ```yaml
   - "generic_key": "my_default_label"
   - "generic_key": "bar-app"
   - "remote_address": "10.10.11.12"
   ```

respectively. The `RateLimit`s would then need updated patterns to
handle the new label groups appropriately.
diff --git a/docs/edge-stack/latest/tutorials/getting-started.md b/docs/edge-stack/latest/tutorials/getting-started.md
new file mode 100644
index 000000000..f7f8a3481
--- /dev/null
+++ b/docs/edge-stack/latest/tutorials/getting-started.md
@@ -0,0 +1,158 @@
---
description: "A simple three-step guide to installing $productName$ and quickly getting started routing traffic from the edge of your Kubernetes cluster to your services."
---

import Alert from '@material-ui/lab/Alert';
import GettingStartedEdgeStack21Tabs from './gs-tabs'

# $productName$ quick start

**Contents**

- [1. Installation](#1-installation)
- [Getting a license from Ambassador Cloud](#getting-a-license-from-ambassador-cloud)
- [2. Routing traffic from the edge](#2-routing-traffic-from-the-edge)
- [What's next?](#whats-next)

## 1. Installation

### Getting a license from Ambassador Cloud

We'll start by installing $productName$ into your cluster.

$productName$ requires a [license](../../topics/using/licenses) to function, so the first step is getting one to use while installing. If you are in an air-gapped environment, please [contact sales](https://www.getambassador.io/contact-us).

1. Log in to [Ambassador Cloud](https://app.getambassador.io/cloud/edge-stack/license/existing/) with GitHub, GitLab or Google and select your team account.

2. Follow the prompts to name the cluster and click **Generate Key**.

3. Either follow the installation instructions there, or copy the token out and follow along here.

4. Once your cluster is connected to Ambassador Cloud, a community license is automatically applied.

**We recommend using Helm** to install, but there are other options below to choose from. Please replace `<token>` below with your token from Ambassador Cloud.

<GettingStartedEdgeStack21Tabs version="$version$" />

Success! At this point, you have installed $productName$. Now let's get some traffic flowing to your services.

## 2. Routing traffic from the edge

$productName$ uses Kubernetes Custom Resource Definitions (CRDs) to declaratively define its desired state. The workflow you are going to build uses a simple demo app, a **`Listener` CRD**, and a **`Mapping` CRD**. The `Listener` CRD tells $productName$ what port to listen on, and the `Mapping` CRD tells $productName$ how to route incoming requests by host and URL path from the edge of your cluster to Kubernetes services.

1. Start by creating a `Listener` resource for HTTP on port 8080:

   ```
   kubectl apply -f - <<EOF
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Listener
   metadata:
     name: edge-stack-listener-8080   # name is illustrative
     namespace: ambassador
   spec:
     port: 8080
     protocol: HTTP
     securityModel: XFP
     hostBinding:
       namespace:
         from: ALL
   EOF
   ```

2. Next, apply the YAML for the demo quote application. Its manifest creates a `quote` Service and Deployment.

   > The Service and Deployment are created in your default namespace. You can use `kubectl get services,deployments quote` to see their status.

3. Generate the YAML for a `Mapping` to tell $productName$ to route all traffic inbound to the `/backend/` path to the `quote` Service.

   In this step, we'll be using the Mapping Editor, which you can find in the service details view of your [Ambassador Cloud connected installation](#getting-a-license-from-ambassador-cloud).
   Open your browser to https://app.getambassador.io/cloud/services/quote/details and click on **New Mapping**.

   Default options are automatically populated. **Enable and configure the following settings**, then click **Generate Mapping**:
   - **Path Matching**: `/backend/`
   - **OpenAPI Docs**: `/.ambassador-internal/openapi-docs`

   ![](../images/mapping-editor.png)

   Whether or not you decide to automatically push the change to Git for this newly created Mapping resource, the resulting Mapping should be similar to the example below.

   **Apply this YAML to your target cluster now.**

   ```yaml
   kubectl apply -f - <<EOF
   ---
   apiVersion: getambassador.io/v3alpha1
   kind: Mapping
   metadata:
     name: quote-backend   # name is illustrative
   spec:
     hostname: "*"
     prefix: /backend/
     service: quote
     docs:
       path: "/.ambassador-internal/openapi-docs"
   EOF
   ```

Victory! You have created your first $productName$ Mapping, routing a request from your cluster's edge to a service!

## What's next?

Explore some of the popular tutorials on $productName$:

* [Intro to Mappings](../../topics/using/intro-mappings/): declaratively routes traffic from
the edge of your cluster to a Kubernetes service
* [Host resource](../../topics/running/host-crd/): configure a hostname and TLS options for your ingress.
* [Rate Limiting](../../topics/using/rate-limits/rate-limits/): create policies to control sustained traffic loads

$productName$ has a comprehensive range of [features](/features/) to
support the requirements of any edge microservice.

To learn more about how $productName$ works, read the [$productName$ Story](../../about/why-ambassador).
diff --git a/docs/edge-stack/latest/tutorials/gs-tabs.js b/docs/edge-stack/latest/tutorials/gs-tabs.js
new file mode 100644
index 000000000..6cbeab1bc
--- /dev/null
+++ b/docs/edge-stack/latest/tutorials/gs-tabs.js
@@ -0,0 +1,132 @@
import AppBar from '@material-ui/core/AppBar';
import Box from '@material-ui/core/Box';
import Tab from '@material-ui/core/Tab';
import Tabs from '@material-ui/core/Tabs';
import { makeStyles } from '@material-ui/core/styles';
import PropTypes from 'prop-types';
import React from 'react';

import CodeBlock from '../../../../../src/components/CodeBlock';
import Icon from '../../../../../src/components/Icon';

function TabPanel(props) {
  const { children, value, index, ...other } = props;

  // Render the panel contents only when its tab is selected.
  return (
    <div
      role="tabpanel"
      hidden={value !== index}
      id={`simple-tabpanel-${index}`}
      aria-labelledby={`simple-tab-${index}`}
      {...other}
    >
      {value === index && <Box p={3}>{children}</Box>}
    </div>
  );
}

TabPanel.propTypes = {
  children: PropTypes.node,
  index: PropTypes.any.isRequired,
  value: PropTypes.any.isRequired,
};

function a11yProps(index) {
  return {
    id: `simple-tab-${index}`,
    'aria-controls': `simple-tabpanel-${index}`,
  };
}

const useStyles = makeStyles((theme) => ({
  root: {
    flexGrow: 1,
    backgroundColor: 'transparent',
  },
}));

export default function GettingStartedEdgeStack21Tabs(props) {
  const version = props.version;
  const classes = useStyles();
  const [value, setValue] = React.useState(0);

  const handleChange = (event, newValue) => {
    setValue(newValue);
  };

  return (
    <div className={classes.root}>
      <AppBar position="static" color="default" elevation={0}>
        {/* icon names are assumed; the original values were lost */}
        <Tabs value={value} onChange={handleChange} aria-label="installation method tabs">
          <Tab
            icon={<Icon name="helm" />}
            label="Helm 3"
            {...a11yProps(0)}
            style={{ minWidth: '10%', textTransform: 'none' }}
          />
          <Tab
            icon={<Icon name="kubernetes" />}
            label="Kubernetes YAML"
            {...a11yProps(1)}
            style={{ minWidth: '10%', textTransform: 'none' }}
          />
        </Tabs>
      </AppBar>

      <TabPanel value={value} index={0}>
        {/*Helm 3 install instructions*/}
        <CodeBlock>
          {'# Add the Repo:' +
            '\n' +
            'helm repo add datawire https://app.getambassador.io' +
            '\n' +
            'helm repo update' +
            '\n \n' +
            '# Create Namespace and Install:' +
            '\n' +
            'kubectl create namespace ambassador && \\' +
            '\n' +
            `kubectl apply -f https://app.getambassador.io/yaml/edge-stack/${version}/aes-crds.yaml` +
            '\n \n' +
            'kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system' +
            '\n \n' +
            'helm install edge-stack --namespace ambassador datawire/edge-stack \\' +
            '\n' +
            '  --set emissary-ingress.agent.cloudConnectToken=<token> && \\' +
            '\n' +
            'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes'}
        </CodeBlock>
      </TabPanel>

      <TabPanel value={value} index={1}>
        {/*YAML install instructions*/}
        <CodeBlock>
          {`kubectl apply -f https://app.getambassador.io/yaml/edge-stack/${version}/aes-crds.yaml && \\` +
            '\n' +
            'kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system' +
            '\n \n' +
            `kubectl apply -f https://app.getambassador.io/yaml/edge-stack/${version}/aes.yaml && \\` +
            '\n' +
            `kubectl create configmap --namespace ambassador edge-stack-agent-cloud-token --from-literal=CLOUD_CONNECT_TOKEN=<token> && \\` +
            '\n' +
            'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes' +
            '\n'}
        </CodeBlock>
      </TabPanel>
    </div>
  );
}
diff --git a/docs/edge-stack/latest/tutorials/gs-tabs2.js b/docs/edge-stack/latest/tutorials/gs-tabs2.js
new file mode 100644
index 000000000..bfd950477
--- /dev/null
+++ b/docs/edge-stack/latest/tutorials/gs-tabs2.js
@@ -0,0 +1,174 @@
import AppBar from '@material-ui/core/AppBar';
import Box from '@material-ui/core/Box';
import Tab from '@material-ui/core/Tab';
import Tabs from '@material-ui/core/Tabs';
import { makeStyles } from '@material-ui/core/styles';
import PropTypes from 'prop-types';
import React from 'react';

import CodeBlock from '../../../../../src/components/CodeBlock';

function TabPanel(props) {
  const { children, value, index, ...other } = props;

  // Render the panel contents only when its tab is selected.
  return (
    <div
      role="tabpanel"
      hidden={value !== index}
      id={`simple-tabpanel-${index}`}
      aria-labelledby={`simple-tab-${index}`}
      {...other}
    >
      {value === index && <Box p={3}>{children}</Box>}
    </div>
  );
}

TabPanel.propTypes = {
  children: PropTypes.node,
  index: PropTypes.any.isRequired,
  value: PropTypes.any.isRequired,
};

function a11yProps(index) {
  return {
    id: `simple-tab-${index}`,
    'aria-controls': `simple-tabpanel-${index}`,
  };
}

const useStyles = makeStyles((theme) => ({
  root: {
    flexGrow: 1,
    backgroundColor: 'transparent',
  },
}));

export default function SimpleTabs() {
  const classes = useStyles();
  const [value, setValue] = React.useState(0);

  const handleChange = (event, newValue) => {
    setValue(newValue);
  };

  return (
    <div className={classes.root}>
      <AppBar position="static" color="default" elevation={0}>
        {/* tab labels are inferred from the panel comments below */}
        <Tabs value={value} onChange={handleChange} aria-label="connection method tabs">
          <Tab label="Helm 3" {...a11yProps(0)} style={{ minWidth: '10%', textTransform: 'none' }} />
          <Tab label="Helm 2" {...a11yProps(1)} style={{ minWidth: '10%', textTransform: 'none' }} />
          <Tab label="Kubernetes YAML" {...a11yProps(2)} style={{ minWidth: '10%', textTransform: 'none' }} />
          <Tab label="Edgectl" {...a11yProps(3)} style={{ minWidth: '10%', textTransform: 'none' }} />
        </Tabs>
      </AppBar>

      <TabPanel value={value} index={0}>
        {/*Helm 3 token install instructions*/}
        Log in to{' '}
        <a href="https://app.getambassador.io/cloud/">Ambassador Cloud</a>.
        Click <strong>Connect my cluster to Ambassador Cloud</strong>, then{' '}
        <strong>Connect via Helm</strong>. The slideout contains instructions with a
        unique <code>cloud-connect-token</code> that is used to connect your
        cluster to your Ambassador Cloud account.
        <br />
        Run the following command, replacing <code>$TOKEN</code>{' '}
        with your token:
        <CodeBlock>
          {'helm upgrade ambassador --namespace ambassador datawire/ambassador \\' +
            '\n' +
            '  --set agent.cloudConnectToken=$TOKEN && \\' +
            '\n' +
            'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes'}
        </CodeBlock>
      </TabPanel>

      <TabPanel value={value} index={1}>
        {/*Helm 2 token install instructions*/}
        Log in to{' '}
        <a href="https://app.getambassador.io/cloud/">Ambassador Cloud</a>.
        Click <strong>Connect my cluster to Ambassador Cloud</strong>, then{' '}
        <strong>Connect via Helm</strong>. The slideout contains instructions with a
        unique <code>cloud-connect-token</code> that is used to connect your
        cluster to your Ambassador Cloud account.
        <br />
        Run the following command, replacing <code>$TOKEN</code>{' '}
        with your token:
        <CodeBlock>
          {'helm upgrade --namespace ambassador ambassador datawire/ambassador \\' +
            '\n' +
            '  --set crds.create=false --set agent.cloudConnectToken=$TOKEN && \\' +
            '\n' +
            'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes'}
        </CodeBlock>
      </TabPanel>

      <TabPanel value={value} index={2}>
        {/*YAML token install instructions*/}
        Log in to{' '}
        <a href="https://app.getambassador.io/cloud/">Ambassador Cloud</a>.
        Click <strong>Connect my cluster to Ambassador Cloud</strong>, then{' '}
        <strong>Connect via Kubernetes YAML</strong>. The slideout contains instructions
        with a unique <code>cloud-connect-token</code> that is used to connect
        your cluster to your Ambassador Cloud account.
        <br />
        Run the following command, replacing <code>$TOKEN</code>{' '}
        with your token:
        <CodeBlock>
          {'kubectl create configmap -n ambassador ambassador-agent-cloud-token \\' +
            '\n' +
            '  --from-literal=CLOUD_CONNECT_TOKEN=$TOKEN'}
        </CodeBlock>
      </TabPanel>

      <TabPanel value={value} index={3}>
        {/*edgectl token install instructions*/}
        Connecting $productName$ that was installed via edgectl is
        identical to the Kubernetes YAML procedure.
        <br />
        Log in to{' '}
        <a href="https://app.getambassador.io/cloud/">Ambassador Cloud</a>.
        Click <strong>Connect my cluster to Ambassador Cloud</strong>, then{' '}
        <strong>Connect via Kubernetes YAML</strong>. The slideout contains instructions
        with a unique <code>cloud-connect-token</code> that is used to connect
        your cluster to your Ambassador Cloud account.
        <br />
        Run the following command, replacing <code>$TOKEN</code>{' '}
        with your token:
        <CodeBlock>
          {'kubectl create configmap -n ambassador ambassador-agent-cloud-token \\' +
            '\n' +
            '  --from-literal=CLOUD_CONNECT_TOKEN=$TOKEN'}
        </CodeBlock>
      </TabPanel>
    </div>
  );
}
diff --git a/docs/edge-stack/latest/versions.yml b/docs/edge-stack/latest/versions.yml
new file mode 100644
index 000000000..df6d9c9b1
--- /dev/null
+++ b/docs/edge-stack/latest/versions.yml
@@ -0,0 +1,35 @@
# branch info
branch: release/v3.8

# self
version: 3.8.1
productName: "Ambassador Edge Stack"
productNamePlural: "Ambassador Edge Stacks"
productNamespace: ambassador
productDeploymentName: edge-stack
productHelmName: edge-stack

# OSS (not self)
ossVersion: 3.8.1
ossDocsVersion: "pre-release"
ossChartVersion: 8.8.1
OSSproductName: "Emissary-ingress"
OSSproductNamePlural: "Emissary-ingresses"

# AES (self)
aesVersion: 3.8.1
aesDocsVersion: "pre-release"
aesChartVersion: 8.8.1
AESproductName: "Ambassador Edge Stack"
AESproductNamePlural: "Ambassador Edge Stacks"

# other products
qotmVersion: 1.7
quoteVersion: 0.5.0

# Most recent version from previous major versions
# This is mostly to ensure that the migration matrix stays up-to-date
versionTwoX: 2.5.1
chartVersionTwoX: 7.6.1
versionOneX: 1.14.4
chartVersionOneX: 6.9.5
diff --git a/docs/emissary/latest b/docs/emissary/latest
deleted file mode 120000
index 98fccd6d0..000000000
--- a/docs/emissary/latest
+++ /dev/null
@@ -1 +0,0 @@
-3.8
\ No newline at end of file
diff --git a/docs/emissary/latest/about/aes-emissary-eol.md b/docs/emissary/latest/about/aes-emissary-eol.md
new file mode 100644
index 000000000..1e4b2caa9
--- /dev/null
+++ b/docs/emissary/latest/about/aes-emissary-eol.md
@@ -0,0 +1,56 @@
# $productName$ End of Life Policy

This document describes the End of Life policy and maintenance windows for Ambassador Edge Stack and for the open source project Emissary-ingress.

## Supported Versions

Ambassador Edge Stack and Emissary-ingress versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.

**X-series (Major versions)**

- **1.y**: 1.0 GA in January 2020
- **2.y**: 2.0.4 GA in October 2021, and 2.1.0 in December 2021

**Y-release (Minor versions)**

- For 1.y, that is **1.14.z**
- For 2.y, that is **2.3.z**

In this document, **Current** refers to the latest X-series release, and **Maintenance** refers to the previous X-series release, which continues to receive security and Sev1 defect patches.

## CNCF Ecosystem Considerations

- Envoy releases a major version every 3 months and supports its previous releases for 12 months. Envoy does not support any release longer than 12 months.
- Kubernetes 1.19 and newer receive 12 months of patch support (the [Kubernetes Yearly Support Period](https://github.com/kubernetes/enhancements/blob/master/keps/sig-release/1498-kubernetes-yearly-support-period/README.md)).

# The Policy

> We will offer a 6-month maintenance window for the latest Y-release of an X-series after a new X-series goes GA and becomes the current release. For example, we will support 2.3 with security and Sev1 defect patches for six months after 3.0 is released.

> During the maintenance window, Y-releases will only receive security and Sev1 defect patches. Users desiring new features or bug fixes for lower-severity defects will need to upgrade to the current X-series.

> The current X-series will receive as many Y-releases as necessary and as often as we have new features or patches to release.

> Ambassador Labs offers no-downtime migration to current versions from maintenance releases.
> Migration from releases that are outside of the maintenance window may be subject to downtime.

> Artifacts of releases outside of the maintenance window will be frozen and will remain publicly available for download on a best-effort basis. These artifacts include Docker images, application binaries, Helm charts, etc.

### When we say support with “defect patches”, what do we mean?

- We will fix security issues in our Emissary-ingress and Ambassador Edge Stack code
- We will pick up security fixes from dependencies as they are made available
- We will not maintain forks of our major dependencies
- We will not attempt our own backports of critical fixes to dependencies which are out of support from their own communities

## Extended Maintenance for 1.14

Given this policy, we should have dropped maintenance for 1.14 in March 2022; however, we recognize that the introduction of an EOL policy necessitates a longer maintenance window. For this reason, we offer an "extended maintenance" window for 1.14 until the end of September 2022, 3 months after the latest 2.3 release. Please note that this extended maintenance window will not apply to customers using Kubernetes 1.22 and above, and this extended maintenance will also not provide a no-downtime migration path from 1.14 to 3.0.

After September 2022, the current series will be 3.x, and the maintenance series will be 2.y.
diff --git a/docs/emissary/latest/about/alternatives.md b/docs/emissary/latest/about/alternatives.md
new file mode 100644
index 000000000..bafec0873
--- /dev/null
+++ b/docs/emissary/latest/about/alternatives.md
@@ -0,0 +1,19 @@
# $productName$ vs. other software

Alternatives to $productName$ fall into three basic categories:

* Hosted API gateways, such as the [Amazon API gateway](https://aws.amazon.com/api-gateway/).
* Traditional API gateways, such as [Kong](https://konghq.org/).
* L7 proxies, such as [Traefik](https://traefik.io/), [NGINX](http://nginx.org/), [HAProxy](http://www.haproxy.org/), or [Envoy](https://www.envoyproxy.io), or Ingress controllers built on these proxies.

Both hosted API gateways and traditional API gateways are:

* Not self-service. The management interfaces on traditional API gateways are not designed for developer self-service, and provide limited safety and usability for developers.
* Not Kubernetes-native. They're typically configured using REST APIs, making it challenging to adopt cloud-native patterns such as GitOps and declarative configuration.
* [Designed for API management, rather than designed for microservices](../../topics/concepts/microservices-api-gateways).

A Layer 7 proxy can be used as an API gateway, but typically requires additional bespoke development to support microservices use cases. In fact, many API gateways package the additional features needed for an API gateway on top of an L7 proxy. $productName$ uses Envoy, while Kong uses NGINX. If you're interested in deploying Envoy directly, we've written an [introductory tutorial](https://www.datawire.io/guide/traffic/getting-started-lyft-envoy-microservices-resilience/).

## Istio

[Istio](https://istio.io) is an open-source service mesh, built on Envoy. A service mesh is designed to manage East/West traffic (traffic between servers within your data center), while an API gateway manages North/South traffic (in and out of your data center). Documentation on how to deploy $productName$ with Istio is [here](../../howtos/istio).
In general, we've found that North/South traffic is quite different from East/West traffic (i.e., you don't control the client in the North/South use case).
diff --git a/docs/emissary/latest/about/changes-2.x.md b/docs/emissary/latest/about/changes-2.x.md
new file mode 100644
index 000000000..4c4124864
--- /dev/null
+++ b/docs/emissary/latest/about/changes-2.x.md
@@ -0,0 +1,238 @@
import Alert from '@material-ui/lab/Alert';

Major Changes in $productName$ 2.X
==================================

The 2.X family introduces a number of changes to allow $productName$
to more gracefully handle larger installations, reduce global configuration to
better handle multitenant or multiorganizational installations, reduce memory
footprint, and improve performance. We welcome feedback! Join us on
[Slack](http://a8r.io/slack) and let us know what you think.

While $productName$ 2 is functionally compatible with $productName$ 1.14, note
that this is a **major version change** and there are important differences between
$productName$ 1.X and $productName$ $version$. For details, read on.

## 1. Configuration API Version `getambassador.io/v3alpha1`

$productName$ 2.0 introduced API version `getambassador.io/v3alpha1` to allow
certain changes in configuration resources that are not backwards compatible with
$productName$ 1.X. The most notable change is the addition of the
**mandatory** `Listener` resource; however, there are important changes
in `Host` and `Mapping` as well.

  $productName$ 2.X supports only API versions getambassador.io/v2
  and getambassador.io/v3alpha1. If you are using any resources with
  older API versions, you will need to upgrade them.

API version `getambassador.io/v3alpha1` replaces `x.getambassador.io/v3alpha1` from
the 2.0 developer previews. `getambassador.io/v3alpha1` may still change as we receive
feedback.

## 2. Kubernetes 1.22 and Structural CRDs

Kubernetes 1.22 requires [structural CRDs](https://kubernetes.io/blog/2019/06/20/crd-structural-schema/).
This change is primarily meant to support better CRD validation, but it also has the
effect that union types are no longer allowed in CRDs: for example, an element that can be
either a string or a list of strings is not allowed. Several such elements appeared in the
`getambassador.io/v2` CRDs, requiring changes. In `getambassador.io/v3alpha1`:

- `ambassador_id` must always be a list of strings
- `Host.mappingSelector` supersedes `Host.selector`, and controls association between Hosts and Mappings
- `Mapping.hostname` supersedes `Mapping.host` and `Mapping.host_regex`
- `Mapping.tls` can only be a string
- `Mapping.labels` always requires maps instead of strings

## 3. `Listener`s, `Host`s, and `Mapping`s

$productName$ 2.0 introduced the new **mandatory** `Listener` CRD, and made some changes
to the `Host` and `Mapping` resources.

### The `Listener` CRD

The new [`Listener` CRD](../../topics/running/listener) defines where and how $productName$ should listen for requests from the network, and which `Host` definitions should be used to process those requests.

**Note that `Listener`s are never created by $productName$, and must be defined by the user.** If you do not
define any `Listener`s, $productName$ will not listen anywhere for connections, and therefore won't do
anything useful. It will log a `WARNING` to this effect.

A `Listener` specifically defines:

- `port`: a port number on which to listen for new requests;
- `protocol` and `securityModel`: the protocol stack and security model to use (e.g. `HTTPS` using the `X-Forwarded-Proto` header); and
- `hostBinding`: how to tell if a given `Host` should be associated with this `Listener`:
  - a `Listener` can choose to consider all `Host`s, or only `Host`s in the same namespace as the `Listener`, or
  - a `Listener` can choose to consider only `Host`s with a particular Kubernetes `label`.

**Note that the `hostBinding` is mandatory.** A `Listener` _must_ specify how to identify the `Host`s to associate with the `Listener`, or the `Listener` will be rejected. This is intended to help prevent cases where a `Listener` mistakenly grabs too many `Host`s: if you truly need a `Listener` that associates with all `Host`s, the easiest way is to tell the `Listener` to look for `Host`s in all namespaces, with no further selectors, for example:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: all-hosts-listener
spec:
  port: 8080
  securityModel: XFP
  protocol: HTTPS
  hostBinding:
    namespace:
      from: ALL
```

A `Listener` that has no associated `Host`s will be logged as a `WARNING`, and will not be included in the Envoy configuration generated by $productName$.

Note also that there is no limit on how many `Listener`s may be created, and as such no limit on the number of ports to which a `Host` may be associated.

 Learn more about Listener.
+ Learn more about Host. +
+ +### Wildcard `Host`s No Longer Created + +In $productName$ 1.X, $productName$ would make sure that a wildcard `Host`, with a `hostname` of `"*"`, was always present. +$productName$ 2.X does **not** force a wildcard `Host`: if you need the wildcard behavior, you will need to create +a `Host` with a hostname of `"*"`. + +Of particular note is that $productName$ **will not** respond to queries to an IP address unless a wildcard +`Host` is present. If `foo.example.com` resolves to `10.11.12.13`, and the only `Host` has a +`hostname` of `foo.example.com`, then: + +- requests to `http://foo.example.com/` will work, but +- requests to `http://10.11.12.13/` will **not** work. + +Adding a `Host` with a `hostname` of `"*"` will allow the second query to work. + + + Learn more about Host. + + +### `Host` and `Mapping` Association + +The [`Host` CRD](../../topics/running/host-crd) continues to define information about hostnames, TLS certificates, and how to handle requests that are "secure" (using HTTPS) or "insecure" (using HTTP). The [`Mapping` CRD](../../topics/using/intro-mappings) continues to define how to map the URL space to upstream services. + +However, as of $productName$ 2.0, a `Mapping` will not be associated with a `Host` unless at least one of the following is true: + +- The `Mapping` specifies a `hostname` attribute that matches the `Host` in question. + + - Note that a `getambassador.io/v2` `Mapping` has `host` and `host_regex`, rather than `hostname`. + - A `getambassador.io/v3alpha1` `Mapping` will honor `host` and `host_regex` as a transition aid, but `host` and `host_regex` are deprecated in favor of `hostname`. + - A `Mapping` that specifies `host_regex: true` will be associated with all `Host`s. This is generally far less desirable than using `hostname` with a DNS glob. + +- The `Host` specifies a `mappingSelector` that matches the `Mapping`'s Kubernetes `label`s. + + - Note that a `getambassador.io/v2` `Host` has a `selector`, rather than a `mappingSelector`. + - A `getambassador.io/v3alpha1` `Host` ignores `selector` and, instead, looks only at `mappingSelector`. + - Where a `selector` got a default value if not specified, `mappingSelector` must be explicitly stated. + +Without either a `hostname` match or a `label` match, the `Mapping` will not be associated with the `Host` in question. This is intended to help manage memory consumption with large numbers of `Host`s and large numbers of `Mapping`s. + + + Learn more about Host.
+ Learn more about Mapping. +

### Independent `Host` Actions

Each `Host` can specify its `requestPolicy.insecure.action` independently of any other `Host`, allowing for HTTP routing as flexible as HTTPS routing.

 Learn more about Host.

### `Host`, `TLSContext`, and TLS Termination

As of $productName$ 2.0, **`Host`s are required for TLS termination**. It is no longer sufficient to create a [`TLSContext`](../../topics/running/tls/#tlscontext) by itself; the [`Host`](../../topics/running/host-crd) is required.

The minimal setup for TLS termination is therefore a Kubernetes `Secret` of type `kubernetes.io/tls`, and a `Host` that uses it:

```yaml
---
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: minimal-secret
data:
  tls secret goes here
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: minimal-host
spec:
  hostname: minimal.example.com
  tlsSecret:
    name: minimal-secret
```

It is **not** necessary to explicitly state a `TLSContext` in the `Host`: setting `tlsSecret` is enough. Of course, `TLSContext` is still the ideal way to share TLS configuration between more than one `Host`. For further examples, see [Configuring $productName$ Communications](../../howtos/configure-communications).

 Learn more about Host.
+ Learn more about TLSContext. +

### `Mapping`s, `TCPMapping`s, and TLS Origination

A `getambassador.io/v2` `Mapping` or `TCPMapping` could specify `tls: true` to indicate TLS origination without supplying a certificate. This is not supported in `getambassador.io/v3alpha1`: instead, use an `https://` prefix on the `service`. In the [Mapping](../../topics/using/mappings/#using-tls), this is straightforward, but [there are more details for the `TCPMapping` when using TLS](../../topics/using/tcpmappings/#tcpmapping-and-tls).

 Learn more about Mapping.

### `Mapping`s and `labels`

The `Mapping` CRD includes a `labels` field, used with rate limiting. The
[syntax of the `labels`](../../topics/using/rate-limits#attaching-labels-to-requests) has changed
for compatibility with Kubernetes 1.22.

 Learn more about Mapping.

## 4. Other Changes

### Envoy V3 API by Default

By default, $productName$ 2.X will configure Envoy using the
[V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api). In $productName$
$version$, you may switch back to Envoy V2 by setting the `AMBASSADOR_ENVOY_API_VERSION`
environment variable to "V2"; in $productName$ 2.2.0, support for the Envoy V2 API (and
the `AMBASSADOR_ENVOY_API_VERSION` environment variable) will be removed.

### More Performant Reconfiguration by Default

In $productName$ 1.X, the environment variable `AMBASSADOR_FAST_RECONFIGURE` could be used to enable a higher-performance implementation of the code $productName$ uses to validate and generate Envoy configuration. In $productName$ 2.X, this higher-performance mode is always enabled.

### Changes to the `ambassador` `Module`, and the `tls` `Module`

It is no longer possible to configure TLS using the `tls` element of the `ambassador` `Module` or using the `tls` `Module`. Both of these cases are correctly covered by the `TLSContext` resource.

With the introduction of the `Listener` resource, a few settings have moved from the `Module` to the `Listener`.

Configuration for the `PROXY` protocol is part of the `Listener` resource in $productName$ 2.X, so the `use_proxy_protocol` element of the `ambassador` `Module` is no longer supported. Note that the `Listener` resource can configure the `PROXY` protocol per-`Listener`, rather than having a single global setting. For further information, see the [`Listener` documentation](../../topics/running/listener).

`xff_num_trusted_hops` has been removed from the `Module`, and its functionality has been moved to the `l7Depth` setting in the `Listener` resource.

 Learn more about Listener.

### `TLSContext` `redirect_cleartext_from` and `Host` `insecure.additionalPort`

`redirect_cleartext_from` has been removed from the `TLSContext` resource; `insecure.additionalPort` has been removed from the `Host` CRD. Both of these cases are covered by adding additional `Listener`s. For further examples, see [Configuring $productName$ Communications](../../howtos/configure-communications).

### Service Preview No Longer Supported

Service Preview is no longer supported as of $productName$ 2.X, as its use cases are supported by Telepresence.

### Edge Policy Console No Longer Supported

The Edge Policy Console has been removed as of $productName$ 2.X, in favor of Ambassador Cloud.

### `Project` CRD No Longer Supported

The `Project` CRD has been removed as of $productName$ 2.X, in favor of Argo.
diff --git a/docs/emissary/latest/about/changes-3.y.md b/docs/emissary/latest/about/changes-3.y.md
new file mode 100644
index 000000000..91105d281
--- /dev/null
+++ b/docs/emissary/latest/about/changes-3.y.md
@@ -0,0 +1,52 @@
import Alert from '@material-ui/lab/Alert';

Major Changes in $productName$ 3.X
==================================

The 3.X family introduces a number of changes to ensure $productName$
keeps up with the latest Envoy versions and to support new features such as HTTP/3.
We welcome feedback! Join us on [Slack](http://a8r.io/slack) and let us know what you think.

$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy removing support for V2 transport protocol features. Below we outline some of these changes and things to consider when upgrading.

## 1. Envoy Upgraded to 1.22

$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy **1.22**, which keeps $productName$ up to date with
the latest security fixes, bug fixes, performance improvements, and feature enhancements provided by Envoy Proxy. Most of the changes are under the hood, but the most notable change for developers is the removal of support for the Envoy V2 transport protocol. This means all AuthServices and LogServices must be updated to use the V3 protocol.

This also means some of the v2 runtime bootstrap flags have been removed:

```yaml
# No longer necessary because this was removed from Envoy
# $productName$ already was converted to use the compressor API
# https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
"envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,

# Upgraded to v3, all support for V2 Transport Protocol removed
"envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
"envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,

# Developers will need to upgrade the TracingService to the V3 protocol, which no longer supports HTTP_JSON_V1
"envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,

# V2 protocol removed, so this flag is no longer necessary
"envoy.reloadable_features.enable_deprecated_v2_api": true,
```

 Learn more about Envoy Proxy changes.

## 2. Envoy V2 Protocol Support Removed

With the upgrade to Envoy **1.22**, the V2 Envoy transport protocol is no longer supported.
$productName$ 3.X **only** supports the [V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api).

The environment variable AMBASSADOR_ENVOY_API_VERSION has been removed and no longer has the effect
of changing the transport protocol.

Setting transport_protocol to v2 is no longer supported within CRDs (AuthService, etc.). An error will now be logged and $productName$ will not configure Envoy correctly. You should remove this field from your CRDs or convert it to v3, the only supported version at this time.
diff --git a/docs/emissary/latest/about/faq.md b/docs/emissary/latest/about/faq.md
new file mode 100644
index 000000000..513c75c55
--- /dev/null
+++ b/docs/emissary/latest/about/faq.md
@@ -0,0 +1,79 @@
# Frequently Asked Questions

## General

### Why $productName$?

Kubernetes shifts application architecture toward microservices, as well as the
development workflow toward full-cycle development.
$productName$ is designed for
the Kubernetes world with:

* Sophisticated traffic management capabilities (thanks to its use of [Envoy Proxy](https://www.envoyproxy.io)), such as load balancing, circuit breakers, rate limits, and automatic retries.
* A declarative, self-service management model built on Kubernetes Custom Resource Definitions, enabling GitOps-style continuous delivery workflows.

We've written about [the history of $productName$](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844), [Why $productName$ In Depth](../why-ambassador), [Features and Benefits](../features-and-benefits), and about the [evolution of API Gateways](../../topics/concepts/microservices-api-gateways/).

### What's the difference between $OSSproductName$ and $AESproductName$?

$OSSproductName$ is a CNCF Incubating project and provides the open-source core of $AESproductName$. Originally we called $OSSproductName$ the "Ambassador API Gateway", but as the project evolved, we realized that the functionality we were building had extended far beyond an API Gateway. In particular, the $AESproductName$ is intended to provide all the functionality you need at the edge -- hence, an "edge stack." This includes an API Gateway, ingress controller, load balancer, developer portal, and more.

### How is $AESproductName$ licensed?

The core $OSSproductName$ is open source under the Apache Software License 2.0. The GitHub repository for the core is [https://github.com/emissary-ingress/emissary](https://github.com/emissary-ingress/emissary). Some additional features of the $AESproductName$ (e.g., Single Sign-On) are not open source and are available under a proprietary license.

### Can I use the add-on features for $AESproductName$ for free?

Yes! The core functionality of the $AESproductName$ is free and has no limits whatsoever. If you wish to use one of our additional, proprietary features such as Single Sign-On, you can get a free community license for up to 5 requests per second by registering your connected $AESproductName$ installation in [Ambassador Cloud](https://app.getambassador.io/cloud/). Please contact [sales](/contact-us/) if you need more than 5 RPS.

For more details on core unlimited features and premium features, see the [editions page](/editions).

### How does $productName$ use Envoy Proxy?

$productName$ uses [Envoy Proxy](https://www.envoyproxy.io) as its core proxy. Envoy is an open-source, high-performance proxy originally written by Lyft. Envoy is now part of the Cloud Native Computing Foundation.

### Is $productName$ production ready?

[//]: # (+FIX+ Check for OSS)

Yes. Thousands of organizations, large and small, run $productName$ in production.
Public users include Chick-Fil-A, ADP, Microsoft, NVidia, and AppDirect, among others.

### What is the performance of $productName$?

There are many dimensions to performance. We published a benchmark of [$productName$ performance on Kubernetes](/resources/envoyproxy-performance-on-k8s/). Our internal performance regression tests cover many other scenarios; we expect to publish more data in the future.

### What's the difference between a service mesh (such as Istio) and $productName$?

[//]: # (+FIX+ Check for OSS)

Service meshes focus on routing internal traffic from service to service
("east-west"). $productName$ focuses on traffic into your cluster ("north-south").

While both a service mesh and $productName$ can route L7 traffic, the reality is that
these use cases are quite different. Many users will integrate $productName$ with a
service mesh. Production customers of $productName$ have integrated with Consul,
Istio, and Linkerd2.

## Common Configurations

### How do I disable the default Admin mappings?

See the [Protecting the Diagnostics Interface](../../howtos/protecting-diag-access) how-to.

## Troubleshooting

### How do I get help for $productName$?

We have an online [Slack community](http://a8r.io/slack) with thousands of
users. We try to help out as often as possible, although we can't promise a
particular response time. If you need a guaranteed SLA, we also have commercial
contracts. [Contact sales](/contact-us/) for more information.

### What do I do when I get the error `no healthy upstream`?

This error means that $productName$ could not connect to your backend service.
Start by verifying that your backend service is actually available and
responding by sending an HTTP request directly to the pod. Then, verify that
$productName$ is routing by deploying a test service and seeing if the mapping
works. Then, verify that your load balancer is properly routing requests to
$productName$. In general, verifying each network hop between your client and
backend service is critical to finding the source of the problem.
diff --git a/docs/emissary/latest/about/features-and-benefits.md b/docs/emissary/latest/about/features-and-benefits.md
new file mode 100644
index 000000000..a25d77526
--- /dev/null
+++ b/docs/emissary/latest/about/features-and-benefits.md
@@ -0,0 +1,43 @@
# Features and benefits

In cloud-native organizations, developers frequently take on responsibility for the full development lifecycle of a service, from development to QA to operations. $productName$ was specifically designed for these organizations where developers have operational responsibility for their service(s).

As such, the $productName$ is designed to be used by both developers and operators.

## Self-Service via Kubernetes Annotations

$productName$ is built from the start to support _self-service_ deployments -- a developer working on a new service doesn't have to go to Operations to get their service added to the mesh; they can do it themselves in a matter of seconds. Likewise, a developer can remove their service from the mesh, or merge services, or separate services, as needed, at their convenience. All of these operations are performed via Kubernetes resources or annotations, so they can easily integrate with your existing development workflow.

## Flexible canary deployments

[//]: # (+FIX+ Forge is no more)

Canary deployments are an essential component of cloud-native development workflows. In a canary deployment, a small percentage of production traffic is routed to a new version of a service to test it under real-world conditions. $productName$ allows developers to easily control and manage the amount of traffic routed to a given service through annotations. [This tutorial](https://www.datawire.io/faster/canary-workflow/) covers a complete canary workflow using the $productName$.

## Kubernetes-native architecture

[//]: # (+FIX+ we've come to realize that it's better to scale vertically)

$productName$ relies entirely on Kubernetes for reliability, availability, and scalability. For example, $productName$ persists all state in Kubernetes, instead of requiring a separate database.
Scaling the $productName$ is as simple as changing the replicas in your deployment, or using a [horizontal pod autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). + +$productName$ uses [Envoy](https://www.envoyproxy.io) for all traffic routing and proxying. Envoy is a modern L7 proxy that is used in production at companies including Lyft, Apple, Google, and Stripe. + +## gRPC and HTTP/2 support + +$productName$ fully supports gRPC and HTTP/2 routing, thanks to Envoy's extensive capabilities in this area. See [gRPC and $productName$](../../howtos/grpc) for more information. + +## Istio Integration + +$productName$ integrates with the [Istio](https://istio.io) service mesh as the edge proxy. In this configuration, $productName$ routes external traffic to the internal Istio service mesh. See [Istio and $productName$](../../howtos/istio) for details. + +## Authentication + +$productName$ supports authenticating incoming requests using a custom authentication service. When configured, the $productName$ will check with your external authentication service prior to routing an incoming request. For more information, see the [authentication guide](../../topics/running/services/auth-service). + +## Rate limiting + +$productName$ supports rate limiting incoming requests. When configured, the $productName$ will check with a third party rate limit service prior to routing an incoming request. For more information, see the [rate limiting guide](../../topics/using/rate-limits/). + +## Integrated UI + +$productName$ includes a diagnostics service so that you can quickly debug issues associated with configuring the $productName$. For more information, see [running $productName$ in Production](../../topics/running). diff --git a/docs/emissary/latest/about/known-issues.md b/docs/emissary/latest/about/known-issues.md new file mode 100644 index 000000000..6b89c65a8 --- /dev/null +++ b/docs/emissary/latest/about/known-issues.md @@ -0,0 +1,9 @@ +import Alert from '@material-ui/lab/Alert'; + +Known Issues in $productName$ +============================= + +## 2.2.1 + +- TLS certificates using elliptic curves were incorrectly flagged as invalid. This issue is + corrected in $productName$ 2.2.2. diff --git a/docs/emissary/latest/about/support.md b/docs/emissary/latest/about/support.md new file mode 100644 index 000000000..11927f951 --- /dev/null +++ b/docs/emissary/latest/about/support.md @@ -0,0 +1,27 @@ +# Need help? + +If you need help deploying $productName$ at your organization, there are several different options available to you. + +## Support tiers + +### $productName$ community support + +When running $OSSproductName$, or $AESproductName$ with free community licenses, [join our Slack channel](http://a8r.io/slack) to talk with other users in the community and get your questions answered. + +If you can’t find an answer there, [contact us](/contact-us) to learn more about the support options available with $AESproductName$ Enterprise. + +### $AESproductName$ Enterprise + +With $AESproductName$ Enterprise, you have access to deployment and production support. To learn more, [contact sales](/contact-us). + +**Deployment and Update Support**: $AESproductName$ can accelerate your migration to Kubernetes, or your upgrade between versions of $AESproductName$. Deployment support helps you with the $AESproductName$ and Kubernetes migration, before you move to production. 
+
+**Production Support**: We offer two types of production support contracts for users deploying the $AESproductName$ in production. We offer both business hour (8am - 5pm EST, M-F) and 24x7 Sev 1 support for the $AESproductName$. 24x7 Sev 1 support includes custom hotfix support for production outages if necessary.
+
+## File a GitHub issue
+
+If you see a bug you want to fix, see room for documentation improvements, or have something else you want to change, you can [file an issue on GitHub](https://github.com/datawire/ambassador/issues/new).
+
+## Pricing
+
+[Contact us](/contact-us) to learn how we can help, and for detailed pricing information.
diff --git a/docs/emissary/latest/about/why-ambassador.md b/docs/emissary/latest/about/why-ambassador.md
new file mode 100644
index 000000000..0d3439838
--- /dev/null
+++ b/docs/emissary/latest/about/why-ambassador.md
@@ -0,0 +1,54 @@
+# Why $productName$?
+
+$productName$ gives platform engineers a comprehensive, self-service edge stack for managing the boundary between end-users and Kubernetes. Built on the [Envoy Proxy](https://www.envoyproxy.io) and fully Kubernetes-native, $productName$ is made to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. A true edge stack, $productName$ can also be used to handle the functions of an API Gateway, a Kubernetes ingress controller, and a layer 7 load balancer (for more, see [this blog post](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d)).
+
+## How does $productName$ work?
+
+$productName$ is an open-source, Kubernetes-native [microservices API gateway](../../topics/concepts/microservices-api-gateways) built on the [Envoy Proxy](https://www.envoyproxy.io). $productName$ is built from the ground up to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. $productName$ can also be used to handle the functions of a Kubernetes ingress controller and load balancer (for more, see [this blog post](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d)).
+
+## Cloud-native applications today
+
+Traditional cloud applications were built using a monolithic approach. These applications were designed, coded, and deployed as a single unit. Today's cloud-native applications, by contrast, consist of many individual (micro)services. This results in an architecture that is:
+
+* __Heterogeneous__: Services are implemented using multiple (polyglot) languages, they are designed using multiple architecture styles, and they communicate with each other over multiple protocols.
+* __Dynamic__: Services are frequently updated and released (often without coordination), which results in a constantly-changing application.
+* __Decentralized__: Services are managed by independent product-focused teams, with different development workflows and release cadences.
+
+### Heterogeneous services
+
+$productName$ is commonly used to route traffic to a wide variety of services. It supports:
+
+* configuration on a *per-service* basis, enabling fine-grained control of timeouts, rate limiting, authentication policies, and more.
+* a wide range of L7 protocols natively, including HTTP, HTTP/2, gRPC, gRPC-Web, and WebSockets.
+* raw TCP routing for services that use protocols not directly supported by $productName$.
+
+### Dynamic services
+
+Service updates result in a constantly changing application.
The dynamic nature of cloud-native applications introduces new challenges around configuration updates, release, and testing. $productName$:
+
+* Enables [progressive delivery](../../topics/concepts/progressive-delivery), with support for canary routing and traffic shadowing.
+* Exposes high-resolution observability metrics, providing insight into service behavior.
+* Uses a zero-downtime configuration architecture, so configuration changes have no end-user impact.
+
+### Decentralized workflows
+
+Independent teams can create their own workflows for developing and releasing functionality that are optimized for their specific service(s). With $productName$, teams can:
+
+* Leverage a [declarative configuration model](../../topics/concepts/gitops-continuous-delivery), making it easy to understand the canonical configuration and implement GitOps-style best practices.
+* Independently configure different aspects of $productName$, eliminating the need to request configuration changes through a centralized operations team.
+
+## $productName$ is engineered for Kubernetes
+
+$productName$ takes full advantage of Kubernetes and Envoy Proxy.
+
+* All of the state required for $productName$ is stored directly in Kubernetes, eliminating the need for an additional database.
+* The $productName$ team has invested extensive engineering effort and integration testing to ensure that Envoy and Kubernetes deliver optimal performance and scale.
+
+## For more information
+
+[Deploy $productName$ today](../../tutorials/getting-started) and join the community [Slack Channel](http://a8r.io/slack).
+
+Interested in learning more?
+
+* [Why did we start building $productName$?](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844)
+* [$productName$ Architecture overview](../../topics/concepts/architecture)
diff --git a/docs/emissary/latest/community.md b/docs/emissary/latest/community.md
new file mode 100644
index 000000000..2b578891a
--- /dev/null
+++ b/docs/emissary/latest/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/emissary-ingress/emissary/blob/master/DevDocumentation/DEVELOPING.md)
+on GitHub to learn how you can help make Emissary-ingress better.
+
+## Changelog
+Our [changelog](https://github.com/emissary-ingress/emissary/blob/master/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Emissary-ingress.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/emissary-ingress/emissary/blob/master/Community/MEETING_SCHEDULE.md) for opportunities to interact with Emissary-ingress developers.
diff --git a/docs/emissary/latest/doc-links.yml b/docs/emissary/latest/doc-links.yml
new file mode 100644
index 000000000..58b81ad9c
--- /dev/null
+++ b/docs/emissary/latest/doc-links.yml
@@ -0,0 +1,229 @@
+- title: Quick start
+  link: /tutorials/getting-started
+- title: Core concepts
+  items:
+    - title: Kubernetes network architecture
+      link: /topics/concepts/kubernetes-network-architecture
+    - title: "The Ambassador operating model: GitOps and continuous delivery"
+      link: /topics/concepts/gitops-continuous-delivery
+    - title: Progressive delivery
+      link: /topics/concepts/progressive-delivery
+    - title: Microservices API gateways
+      link: /topics/concepts/microservices-api-gateways
+    - title: $productName$ architecture
+      link: /topics/concepts/architecture
+    - title: Rate limiting at the edge
+      link: /topics/concepts/rate-limiting-at-the-edge
+- title: Installation and updates
+  link: /topics/install/
+  items:
+    - title: Install with Helm
+      link: /topics/install/helm
+    - title: Install with Kubernetes YAML
+      link: /topics/install/yaml-install
+    - title: Try the demo with Docker
+      link: /topics/install/docker
+    - title: Upgrade or migrate to a newer version
+      link: /topics/install/migration-matrix
+- title: $productName$ user guide
+  items:
+    - title: Deployment
+      items:
+        - title: Deployment architecture
+          link: /topics/running/ambassador-deployment
+        - title: $productName$ environment variables and ports
+          link: /topics/running/environment
+        - title: $productName$ and Redis
+          link: /topics/running/aes-redis
+        - title: $productName$ with AWS
+          link: /topics/running/ambassador-with-aws
+        - title: $productName$ with GKE
+          link: /topics/running/ambassador-with-gke
+        - title: Advanced deployment configuration
+          link: /topics/running/running
+        - title: Performance and scaling $productName$
+          link: /topics/running/scaling
+        - title: Active health checking configuration
+          link: /howtos/active-health-checking
+    - title: HTTP/3 configuration
+      items:
+        - title: HTTP/3 setup in $productName$
+          link: /topics/running/http3
+        - title: HTTP/3 with AKS
+          link: /howtos/http3-aks
+        - title: HTTP/3 with EKS
+          link: /howtos/http3-eks
+        - title: HTTP/3 with GKE
+          link: /howtos/http3-gke
+    - title: Service routing and communication
+      items:
+        - title: Configuring $productName$ to communicate
+          link: /howtos/configure-communications
+        - title: Get traffic from the edge
+          link: /howtos/route
+        - title: TCP connections
+          link: /topics/using/tcpmappings
+        - title: gRPC connections
+          link: /howtos/grpc
+        - title: WebSocket connections
+          link: /howtos/websockets
+    - title: Authentication
+      items:
+        - title: Basic authentication
+          link: /howtos/basic-auth
+    - title: Rate limiting
+      items:
+        - title: Rate limiting service
+          link: /topics/running/services/rate-limit-service/
+        - title: Basic rate limiting
+          link: /topics/using/rate-limits/
+    - title: Service monitoring
+      items:
+        - title: Explore distributed tracing and Kubernetes monitoring
+          link: /howtos/dist-tracing
+        - title: Distributed tracing with Datadog
+          link: /howtos/tracing-datadog
+        - title: Distributed tracing with Zipkin
+          link: /howtos/tracing-zipkin
+        - title: Distributed tracing with LightStep
+          link: /howtos/tracing-lightstep
+        - title: Monitoring with Prometheus and Grafana
+          link: /howtos/prometheus
+        - title: Statistics
+          link: /topics/running/statistics
+        - title: Envoy statistics with StatsD
+          link: /topics/running/statistics/envoy-statsd
+        - title: The metrics endpoint
+          link: /topics/running/statistics/8877-metrics
+    - title: $productName$ integrations
+      items:
+        - title: Knative Serverless Framework
+          link: /howtos/knative
+        - title: ExternalDNS integration
+          link: /howtos/external-dns
+        - title: Consul integration
+          link: /howtos/consul
+        - title: Istio integration
+          link: /howtos/istio
+        - title: Linkerd 2 integration
+          link: /howtos/linkerd2
+- title: Technical reference
+  items:
+    - title: Custom resources
+      items:
+        - title: The Host resource
+          link: /topics/running/host-crd
+        - title: The Listener resource
+          link: /topics/running/listener
+        - title: The Module resource
+          link: /topics/running/ambassador
+        - title: The Mapping resource
+          link: /topics/using/intro-mappings
+        - title: Advanced Mapping configuration
+          link: /topics/using/mappings
+    - title: TLS configuration
+      items:
+        - title: TLS overview
+          link: /topics/running/tls/
+        - title: Cleartext support
+          link: /topics/running/tls/cleartext-redirection
+        - title: Mutual TLS (mTLS)
+          link: /topics/running/tls/mtls
+        - title: Server Name Indication (SNI)
+          link: /topics/running/tls/sni
+        - title: TLS origination
+          link: /topics/running/tls/origination
+        - title: TLS termination and enabling HTTPS
+          link: /howtos/tls-termination
+        - title: Using cert-manager
+          link: /howtos/cert-manager
+        - title: Client certificate validation
+          link: /howtos/client-cert-validation
+    - title: Ingress and load balancing
+      items:
+        - title: AuthService settings
+          link: /topics/using/authservice
+        - title: Automatic retries
+          link: /topics/using/retries
+        - title: Canary releases
+          link: /topics/using/canary
+        - title: Circuit Breakers
+          link: /topics/using/circuit-breakers
+        - title: Cross-Origin Resource Sharing (CORS)
+          link: /topics/using/cors
+        - title: Ingress controller
+          link: /topics/running/ingress-controller
+        - title: Load balancing
+          link: /topics/running/load-balancer
+        - title: Service discovery and resolvers
+          link: /topics/running/resolvers
+    - title: Headers
+      items:
+        - title: Headers overview
+          link: /topics/using/headers/headers
+        - title: Add request headers
+          link: /topics/using/headers/add_request_headers
+        - title: Remove request headers
+          link: /topics/using/headers/remove_request_headers
+        - title: Add response headers
+          link: /topics/using/headers/add_response_headers
+        - title: Remove response headers
+          link: /topics/using/headers/remove_response_headers
+        - title: Header-based routing
+          link: /topics/using/headers/headers
+        - title: Host header
+          link: /topics/using/headers/host
+    - title: Routing
+      items:
+        - title: Keepalive
+          link: /topics/using/keepalive
+        - title: Method-based routing
+          link: /topics/using/method
+        - title: Prefix regex
+          link: /topics/using/prefix_regex
+        - title: Query parameter-based routing
+          link: /topics/using/query_parameters/
+        - title: Redirects
+          link: /topics/using/redirects
+        - title: Rewrites
+          link: /topics/using/rewrites
+        - title: Timeouts
+          link: /topics/using/timeouts
+        - title: Traffic shadowing
+          link: /topics/using/shadowing
+    - title: Plug-in services
+      items:
+        - title: Authentication service
+          link: /topics/running/services/auth-service
+        - title: ExtAuth protocol
+          link: /topics/running/services/ext_authz
+        - title: Log service
+          link: /topics/running/services/log-service
+        - title: Tracing service
+          link: /topics/running/services/tracing-service
+    - title: Traffic management
+      items:
+        - title: Custom error responses
+          link: /topics/running/custom-error-responses
+        - title: Gzip compression
+          link: /topics/running/gzip
+- title: Diagnostics
+  link: /topics/running/diagnostics
+- title: FAQs
+  link: /about/faq
+- title: Troubleshooting
+  link: /topics/running/debugging
+- title: Known issues
+  link: /about/known-issues
+- title: Changes in $productName$ 2.X
+  link: /about/changes-2.x
+- title: Changes in $productName$ 3.X
+  link: /about/changes-3.y
+- title: Release Notes
+  link: /release-notes
+- title: Community
+  link: /community
+- title: End of Life Policy
+  link: /about/aes-emissary-eol
+- title: Licenses
+  link: licenses
[Elided: new image assets under docs/emissary/latest/features-icons/ (SVG and PNG icons for basic-authentication, canary-release, cors, datadog, diagnostics, distributed-tracing, grpc, prometheus, rate-limiting, regex-routing, request-transformers, shadowing, statsd, third-party-auth, timeouts, tls-termination, url-rewrite, and websockets); SVG markup and binary PNG contents omitted.]
diff --git a/docs/emissary/latest/howtos/active-health-checking.md b/docs/emissary/latest/howtos/active-health-checking.md
new file mode 100644
index 000000000..fb5decddf
--- /dev/null
+++ b/docs/emissary/latest/howtos/active-health-checking.md
@@ -0,0 +1,78 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Active Health Checking
+
+$productName$ provides support for active health checking of upstreams via the `Mapping` resource. Active health checking will configure Envoy to make requests to the upstream at a configurable interval. If the upstream does not respond with an expected status code, the upstream will be marked as unhealthy and Envoy will no longer route requests to it until it responds successfully to the health check.
+
+This feature can only be used with the [endpoint resolver](../../topics/running/resolvers#the-kubernetes-endpoint-resolver). This is necessary because the endpoint resolver allows Envoy to be aware of each individual pod in a deployment, as opposed to the [kubernetes service resolver](../../topics/running/resolvers#the-kubernetes-service-resolver), where Envoy is only aware of the upstream as a single endpoint. When Envoy is aware of the individual pods in a deployment, active health checks can mark a single pod as unhealthy while the remaining pods continue to serve requests.
+
+
+Active health checking configuration will only function with the endpoint resolver. If configuration for active health checking is provided on a Mapping that does not use the endpoint resolver, the health checking configuration will be ignored.
+
+
+## Active Health Checking Configuration
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: "example-mapping"
+  namespace: "example-namespace"
+spec:
+  hostname: "*"
+  prefix: /example/
+  service: quote
+  health_checks: list[object]              # optional
+  - unhealthy_threshold: int               # optional (default: 2)
+    healthy_threshold: int                 # optional (default: 1)
+    interval: duration                     # optional (default: 5s)
+    timeout: duration                      # optional (default: 3s)
+    health_check: object                   # required
+      http:
+        path: string                       # required
+        hostname: string                   # optional
+        remove_request_headers: list[string] # optional
+        add_request_headers: list[object]  # optional
+        - example-header-1:
+            append: bool                   # optional (default: true)
+            value: string                  # required
+        expected_statuses: list[object]    # optional
+        - max: int (100-599)               # required (only when using expected_statuses)
+          min: int (100-599)               # required (only when using expected_statuses)
+
+  - health_check: object                   # required
+      grpc:
+        authority: string                  # optional
+        upstream_name: string              # required
+...
+```
+
+### `health_checks` configuration
+
+`health_checks` configures a list of health checks to be run for the `Mapping` and provides several options for how the health check requests should be run.
+
+- `unhealthy_threshold`: The number of unexpected responses for an upstream pod to be marked as unhealthy. Regardless of the configuration of `unhealthy_threshold`, a single `503` response will mark the upstream as unhealthy until it passes the required number of health checks. This field is optional and defaults to `2`.
+- `healthy_threshold`: The number of expected responses for an unhealthy upstream pod to be marked as healthy and resume handling traffic. This field is optional and defaults to `1`.
+- `interval`: Specifies the interval for how frequently the health check request should be made. It is divided amongst the pods in a deployment. For example, an `interval` of `1s` on a deployment of 5 pods would result in each pod receiving a health check request about every 5 seconds. This field is optional and defaults to `5s` when not configured.
+- `timeout`: Configures the timeout for the health check requests to an upstream. If a health check request exceeds the timeout, it will be considered a failed check and count towards the `unhealthy_threshold`. This field is optional and defaults to `3s`.
+- `health_check`: This field is required and provides the configuration for how the health check requests should be made. Either `grpc` or `http` may be configured for this field, depending on whether an HTTP or gRPC health check is desired.
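+As a concrete illustration, here is a minimal sketch of a `Mapping` with a single HTTP health check. (The `quote` Service, its `/healthz` path, and the threshold values are hypothetical; `endpoint` refers to the pre-configured Kubernetes endpoint resolver.)
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-health-checked
+spec:
+  hostname: "*"
+  prefix: /quote/
+  service: quote            # hypothetical upstream Service
+  resolver: endpoint        # active health checking requires the endpoint resolver
+  health_checks:
+  - unhealthy_threshold: 3  # three failed checks mark a pod unhealthy
+    healthy_threshold: 2    # two passed checks bring it back into rotation
+    interval: 10s
+    timeout: 2s
+    health_check:
+      http:
+        path: /healthz      # hypothetical health endpoint on the upstream
+```
+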
+### HTTP `health_check` Configuration
+
+`health_check.http` configures HTTP health checks to the upstream.
+
+- `path`: This field is required and configures the path on the upstream service that the health check requests should be made to.
+- `hostname`: Configures the value of the host header in the health check request. This field is optional and defaults to using the name of the Envoy cluster this health check is associated with.
+- `expected_statuses`: Configures a range of response statuses from min to max (both inclusive). If the upstream returns any status in this range, it is considered a passed health check. This field is optional; by default, only `5xx` responses count as failed health checks and contribute towards the `unhealthy_threshold`.
+   - `max`: End of the statuses to include. Must be between 100 and 599 (inclusive).
+   - `min`: Start of the statuses to include. Must be between 100 and 599 (inclusive).
+- `remove_request_headers`: Configures a list of HTTP headers that should be removed from each health check request sent to the upstream.
+- `add_request_headers`: Configures a list of HTTP headers that should be added to each health check request sent to the upstream.
+
+### gRPC `health_check` Configuration
+
+`health_check.grpc` configures gRPC health checks to the upstream. Only two fields are configurable for gRPC health checks.
+
+- `authority`: Configures the value of the `:authority` header in the gRPC health check request. This field is optional; if left empty, the upstream name will be used.
+- `upstream_name`: This field is required and configures the upstream name parameter that will be sent to the gRPC service in the health check message.
diff --git a/docs/emissary/latest/howtos/basic-auth.md b/docs/emissary/latest/howtos/basic-auth.md
new file mode 100644
index 000000000..70ce27ce5
--- /dev/null
+++ b/docs/emissary/latest/howtos/basic-auth.md
@@ -0,0 +1,191 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Basic authentication (for $productName$)
+
+[//]: # (+FIX+ link to "authentication and authorization" concept)
+
+
+  This guide applies to $OSSproductName$; use of this guide with $AESproductName$ is not supported. $AESproductName$ does authentication using the Filter resource instead of the AuthService resource as described below.
+
+
+$productName$ can authenticate incoming requests before routing them to a backing
+service. In this tutorial, we'll configure $productName$ to use an external third-party
+authentication service. We also assume that you are running the
+quote application in your cluster, as described in the
+[$productName$ tutorial](../../tutorials/quickstart-demo/).
+ +## Before you get started + +This tutorial assumes you have already followed the $productName$ [Installation](../../topics/install/) guide. If you haven't done that already, you should do so now. + +Once complete, you'll have a Kubernetes cluster running $productName$. Let's walk through adding authentication to this setup. + +## 1. Deploy the authentication service + +$productName$ delegates the actual authentication logic to a third party authentication service. We've written a [simple authentication service](https://github.com/datawire/ambassador-auth-service) that: + +- listens for requests on port 3000; +- expects all URLs to begin with `/extauth/`; +- performs HTTP Basic Auth for all URLs starting with `/backend/get-quote/` (other URLs are always permitted); +- accepts only user `username`, password `password`; and +- makes sure that the `x-qotm-session` header is present, generating a new one if needed. + +$productName$ routes _all_ requests through the authentication service: it relies on the auth service to distinguish between requests that need authentication and those that do not. If $productName$ cannot contact the auth service, it will return a 503 for the request; as such, **it is very important to have the auth service running before configuring $productName$ to use it.** + +Here's the YAML we'll start with: + +```yaml +--- +apiVersion: v1 +kind: Service +metadata: + name: example-auth +spec: + type: ClusterIP + selector: + app: example-auth + ports: + - port: 3000 + name: http-example-auth + targetPort: http-api +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: example-auth +spec: + replicas: 1 + strategy: + type: RollingUpdate + selector: + matchLabels: + app: example-auth + template: + metadata: + labels: + app: example-auth + spec: + containers: + - name: example-auth + image: docker.io/datawire/ambassador-auth-service:2.0.0 + imagePullPolicy: Always + ports: + - name: http-api + containerPort: 3000 + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +Note that the cluster does not yet contain any $productName$ AuthService definition. This is intentional: we want the service running before we tell $productName$ about it. + +The YAML above is published at getambassador.io, so if you like, you can just do + +``` +kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$ossVersion$/demo/demo-auth.yaml +``` + +to spin everything up. (Of course, you can also use a local file, if you prefer.) + +Wait for the pod to be running before continuing. The output of `kubectl get pods` should look something like + +``` +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +example-auth-6c5855b98d-24clp 1/1 Running 0 4m +``` +Note that the `READY` field says `1/1` which means the pod is up and running. + +## 2. Configure $productName$ authentication + +Once the auth service is running, we need to tell $productName$ about it. The easiest way to do that is point it to the `example-auth` service with the following: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: AuthService +metadata: + name: authentication +spec: + auth_service: "example-auth:3000" + path_prefix: "/extauth" + allowed_request_headers: + - "x-qotm-session" + allowed_authorization_headers: + - "x-qotm-session" +``` + +This configuration tells $productName$ about the auth service, notably that it needs the `/extauth` prefix, and that it's OK for it to pass back the `x-qotm-session` header. Note that `path_prefix` and `allowed_*_headers` are optional. 
If the auth service uses a framework like [Gorilla Toolkit](http://www.gorillatoolkit.org) which enforces strict slashes as HTTP path separators, it is possible to end up with an infinite redirect in which the auth service's framework redirects any request with non-conformant slashing. This would arise if the above example had `path_prefix: "/extauth/"`: the auth service would see a request for `/extauth//backend/get-quote/`, which would then be redirected to `/extauth/backend/get-quote/` rather than actually handled by the authentication handler. For this reason, remember that the full path of the incoming request, including the leading slash, will be appended to `path_prefix` regardless of non-conformant slashing.
+
+You can apply this file from getambassador.io with
+
+```
+kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$ossVersion$/demo/demo-auth-enable.yaml
+```
+
+or, again, apply it from a local file if you prefer.
+
+## 3. Test authentication
+
+If we `curl` to a protected URL:
+
+```
+$ curl -Lv $AMBASSADORURL/backend/get-quote/
+```
+
+We get a 401 since we haven't authenticated.
+
+```
+* TCP_NODELAY set
+* Connected to 54.165.128.189 (54.165.128.189) port 32281 (#0)
+> GET /backend/get-quote/ HTTP/1.1
+> Host: 54.165.128.189:32281
+> User-Agent: curl/7.63.0
+> Accept: */*
+>
+< HTTP/1.1 401 Unauthorized
+< www-authenticate: Basic realm="Ambassador Realm"
+< content-length: 0
+< date: Thu, 23 May 2019 15:24:55 GMT
+< server: envoy
+<
+* Connection #0 to host 54.165.128.189 left intact
+```
+
+If we authenticate to the service, we will get a quote successfully:
+
+```
+$ curl -Lv -u username:password $AMBASSADORURL/backend/get-quote/
+
+* TCP_NODELAY set
+* Connected to 54.165.128.189 (54.165.128.189) port 32281 (#0)
+* Server auth using Basic with user 'username'
+> GET /backend/get-quote/ HTTP/1.1
+> Host: 54.165.128.189:32281
+> Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+> User-Agent: curl/7.63.0
+> Accept: */*
+>
+< HTTP/1.1 200 OK
+< content-type: application/json
+< date: Thu, 23 May 2019 15:25:06 GMT
+< content-length: 172
+< x-envoy-upstream-service-time: 0
+< server: envoy
+<
+{
+    "server": "humble-blueberry-o2v493st",
+    "quote": "Nihilism gambles with lives, happiness, and even destiny itself!",
+    "time": "2019-05-23T15:25:06.544417902Z"
+* Connection #0 to host 54.165.128.189 left intact
+}
+```
+
+## More
+
+For more details about configuring authentication, see the [`External` filter](/docs/edge-stack/latest/topics/using/filters/) documentation.
diff --git a/docs/emissary/latest/howtos/cert-manager.md b/docs/emissary/latest/howtos/cert-manager.md
new file mode 100644
index 000000000..d742decd9
--- /dev/null
+++ b/docs/emissary/latest/howtos/cert-manager.md
@@ -0,0 +1,230 @@
+# Using cert-manager
+
+[//]: # (+FIX+ link to "TLS and certificates" concept)
+
+$AESproductName$ has easy, built-in support for automatically [using ACME] with the
+`http-01` challenge to create and renew TLS certificates. However, this support is not available
+in $OSSproductName$, and it is limited to the ACME `http-01` challenge type. If you're running
+$OSSproductName$, or if you require more flexible certificate management (such as using ACME's
+`dns-01` challenge, or using a non-ACME certificate source), external certificate management
+tools are also supported.
+
+[using ACME]: ../../topics/running/host-crd
+
+One such tool is Jetstack's [cert-manager](https://github.com/jetstack/cert-manager), which is a general-purpose tool
+for managing certificates in Kubernetes. Cert-manager will automatically create and renew TLS certificates and store
+them as Kubernetes secrets for easy use in a cluster. $productName$ will automatically watch for secret
+changes and reload certificates upon renewal.
+
+> **Note:** This document assumes cert-manager v0.15 or greater. This document has been updated to use CRD standards
+> specified in v0.15. [Legacy CRD support](https://cert-manager.io/docs/installation/upgrading/upgrading-0.14-0.15/)
+> was removed in cert-manager v0.15; see their [upgrading](https://cert-manager.io/docs/installation/upgrading/)
+> document for more info.
+
+## Install cert-manager
+
+There are many different ways to [install cert-manager](https://cert-manager.io/docs/installation/). For simplicity, we will use Helm.
+
+1. Create the cert-manager CRDs.
+   ```
+   kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.crds.yaml
+   ```
+
+2. Add the `jetstack` Helm repository.
+   ```
+   helm repo add jetstack https://charts.jetstack.io && helm repo update
+   ```
+
+3. Install cert-manager.
+
+   ```
+   kubectl create ns cert-manager
+   helm install cert-manager --namespace cert-manager jetstack/cert-manager
+   ```
+
+## Issuing certificates
+
+cert-manager issues certificates from a CA such as [Let's Encrypt](https://letsencrypt.org/). It does this using the ACME protocol, which supports various challenge mechanisms for verifying ownership of the domain.
+
+### Issuer
+
+An `Issuer` or `ClusterIssuer` identifies which Certificate Authority cert-manager will use to issue a certificate. `Issuer` is a namespaced resource allowing you to use different CAs in each namespace; a `ClusterIssuer` is used to issue certificates in any namespace. Configuration depends on which ACME [challenge](#challenge) you are using.
+
+### Certificate
+
+A [Certificate](https://cert-manager.io/docs/concepts/certificate/) is a namespaced resource that references an `Issuer` or `ClusterIssuer` for issuing certificates. `Certificate`s define the DNS name(s) a key and certificate should be issued for, as well as the secret to store those files (e.g. `ambassador-certs`). Configuration depends on which ACME [challenge](#challenge) you are using.
+
+By duplicating issuers, certificates, and secrets, one can support multiple domains with [SNI](../../topics/running/tls/sni).
+
+## Challenge
+
+cert-manager supports two kinds of ACME challenges that verify domain ownership in different ways: HTTP-01 and DNS-01.
+
+### DNS-01 challenge
+
+The DNS-01 challenge verifies domain ownership by proving you have control over its DNS records. Issuer configuration will depend on your [DNS provider](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers). This example uses [AWS Route53](https://cert-manager.io/docs/configuration/acme/dns01/route53/).
+
+1. Create the IAM policy specified in the cert-manager [AWS Route53](https://cert-manager.io/docs/configuration/acme/dns01/route53/) documentation.
+
+2. Note the `accessKeyID` and create a `Secret` named `prod-route53-credentials-secret` in the cert-manager namespace with a key named `secret-access-key` containing your AWS IAM secret access key.
+
+3. Create and apply a `ClusterIssuer`.
+ + ```yaml + --- + apiVersion: cert-manager.io/v1alpha2 + kind: ClusterIssuer + metadata: + name: letsencrypt-prod + spec: + acme: + email: email@example.com + server: https://acme-v02.api.letsencrypt.org/directory + privateKeySecretRef: + name: letsencrypt-prod + solvers: + - selector: + dnsZones: + - "myzone.route53.com" + dns01: + route53: + region: us-east-1 + accessKeyID: {accessKeyID} + hostedZoneID: {Hosted Zone ID} # optional, allows you to reduce the scope of permissions in Amazon IAM + secretAccessKeySecretRef: + name: prod-route53-credentials-secret + key: secret-access-key + ``` + +4. Create and apply a `Certificate`. + + ```yaml + --- + apiVersion: cert-manager.io/v1alpha2 + kind: Certificate + metadata: + name: myzone.route53.com + # cert-manager will put the resulting Secret in the same Kubernetes + # namespace as the Certificate. You should create the certificate in + # whichever namespace you want to configure a Host. + spec: + secretName: ambassador-certs + issuerRef: + name: letsencrypt-prod + kind: ClusterIssuer + commonName: myzone.route53.com + dnsNames: + - myzone.route53.com + ``` + +5. Verify the secret is created. + + ``` + $ kubectl get secrets -n ambassador + NAME TYPE DATA AGE + ambassador-certs kubernetes.io/tls 2 1h + ``` + +### HTTP-01 challenge + +The HTTP-01 challenge verifies ownership of the domain by sending a request for a specific file on that domain. cert-manager accomplishes this by sending a request to a temporary pod with the prefix `/.well-known/acme-challenge/`. To perform this challenge: + +1. Create and apply a `ClusterIssuer`. + + ```yaml + --- + apiVersion: cert-manager.io/v1alpha2 + kind: ClusterIssuer + metadata: + name: letsencrypt-prod + spec: + acme: + email: email@example.com + server: https://acme-v02.api.letsencrypt.org/directory + privateKeySecretRef: + name: letsencrypt-prod + solvers: + - http01: + ingress: + class: nginx + selector: {} + ``` + +2. Create and apply a `Certificate`. + + ```yaml + --- + apiVersion: cert-manager.io/v1alpha2 + kind: Certificate + metadata: + name: ambassador-certs + # cert-manager will put the resulting Secret in the same Kubernetes + # namespace as the Certificate. You should create the certificate in + # whichever namespace you want to configure a Host. + namespace: ambassador + spec: + secretName: ambassador-certs + issuerRef: + name: letsencrypt-prod + kind: ClusterIssuer + dnsNames: + - example.com + ``` + +3. Apply both the `ClusterIssuer` and `Certificate` + + After applying both of these YAML manifests, you will notice that cert-manager has spun up a temporary pod named `cm-acme-http-solver-xxxx` but no certificate has been issued. Check the cert-manager logs and you will see a log message that looks like this: + + ``` + $ kubectl logs cert-manager-756d6d885d-v7gmg + ... + Preparing certificate default/ambassador-certs with issuer + Calling GetOrder + Calling GetAuthorization + Calling HTTP01ChallengeResponse + Cleaning up old/expired challenges for Certificate default/ambassador-certs + Calling GetChallenge + wrong status code '404' + Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=161156668,certmanager.k8s.io/acme-http-token=1100680922 + Error preparing issuer for certificate default/ambassador-certs: http-01 self check failed for domain "example.com + ``` + +4. Create a Mapping for the `/.well-known/acme-challenge/` route. + + cert-manager uses an `Ingress` to issue the challenge to `/.well-known/acme-challenge/` that is incompatible with Ambassador. 
We will need to create a `Mapping` so that cert-manager can reach the temporary pod.
+
+   ```yaml
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: acme-challenge-mapping
+   spec:
+     hostname: "*"
+     prefix: /.well-known/acme-challenge/
+     rewrite: ""
+     service: acme-challenge-service
+
+   ---
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: acme-challenge-service
+   spec:
+     ports:
+     - port: 80
+       targetPort: 8089
+     selector:
+       acme.cert-manager.io/http01-solver: "true"
+   ```
+
+   Apply the YAML and wait a couple of minutes. cert-manager will retry the challenge and issue the certificate.
+
+5. Verify the secret is created:
+
+   ```
+   $ kubectl get secrets
+   NAME                     TYPE                                  DATA      AGE
+   ambassador-certs         kubernetes.io/tls                     2         1h
+   ambassador-token-846d5   kubernetes.io/service-account-token   3         2h
+   ```
diff --git a/docs/emissary/latest/howtos/client-cert-validation.md b/docs/emissary/latest/howtos/client-cert-validation.md
new file mode 100644
index 000000000..10fe639d7
--- /dev/null
+++ b/docs/emissary/latest/howtos/client-cert-validation.md
@@ -0,0 +1,110 @@
+# Client certificate validation
+
+[//]: # (+FIX+ link to "TLS and client certs" concept)
+
+Sometimes, for additional security or authentication purposes, you will want
+the server to validate who the client is before establishing an encrypted
+connection.
+
+To support this, $productName$ can be configured to use a provided CA certificate
+to validate certificates sent from your clients. This allows for client-side
+mTLS where both $productName$ and the client provide and validate each other's
+certificates.
+
+## Prerequisites
+
+- [openssl](https://www.openssl.org/source/) for creating client certificates
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+- [$productName$](../../tutorials/getting-started)
+- [cURL](https://curl.haxx.se/download.html)
+
+## Configuration
+
+1. Create a certificate and key.
+
+   This can be done with a single command with `openssl`:
+
+   ```
+   openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
+   ```
+
+   Enter a passphrase for the PEM files and fill in the certificate information.
+   Since this certificate will only be shared between a client and $productName$,
+   the Common Name must be set (any value will do). Everything else can be left blank.
+
+   **Note:** If using macOS,
+   [you must](https://curl.haxx.se/mail/archive-2014-10/0053.html)
+   add the certificate and key as a PKCS#12-encoded file to your Keychain. To do
+   this:
+
+   1. Encode `cert.pem` and `key.pem` created above in PKCS#12 format
+
+      ```
+      openssl pkcs12 -inkey key.pem -in cert.pem -export -out certificate.p12
+      ```
+
+   2. Open "Keychain Access" on your system and select "File"->"Import Items..."
+
+   3. Navigate to your working directory and select the `certificate.p12` file
+      we just created above.
+
+2. Create a secret to hold the client CA certificate.
+
+   ```
+   kubectl create secret generic client-cacert --from-file=tls.crt=cert.pem
+   ```
+
+3. Configure $productName$ to use this certificate for client certificate validation.
+
+   First create a `Host` to manage your domain:
+
+   ```yaml
+   apiVersion: getambassador.io/v3alpha1
+   kind: Host
+   metadata:
+     name: example-host
+   spec:
+     hostname: host.example.com
+     acmeProvider:
+       email: julian@example.com
+   ```
+
+   Then create a `TLSContext` to configure advanced TLS options like client
+   certificate validation:
+
+   ```yaml
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: TLSContext
+   metadata:
+     name: example-host-context
+   spec:
+     hosts:
+     - host.example.com
+     secret: host.example.com
+     ca_secret: client-cacert
+     cert_required: false # Optional: Configures $productName$ to reject the request if the client does not provide a certificate. Default: false
+   ```
+
+   **Note**: Client certificate validation requires that $productName$ be configured to terminate TLS.
+
+   $productName$ is now configured to validate certificates that the client provides.
+
+4. Test that $productName$ is validating the client certificates with `curl`.
+
+   **Linux**:
+   ```
+   curl -v --cert cert.pem --key key.pem https://host.example.com/
+   ```
+
+   **macOS**:
+   ```
+   curl -v --cert certificate.p12:[password] https://host.example.com/
+   ```
+
+   Looking through the verbose output, you can see we are sending a client
+   certificate and $productName$ is validating it.
+
+   If you need further proof, simply create a new set of certificates and
+   try sending the same request with those. You will see $productName$ deny the request.
diff --git a/docs/emissary/latest/howtos/configure-communications.md b/docs/emissary/latest/howtos/configure-communications.md
new file mode 100644
index 000000000..1ac09d2cb
--- /dev/null
+++ b/docs/emissary/latest/howtos/configure-communications.md
@@ -0,0 +1,763 @@
+import Alert from '@material-ui/lab/Alert';
+
+Configuring $productName$ Communications
+========================================
+
+For $productName$ to do its job of managing network communications for your services, it first needs to know how its own communications should be set up. This is handled by a combination of resources: the `Listener`, the `Host`, and the `TLSContext`.
+
+- `Listener`: defines where, and how, $productName$ should listen for requests from the network.
+- `Host`: defines which hostnames $productName$ should care about, and how to handle different kinds of requests for those hosts. `Host`s can be associated with one or more `Listener`s.
+- `TLSContext`: defines whether, and how, $productName$ will manage TLS certificates and options. `TLSContext`s can be associated with one or more `Host`s.
+
+Once the basic communications setup is in place, $productName$ `Mapping`s and `TCPMapping`s can be associated with `Host`s to actually do routing.
+
+
+  Remember that Listener and Host resources are
+  required for a functioning $productName$ installation that can route traffic!
+ Learn more about Listener.
+ Learn more about Host. +
+
+
+  Remember that $productName$ does not make sure that a wildcard Host exists! If the
+  wildcard behavior is needed, a Host with a hostname of "*"
+  must be defined by the user.
+
+
+
+  Several different resources work together to configure communications. A working knowledge of all of them
+  can be very useful:
+ Learn more about Listener.
+ Learn more about Host.
+ Learn more about Mapping.
+ Learn more about TCPMapping.
+ Learn more about TLSContext. +
+ +A Note on TLS +------------- + +[TLS] can appear intractable if you haven't set up [certificates] correctly. If you're +having trouble with TLS, always [check the logs] of your $productName$ Pods and look for +certificate errors. + +[TLS]: ../../topics/running/tls +[certificates]: ../../topics/running/tls#certificates-and-secrets +[check the logs]: ../../topics/running/debugging#review-logs + +Examples / Cookbook +------------------- + +### Basic HTTP and HTTPS + +A useful configuration is to support either HTTP or HTTPS, in this case on either port 8080 or port 8443. This +tends to make it as easy as possible to communicate with the services behind the $productName$ instance. It uses +two `Listener`s and at least one `Host`. + + +#### `Listener`s: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: http-listener +spec: + port: 8080 + protocol: HTTPS # NOT A TYPO, see below + securityModel: XFP + hostBinding: + namespace: + from: SELF # See below +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: https-listener +spec: + port: 8443 + protocol: HTTPS + securityModel: XFP + hostBinding: + namespace: + from: SELF # See below +``` + +- Both `Listener`s use `protocol: HTTPS` to allow Envoy to inspect incoming connections, determine + whether or not TLS is in play, and set `X-Forwarded-Proto` appropriately. The `securityModel` then specifies + that `X-Forwarded-Proto` will determine whether requests will be considered secure or insecure. + +- The `hostBinding` shown here will allow any `Host` in the same namespace as the `Listener`s + to be associated with both `Listener`s; in turn, that will allow access to that `Host`'s + `Mapping`s from either port. For greater control, use a `selector` instead. + +- Note that the `Listener`s do not specify anything about TLS certificates. The `Host` + handles that; see below. + + + Learn more about Listener + + +#### `Host` + +This example will assume that we expect to be reachable as `foo.example.com`, and that the `foo.example.com` +certificate is stored in the Kubernetes `Secret` named `foo-secret`: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: foo-host +spec: + hostname: "foo.example.com" + tlsSecret: + name: foo-secret + requestPolicy: + insecure: + action: Redirect +``` + +- The `tlsSecret` specifies the certificate in use for TLS termination. +- The `requestPolicy` specifies routing HTTPS and redirecting HTTP to HTTPS. +- Since the `Host` does not specify a `selector`, only `Mapping`s with a `hostname` that matches + `foo.example.com` will be associated with this `Host`. +- **Note well** that simply defining a `TLSContext` is not sufficient to terminate TLS: you must define either + a `Host` or an `Ingress`. +- Note that if no `Host` is present, but a TLS secret named `fallback-secret` is available, the system will + currently define a `Host` using `fallback-secret`. **This behavior is subject to change.** + + + Learn more about Host + + +### HTTP-Only + +Another straightforward configuration is to support only HTTP, in this case on port 8080. 
This uses a single +`Listener` and a single `Host`: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: http-listener +spec: + port: 8080 + protocol: HTTP + securityModel: INSECURE + hostBinding: + namespace: + from: SELF # See below +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: foo-host +spec: + hostname: "foo.example.com" + requestPolicy: + insecure: + action: Route +``` + +- Here, we listen only on port 8080, and only for HTTP. HTTPS will be rejected. +- Since requests are only allowed using HTTP, we declare all requests `INSECURE` by definition. +- The `Host` specifies routing HTTP, rather than redirecting it. + + + + Learn more about Listener
+ Learn more about Host +
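+In both of the configurations above, the actual routing comes from `Mapping`s associated with the `Host`. As a minimal sketch (the `quote` Service is a hypothetical stand-in for your own backend):
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: "foo.example.com"   # matches the Host above, so this route attaches to it
+  prefix: /backend/
+  service: quote                # hypothetical upstream Service
+```
+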
### TLS using ACME ($AESproductName$ only)
+
+This scenario uses ACME to get certificates for `foo.example.com` and `bar.example.com`. HTTPS traffic to either
+host is routed; HTTP traffic to `foo.example.com` will be redirected to HTTPS, but HTTP traffic to `bar.example.com`
+will be rejected outright.
+
+Since this example uses ACME, **it is only supported in $AESproductName$**.
+
+For demonstration purposes, we show this example listening for HTTPS on port 9999, using `X-Forwarded-Proto`.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: https-listener-9999
+spec:
+  port: 9999
+  protocol: HTTPS
+  securityModel: XFP
+  hostBinding:    # Edit as needed
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: foo-host
+spec:
+  hostname: foo.example.com
+  acmeProvider:
+    email: julian@example.com
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: bar-host
+spec:
+  hostname: bar.example.com
+  acmeProvider:
+    email: julian@example.com
+  requestPolicy:
+    insecure:
+      action: Reject
+```
+
+(`Mapping`s are not shown.)
+
+- Our `Listener` will accept both HTTPS and HTTP on port 9999, and the protocol used will dictate whether
+  requests are considered secure (HTTPS) or insecure (HTTP).
+- `foo-host` defaults to ACME with Let's Encrypt, since `acmeProvider.authority` is not provided.
+- `foo-host` defaults to redirecting insecure requests, since the default for `requestPolicy.insecure.action` is `Redirect`.
+- `bar-host` uses Let's Encrypt as well, but it will reject insecure requests.
+
+**If you use ACME for multiple Hosts, add a wildcard Host too.**
+This is required to manage a known issue. This issue will be resolved in a future Ambassador Edge Stack release.
+
+
+  Learn more about Listener
+ Learn more about Host +
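+Per the note above, a wildcard `Host` might look like the following sketch (the `Reject` action is an assumption; choose whatever insecure-request policy fits your setup):
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: wildcard-host
+spec:
+  hostname: "*"        # catch-all for hostnames not matched by other Hosts
+  requestPolicy:
+    insecure:
+      action: Reject   # assumption; Route or Redirect may fit your needs better
+```
+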
### Multiple TLS Certificates
+
+This scenario uses TLS without ACME. Each of our two `Host`s uses a distinct TLS certificate. HTTPS
+traffic to either `foo.example.com` or `bar.example.com` is routed, but this time `foo.example.com` will redirect
+HTTP requests, while `bar.example.com` will route them.
+
+Since this example does not use ACME, it is supported in $productName$ as well as $AESproductName$.
+
+For demonstration purposes, we show this example listening for HTTPS on port 4848, using `X-Forwarded-Proto`.
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+type: kubernetes.io/tls
+metadata:
+  name: foo-example-secret
+data:
+  tls.crt: -certificate PEM-
+  tls.key: -secret key PEM-
+---
+apiVersion: v1
+kind: Secret
+type: kubernetes.io/tls
+metadata:
+  name: bar-example-secret
+data:
+  tls.crt: -certificate PEM-
+  tls.key: -secret key PEM-
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: https-listener-4848
+spec:
+  port: 4848
+  protocol: HTTPS
+  securityModel: XFP
+  hostBinding:    # Edit as needed
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: foo-host
+spec:
+  hostname: foo.example.com
+  tlsSecret:
+    name: foo-example-secret
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: bar-host
+spec:
+  hostname: bar.example.com
+  tlsSecret:
+    name: bar-example-secret
+  requestPolicy:
+    insecure:
+      action: Route
+```
+
+- `foo-host` and `bar-host` simply reference the `tlsSecret` to use for termination.
+   - If the secret involved contains a wildcard cert, or a cert with multiple SANs, both `Host`s could
+     reference the same `tlsSecret`.
+- `foo-host` relies on the default insecure routing action of `Redirect`.
+- `bar-host` must explicitly specify routing HTTP.
+
+
+  Learn more about Listener
+ Learn more about Host +
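+The `hostBinding`s above associate every `Host` in the namespace with the `Listener`. For greater control, a label `selector` can be used instead; as a sketch, where `exposed-by: listener-4848` is a hypothetical label that the `Host`s would also need to carry in their `metadata.labels`:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: https-listener-4848
+spec:
+  port: 4848
+  protocol: HTTPS
+  securityModel: XFP
+  hostBinding:
+    selector:
+      matchLabels:
+        exposed-by: listener-4848   # hypothetical label shared with the Hosts
+```
+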
+ +### Using a `TLSContext` + +If you need to share other TLS settings between two `Host`s, you can reference a `TLSContext` as well as +the `tlsSecret`. This is the same as the previous example, but we use a `TLSContext` to set `ALPN` information, +and we assume that the `Secret` contains a wildcard cert. + +```yaml +--- +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: wildcard-example-secret +data: + tls.crt: -wildcard here- + tls.key: -wildcard here- +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: example-context +spec: + secret: wildcard-example-secret + alpn_protocols: [h2, istio] +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: https-listener-4848 +spec: + port: 4848 + protocol: HTTPS + securityModel: XFP + hostBinding: # Edit as needed + namespace: + from: SELF +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: foo-host +spec: + hostname: foo.example.com + tlsContext: + name: example-context + tlsSecret: + name: wildcard-example-secret +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: bar-host +spec: + hostname: bar.example.com + tlsContext: + name: example-context + tlsSecret: + name: wildcard-example-secret + requestPolicy: + insecure: + action: Route +``` + +- Note that specifying the `tlsSecret` is still necessary, even when `tlsContext` is specified. + + + + Learn more about Listener
+ Learn more about Host +
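+A shared `TLSContext` can carry more than ALPN settings. As a sketch, the same context could also pin the accepted TLS versions using the `min_tls_version` and `max_tls_version` fields:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: example-context
+spec:
+  secret: wildcard-example-secret
+  alpn_protocols: [h2, istio]
+  min_tls_version: v1.2    # refuse TLS 1.0 and 1.1 handshakes
+  max_tls_version: v1.3
+```
+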
+ +### ACME With a TLSContext ($AESproductName$ Only) + +In $AESproductName$, you can use a `TLSContext` with ACME as well. This example is the same as "TLS using ACME", +but we use a `TLSContext` to set `ALPN` information. Again, ACME is only supported in $AESproductName$. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: example-context +spec: + secret: example-acme-secret + alpn_protocols: [h2, istio] +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: foo-host +spec: + hostname: foo.example.com + acmeProvider: + email: julian@example.com + tlsContext: + name: example-context + tlsSecret: + name: example-acme-secret +``` + +- Note that we don't provide the `Secret`: the ACME client will create it for us. + + + + Learn more about Listener
+ Learn more about Host +
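+
+Since the ACME client creates the `Secret` itself, a simple way to see that the challenge has
+completed is to look for that `Secret` (it may take a minute or two to appear after the `Host`
+is created):
+
+```shell
+kubectl get secret example-acme-secret -o jsonpath='{.type}'
+# expect: kubernetes.io/tls
+```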
+
+### Using an L7 Load Balancer to Terminate TLS
+
+In this scenario, a layer 7 load balancer ahead of $productName$ will terminate TLS, so $productName$ will always see HTTP with a known good `X-Forwarded-Proto`. We'll use that to route HTTPS and redirect HTTP.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: lb-listener
+spec:
+  port: 8443
+  protocol: HTTP
+  securityModel: XFP
+  l7Depth: 1
+  hostBinding:   # This may well need editing for your case!
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: foo-host
+spec:
+  hostname: "foo.example.com"
+  requestPolicy:
+    insecure:
+      action: Redirect
+```
+
+- We set `l7Depth` to 1 to indicate that there's a single trusted L7 load balancer ahead of us.
+- We specifically set this `Listener` to HTTP-only, but we stick with port 8443 because people who are
+  setting up TLS generally expect to use that port. (There's nothing special about the port number;
+  pick whatever you like.)
+- Our `Host` does not specify a `tlsSecret`, so $productName$ will not try to terminate TLS.
+- Since the `Listener` still pays attention to `X-Forwarded-Proto`, both secure and insecure requests
+  are possible, and we use the `Host` to route HTTPS and redirect HTTP.
+
+
+  Learn more about Listener
+ Learn more about Host +
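+
+Because `l7Depth: 1` tells $productName$ to trust `X-Forwarded-Proto` from one hop, you can stand
+in for the load balancer yourself when testing (with `$EDGE_STACK_IP` as a placeholder address):
+
+```shell
+# Claim the original request was HTTPS: treated as secure, so it is routed.
+curl -si http://$EDGE_STACK_IP:8443/ -H "Host: foo.example.com" -H "X-Forwarded-Proto: https" | head -1
+
+# Claim it was HTTP: treated as insecure, so foo-host redirects it.
+curl -si http://$EDGE_STACK_IP:8443/ -H "Host: foo.example.com" -H "X-Forwarded-Proto: http" | head -1
+```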
+
+### Using a Split L4 Load Balancer to Terminate TLS
+
+Here, we assume that $productName$ is behind a load balancer setup that handles TLS at layer 4:
+
+- Incoming cleartext traffic is forwarded to $productName$ on port 8080.
+- Incoming TLS traffic is terminated at the load balancer, then forwarded to $productName$ _as cleartext_ on port 8443.
+- This might involve multiple L4 load balancers, but the actual number doesn't matter.
+- The actual port numbers we use don't matter either, as long as $productName$ and the load balancer(s) agree on which port is for which traffic.
+
+We're going to route HTTPS for both `foo.example.com` and `bar.example.com`, redirect HTTP for `foo.example.com`, and reject HTTP for `bar.example.com`.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: split-lb-one-listener
+spec:
+  protocol: HTTP
+  port: 8080
+  securityModel: INSECURE
+  hostBinding:   # This may well need editing for your case!
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: split-lb-two-listener
+spec:
+  protocol: HTTP
+  port: 8443
+  securityModel: SECURE
+  hostBinding:   # This may well need editing for your case!
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: foo-host
+spec:
+  hostname: foo.example.com
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: bar-host
+spec:
+  hostname: bar.example.com
+  requestPolicy:
+    insecure:
+      action: Reject
+```
+
+- Since L4 load balancers cannot set `X-Forwarded-Proto`, we don't use it at all here. Instead, we
+  dictate that ports 8080 and 8443 both speak cleartext HTTP, but that everything arriving at port
+  8080 is insecure and everything arriving at port 8443 is secure.
+
+
+  Learn more about Listener
+ Learn more about Host +
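+
+With the ports themselves declaring security, testing is just a matter of which port you hit
+(`$EDGE_STACK_IP` is a placeholder; expect an error status for the rejected case):
+
+```shell
+# Port 8080 is INSECURE: foo redirects HTTP, bar rejects it outright.
+curl -si http://$EDGE_STACK_IP:8080/ -H "Host: foo.example.com" | head -1
+curl -si http://$EDGE_STACK_IP:8080/ -H "Host: bar.example.com" | head -1
+
+# Port 8443 is SECURE: requests to either hostname are routed.
+curl -si http://$EDGE_STACK_IP:8443/ -H "Host: bar.example.com" | head -1
+```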
+ +### Listening on Multiple Ports + +There's no reason you need to use ports 8080 and 8443, or that you're limited to two ports. Here we'll use ports 9001 and 9002 for HTTP, and port 4001 for HTTPS. We'll route traffic irrespective of protocol. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: listener-9001 +spec: + protocol: HTTP + port: 9001 + securityModel: XFP + hostBinding: # This may well need editing for your case! + namespace: + from: SELF +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: listener-9002 +spec: + protocol: HTTP + port: 9002 + securityModel: XFP + hostBinding: # This may well need editing for your case! + namespace: + from: SELF +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: listener-4001 +spec: + protocol: HTTPS + port: 4001 + securityModel: XFP + hostBinding: # This may well need editing for your case! + namespace: + from: SELF +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: route-host +spec: + requestPolicy: + insecure: + action: Route +``` + +- We can use `X-Forwarded-Proto` for all our `Listener`s: the HTTP-only `Listener`s will set it correctly. +- Each `Listener` can specify only one port, but there's no hardcoded limit on the number of `Listener`s you can have. + + + + Learn more about Listener
+ Learn more about Host +
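+
+Since each `Listener` is an ordinary custom resource, you can confirm that all three are present
+with `kubectl`:
+
+```shell
+kubectl get listeners.getambassador.io
+```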
+
+### Using Labels to Associate `Host`s and `Listener`s
+
+In the examples above, the `Listener`s all associate with any `Host` in their namespace. In this
+example, we will use Kubernetes labels to control the association instead.
+
+Here, we'll listen for HTTP to `foo.example.com` on port 8888, and for either HTTP or HTTPS to `bar.example.com` on
+port 9999 (where we'll redirect HTTP to HTTPS). Traffic to `baz.example.com` will work on both ports, and we'll route
+HTTP for it rather than redirecting.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: listener-8888
+spec:
+  protocol: HTTP
+  port: 8888
+  securityModel: XFP
+  hostBinding:
+    selector:
+      matchExpressions:
+        - key: tenant
+          operator: In
+          values: ["foo", "baz"]
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: listener-9999
+spec:
+  protocol: HTTPS
+  port: 9999
+  securityModel: XFP
+  hostBinding:
+    selector:
+      matchExpressions:
+        - key: tenant
+          operator: In
+          values: ["bar", "baz"]
+---
+apiVersion: v1
+kind: Secret
+type: kubernetes.io/tls
+metadata:
+  name: bar-secret
+data:
+  tls.crt: -wildcard here-
+  tls.key: -wildcard here-
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: foo-host
+  labels:
+    tenant: foo
+spec:
+  hostname: foo.example.com
+  requestPolicy:
+    insecure:
+      action: Route
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: bar-host
+  labels:
+    tenant: bar
+spec:
+  hostname: bar.example.com
+  tlsSecret:
+    name: bar-secret
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: baz-host
+  labels:
+    tenant: baz
+spec:
+  hostname: baz.example.com
+  tlsSecret:
+    name: bar-secret
+  requestPolicy:
+    insecure:
+      action: Route
+```
+
+- Note the `labels` on each `Host`, which the `hostBinding` on the `Listener` can reference.
+  - Kubernetes labels are maps, so a `Host` cannot carry the same `tenant` key twice. To bind
+    `baz-host` to both `Listener`s, it gets its own `tenant: baz` label, and each `Listener` uses a
+    `matchExpressions` selector to accept more than one tenant.
+  - Note also that only label selectors are supported at the moment.
+
+
+  Learn more about Listener
+ Learn more about Host +
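+
+The `hostBinding` selectors are ordinary Kubernetes label selectors, so you can preview which
+`Host`s each `Listener` will bind by handing the same selector to `kubectl`:
+
+```shell
+kubectl get hosts.getambassador.io -l 'tenant in (foo, baz)'   # what listener-8888 binds
+kubectl get hosts.getambassador.io -l 'tenant in (bar, baz)'   # what listener-9999 binds
+```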
+ +### Wildcard `Host`s and `Mapping`s + +In a `Mapping`, the `host` is now treated as a glob rather than an exact match, with the goal of vastly reducing the need for `host_regex`. (The `hostname` in a `Host` has always been treated as a glob). + +- **Note that only prefix and suffix matches are supported**, so `*.example.com` and `foo.*` are both fine, but `foo.*.com` will not work -- you'll need to use `host_regex` if you really need that. (This is an Envoy limitation.) + +In this example, we'll accept both HTTP and HTTPS, but: + +- Cleartext traffic to any host in `lowsec.example.com` will be routed. +- Cleartext traffic to any host in `normal.example.com` will be redirected. +- Any other cleartext traffic will be rejected. + +```yaml +--- +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: example-secret +data: + tls.crt: -wildcard for *.example.com here- + tls.key: -wildcard for *.example.com here- +--- +apiVersion: getambassador.io/v3alpha1 +kind: Listener +metadata: + name: listener-8443 +spec: + port: 8443 + protocol: HTTPS + securityModel: XFP + hostBinding: # This may well need editing for your case! + namespace: + from: SELF +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: lowsec-host +spec: + hostname: "*.lowsec.example.com" + tlsSecret: + name: example-secret + requestPolicy: + insecure: + action: Route +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: normal-host +spec: + hostname: "*.normal.example.com" + tlsSecret: + name: example-secret + requestPolicy: # We could leave this out since + insecure: # redirecting is the default, but + action: Redirect # it's spelled out for clarity. +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: catchall-host +spec: + hostname: "*.example.com" + tlsSecret: + name: example-secret + requestPolicy: + insecure: + action: Reject +``` + +- We'll listen for HTTP or HTTPS on port 8443. +- The three `Host`s apply different insecure routing actions depending on the hostname. +- You could also do this with `host_regex`, but using `host` with globs will give better performance. + - Being able to _not_ associate a given `Mapping` with a given `Host` when the `Mapping`'s + `host` doesn't match helps a lot when you have many `Host`s. + - Reliably determining if a regex (for the `Mapping`) matches a glob (for the `Host`) isn't really possible, so we can't prune `host_regex` `Mapping`s at all. + + + Learn more about Listener
+ Learn more about Host +
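+
+The glob behavior matters most on the `Mapping` side, since it controls which `Host`s a `Mapping`
+associates with. A minimal sketch (the `quote` service and `/backend/` prefix are illustrative):
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: lowsec-mapping
+spec:
+  hostname: "*.lowsec.example.com"  # glob: associates with lowsec-host, but not the other Hosts
+  prefix: /backend/
+  service: quote
+```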
diff --git a/docs/emissary/latest/howtos/consul.md b/docs/emissary/latest/howtos/consul.md new file mode 100644 index 000000000..d75e35ceb --- /dev/null +++ b/docs/emissary/latest/howtos/consul.md @@ -0,0 +1,564 @@ + +import Alert from '@material-ui/lab/Alert'; + +# Consul integration + +
+### Contents
+
+- [Consul integration](#consul-integration)
+  - [Architecture overview](#architecture-overview)
+  - [Installing Consul](#installing-consul)
+  - [Installing $productName$](#installing-productname)
+  - [Using Consul for service discovery](#using-consul-for-service-discovery)
+  - [Using Consul for authorization and encryption](#using-consul-for-authorization-and-encryption)
+    - [Environment variables](#environment-variables)
+  - [More information](#more-information)
+
+[Consul](https://www.consul.io) is a widely used service mesh.
+$productName$ natively supports service discovery and unauthenticated
+communication to services in Consul. Additionally, the *Ambassador
+Consul Connector* enables $productName$ to encrypt and authenticate
+its communication via mTLS with services in Consul that make use of
+[Consul's *Connect* feature](https://www.consul.io/docs/connect).
+
+## Architecture overview
+
+Using Consul with $productName$ is particularly useful when deploying
+$productName$ in hybrid cloud environments where you deploy
+applications on VMs and Kubernetes. In this environment, Consul
+enables $productName$ to securely route over TLS to any application
+regardless of where it is deployed.
+
+In this architecture, Consul serves as the source of truth for your
+entire data center, tracking available endpoints, service
+configuration, and secrets for TLS encryption. New applications and
+services automatically register themselves with Consul using the
+Consul agent or API. When you send a request through $productName$,
+$productName$ sends the request to an endpoint based on the data in
+Consul.
+
+![ambassador-consul](../images/consul-ambassador.png)
+
+This guide first instructs you on registering a service with Consul
+and using $productName$ to dynamically route requests to that service
+based on Consul's service discovery data, and subsequently instructs
+you on using the Ambassador Consul Connector to use Consul for
+authorizing and encrypting requests.
+
+## Installing Consul
+
+If you already have Consul installed in your cluster, then go ahead
+and skip to the next section.
+
+1. Before you install Consul, make sure to check the Consul
+   documentation for any setup steps specific to your platform. Below
+   you can find setup guides for some of the more popular Kubernetes
+   platforms. This step is primarily to ensure you have the proper
+   permissions to set up Consul. You can skip these guides if your
+   cluster is already configured to grant you the necessary
+   permissions.
+
+   - [Microsoft Azure Kubernetes Service (AKS)](https://learn.hashicorp.com/tutorials/consul/kubernetes-aks-azure?utm_source=consul.io&utm_medium=docs)
+   - [Amazon Elastic Kubernetes Service (EKS)](https://learn.hashicorp.com/tutorials/consul/kubernetes-eks-aws?utm_source=consul.io&utm_medium=docs)
+   - [Google Kubernetes Engine (GKE)](https://learn.hashicorp.com/tutorials/consul/kubernetes-gke-google?utm_source=consul.io&utm_medium=docs)
+
+   If you did not find your Kubernetes platform above, check the
+   [Consul documentation here](https://www.consul.io/docs/k8s) to see
+   if there are specific setup instructions for your platform.
+
+2. Add the Hashicorp repository for installing Consul with Helm. If
+   you do not have Helm installed, you can find an [installation guide
+   here](https://helm.sh/docs/intro/install/).
+
+   ```shell
+   helm repo add hashicorp https://helm.releases.hashicorp.com
+   ```
+
+3. Create a new `consul-values.yaml` YAML file for the Consul
+   installation values and copy the values below into that file.
+
+   ```yaml
+   global:
+     datacenter: dc1
+
+   ui:
+     service:
+       type: 'LoadBalancer'
+
+   syncCatalog:
+     enabled: true
+
+   server:
+     replicas: 1
+     bootstrapExpect: 1
+
+   connectInject:
+     enabled: true
+   ```
+
+   Note: you are free to change the value of the `datacenter` field in
+   the install values. This is the name of your Consul datacenter.
+
+4. Install Consul with Helm using the `consul-values.yaml` values file
+   you just created.
+
+   ```shell
+   helm install -f consul-values.yaml hashicorp hashicorp/consul
+   ```
+
+## Installing $productName$
+
+If you have not already installed $productName$ into your cluster,
+then go to the [quick start guide](../../tutorials/getting-started)
+before continuing any further in this guide.
+
+## Using Consul for service discovery
+
+This section of the guide instructs you on configuring $productName$
+to look for services registered to Consul, registering a demo
+application with Consul, and configuring $productName$ to route to
+this application using endpoint data from Consul.
+
+In this tutorial, you deploy the application in Kubernetes. However,
+this application can be deployed anywhere in your data center, such as
+on a VM.
+
+1. Configure $productName$ to look for services registered to Consul
+   by creating the `ConsulResolver`. Use `kubectl` to apply a manifest
+   along the following lines (the `address` below assumes a Helm
+   install named `hashicorp` in the `default` namespace; adjust it to
+   match your install):
+
+   ```shell
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: ConsulResolver
+   metadata:
+     name: consul-dc1
+   spec:
+     address: http://hashicorp-consul-server-0.hashicorp-consul-server.default.svc.cluster.local:8500
+     datacenter: dc1
+   EOF
+   ```
+
+   **Note:** If you changed the name of your `datacenter` in the
+   Consul install values, make sure to change it in the resolver above
+   to match the name of your datacenter.
+
+   If you changed the name of the helm install from `hashicorp` to
+   another value, make sure to update the value of the `address` field
+   in your resolver to match it.
+
+   If you are having trouble figuring out what your `address` field
+   should be, it follows this format:
+   `http://{consul_server_pod}.{consul_server_service}.{namespace}.svc.cluster.local:{consul_port}`.
+   The default Consul port should be `8500` unless you changed it.
+
+   This tells $productName$ that Consul is a service discovery endpoint.
+
+   You must set the resolver to your `ConsulResolver` on a
+   per-`Mapping` basis, otherwise for that `Mapping` $productName$
+   uses the default resolver. That is, in order for a `Mapping` to
+   make use of a service registered in Consul, you need to add
+   `resolver: consul-dc1` to that `Mapping`.
+
+   For more information about resolver configuration, see the
+   [resolver reference documentation](../../topics/running/resolvers).
+   (If you're using Consul deployed elsewhere in your data center,
+   make sure the `address` points to your Consul FQDN or IP address.)
+
+2. Deploy the Quote demo application. Use `kubectl` to apply a
+   manifest along the following lines (trimmed for clarity; the image
+   tag is illustrative, and [our Git repo for the Quote
+   service](https://github.com/datawire/quote) has the full reference
+   manifests):
+
+   ```shell
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: quote-consul
+   spec:
+     ports:
+     - name: http
+       port: 80
+       targetPort: 8080
+     selector:
+       app: quote-consul
+   ---
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     name: quote-consul
+   spec:
+     replicas: 1
+     selector:
+       matchLabels:
+         app: quote-consul
+     template:
+       metadata:
+         labels:
+           app: quote-consul
+         annotations:
+           "consul.hashicorp.com/connect-inject": "false"
+       spec:
+         containers:
+         - name: backend
+           image: docker.io/datawire/quote:$quoteVersion$
+           ports:
+           - name: http
+             containerPort: 8080
+           env:
+           - name: CONSUL_IP       # where the app sends its registration request
+             valueFrom:
+               fieldRef:
+                 fieldPath: status.hostIP
+           - name: POD_IP          # the address the app registers for itself
+             valueFrom:
+               fieldRef:
+                 fieldPath: status.podIP
+           - name: SERVICE_NAME    # the Consul service name to register under
+             value: quote-consul
+   EOF
+   ```
+
+   The `SERVICE_NAME` environment variable in the `quote-consul`
+   `Deployment` specifies the service name for Consul. The default
+   value is set to "quote-consul", so you only need to include it if
+   you want to change the service name.
+
+   The Quote application contains code to automatically
+   register itself with Consul, using the `CONSUL_IP` and `POD_IP`
+   environment variables specified within the `quote-consul` container
+   spec.
+
+   When you apply this manifest, it registers the `Pod` in the
+   `quote-consul` `Deployment` as a Consul service with the name
+   `quote-consul` and the IP address of the `Pod`.
+
+   The `"consul.hashicorp.com/connect-inject": "false"` annotation
+   tells Consul that for this `Deployment` you do not want to use the
+   sidecar proxy that is part of Consul's Connect feature. Without
+   Consul's sidecar, the service needs to include code to make a
+   request to Consul to register the service. The manifest includes
+   the environment variables `CONSUL_IP`, `POD_IP`, and `SERVICE_NAME`
+   to provide the Quote service with enough information
+   to build that request and send it to Consul.
+   To see how this code works, see [our Git repo for the Quote
+   service](https://github.com/datawire/quote).
+
+3. Verify the `quote-consul` `Deployment`'s `Pod` has been registered
+   with Consul. You can verify this by accessing the Consul UI.
+
+   First use `kubectl port-forward` to make the UI available on your
+   local workstation:
+
+   ```shell
+   kubectl port-forward service/hashicorp-consul-ui 8500:80
+   ```
+
+   Then, while the port-forward is running, go to
+   http://localhost:8500/ in a web browser. You should see a service
+   named `quote-consul`.
+
+   After you have verified that you see the `quote-consul` service in
+   your web browser, you may kill the port-forward.
+
+   Port forwarding not working for you? Make sure the service name
+   matches your Consul UI service by checking `kubectl get svc -A`.
+
+4. Configure $productName$ to make use of this `quote-consul` service.
+   Use `kubectl` to apply a `Mapping` along the following lines (the
+   `prefix` is just an example; note the `resolver` pointing at the
+   `ConsulResolver` created earlier):
+
+   ```shell
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: consul-quote-mapping
+   spec:
+     hostname: "*"
+     prefix: /quote-consul/
+     service: quote-consul
+     resolver: consul-dc1
+     load_balancer:
+       policy: round_robin
+   EOF
+   ```
+
+**Congratulations!** You're successfully routing traffic to the Quote
+application, the location of which is registered in
+Consul.
+
+## Using Consul for authorization and encryption
+
+In this part of the guide, you'll install a different version of the
+demo application that now uses Consul's *Connect* feature to authorize
+its incoming connections using mTLS, and install the *Ambassador Consul
+Connector* to enable $productName$ to authenticate to such services.
+
+The following steps assume you've already set up Consul for service
+discovery, as detailed above.
+
+1. The Ambassador Consul Connector retrieves the TLS certificate
+   issued by the Consul CA and stores it in a Kubernetes `Secret` for
+   $productName$ to use. Deploy the Ambassador Consul Connector with
+   `kubectl`:
+
+   ```shell
+   kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$ossVersion$/consul/ambassador-consul-connector.yaml
+   ```
+
+   This installs into your cluster:
+
+   - RBAC resources.
+   - The Ambassador Consul Connector service.
+   - A `TLSContext` named `ambassador-consul` to load the
+     `ambassador-consul-connect` `Secret` into $productName$.
+
+2. Deploy a new version of the demo application, and configure it to
+   inject the Consul Connect sidecar by setting
+   `"consul.hashicorp.com/connect-inject"` to `true`. Note that in
+   this version of the configuration, you do not have to configure
+   environment variables for the location of the Consul server. Use
+   `kubectl` to apply a manifest along the following lines (again, the
+   image tag is illustrative):
+
+   ```shell
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: quote-connect
+   spec:
+     ports:
+     - name: http
+       port: 80
+       targetPort: 8080
+     selector:
+       app: quote-connect
+   ---
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     name: quote-connect
+   spec:
+     replicas: 1
+     selector:
+       matchLabels:
+         app: quote-connect
+     template:
+       metadata:
+         labels:
+           app: quote-connect
+         annotations:
+           "consul.hashicorp.com/connect-inject": "true"
+       spec:
+         containers:
+         - name: backend
+           image: docker.io/datawire/quote:$quoteVersion$
+           ports:
+           - name: http
+             containerPort: 8080
+   EOF
+   ```
+
+   Note: Annotations are used to attach metadata to Kubernetes
+   objects. You can use annotations to link external information to
+   objects, working in a similar, yet different, fashion to labels.
+   For more information on annotations, refer to the [Annotating
+   Kubernetes Services for
+   Humans](https://kubernetes.io/blog/2021/04/20/annotating-k8s-for-humans/)
+   article, or get started with annotations in your own cluster with
+   the [Ambassador Cloud Quick start
+   guide](https://www.getambassador.io/docs/cloud/latest/service-catalog/quick-start/).
+
+   This deploys a demo application `Deployment` called `quote-connect`
+   (different than the `quote-consul` `Deployment` in the previous
+   section) with the Consul Connect sidecar proxy. The Connect
+   sidecar registers the application with Consul, requires TLS to
+   access the application, and exposes other [Consul Service
+   Segmentation](https://www.consul.io/docs/connect) features.
+
+   The annotation `consul.hashicorp.com/connect-inject` being set to
+   `true` in this `Deployment` tells Consul that you want this
+   `Deployment` to use the Consul Connect sidecar. The sidecar
+   proxies requests to the service that it is attached to. Keep this
+   in mind when debugging requests to the service.
+
+3. Verify the `quote-connect` `Deployment`'s `Pod` has been registered
+   with Consul. You can verify this by accessing the Consul UI.
+
+   First use `kubectl port-forward` to make the UI available on your
+   local workstation:
+
+   ```shell
+   kubectl port-forward service/hashicorp-consul-ui 8500:80
+   ```
+
+   Then, while the port-forward is running, open
+   http://localhost:8500/ in a web browser. You should see a service
+   named `quote-connect`.
+
+   After you have verified that you see the `quote-connect` service in
+   your web browser, you may kill the port-forward.
+
+4. Create a `Mapping` to configure $productName$ to route to the
+   `quote-connect` service in Consul. Since the Connect sidecar
+   terminates TLS, the `Mapping` needs both the `ConsulResolver` and
+   the `ambassador-consul` `TLSContext`. A sketch (the `prefix` is an
+   example; the `service` name should match what the sidecar
+   registered in Consul, which you can check in the Consul UI):
+
+   ```shell
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Mapping
+   metadata:
+     name: quote-connect-mapping
+   spec:
+     hostname: "*"
+     prefix: /quote-connect/
+     service: quote-connect-sidecar
+     resolver: consul-dc1
+     tls: ambassador-consul
+     load_balancer:
+       policy: round_robin
+   EOF
+   ```
+
+**Congratulations!** You successfully configured the service to work
+with the Consul Connect sidecar.
+
+### Environment variables
+
+The Ambassador Consul Connector can be configured with the following
+environment variables. The defaults are best for most use cases.
+
+| Environment Variable | Description | Default |
+|----------------------|-------------|---------|
+| `_AMBASSADOR_ID` | Set the Ambassador ID so multiple instances of this integration can run per cluster when there are multiple $productNamePlural$ (required if `AMBASSADOR_ID` is set in your $productName$ `Deployment`) | `""` |
+| `_CONSUL_HOST` | Set the IP or DNS name of the target Consul HTTP API server | `127.0.0.1` |
+| `_CONSUL_PORT` | Set the port number of the target Consul HTTP API server | `8500` |
+| `_AMBASSADOR_TLS_SECRET_NAME` | Set the name of the Kubernetes `v1.Secret` created by this program that contains the Consul-generated TLS certificate. | `$AMBASSADOR_ID-consul-connect` |
+| `_AMBASSADOR_TLS_SECRET_NAMESPACE` | Set the namespace of the Kubernetes `v1.Secret` created by this program. | (same namespace as the `Pod` running this integration) |
+
+## More information
+
+For more about $productName$'s integration with Consul, read the
+[service discovery configuration](../../topics/running/resolvers)
+documentation.
diff --git a/docs/emissary/latest/howtos/dist-tracing.md b/docs/emissary/latest/howtos/dist-tracing.md
new file mode 100644
index 000000000..bc40df8fd
--- /dev/null
+++ b/docs/emissary/latest/howtos/dist-tracing.md
@@ -0,0 +1,49 @@
+# Explore distributed tracing and Kubernetes monitoring
+
+The Kubernetes monitoring and distributed tracing landscape is hard to grasp. Although this conceptual approach to [observability is nothing new](https://blog.getambassador.io/distributed-tracing-with-java-microdonuts-kubernetes-and-the-ambassador-api-gateway-ace15b62a89e) — companies like New Relic revolutionized single-application performance monitoring (APM) over a decade ago — it took a few years and the [Dapper publication](https://research.google/pubs/pub36356/) for this idea to migrate to distributed applications.
+The importance of this technology is only increasing, as more and more of us are building [“deep systems”](https://lightstep.com/deep-systems/).
+
+As the industry is slowly but surely maturing, standardization is underway. Standardization means proprietary vendor solutions and open source projects are able to collaborate, making our lives easier. For newcomers, understanding the plethora of options, concepts, specifications, and implementations available is still a daunting task:
+
+* How are Zipkin and Jaeger related?
+* What is header propagation?
+* Which header format should I use?
+* Who owns the initialization of a trace context?
+* How are the data points collected?
+
+The [K8s Initializer](https://app.getambassador.io/initializer/) can make it easy to enable distributed tracing in any microservices-based system by offering an opinionated, preconfigured application stack that will get you up and running in no time. This way, you can focus on understanding your service topology and interactions rather than waste days attempting to understand competing standard integrations and tuning configuration switches.
+
+## One-Click Tracing Configuration with the K8s Initializer
+
+The K8s Initializer is a tool we built for you to quickly bootstrap any Kubernetes cluster with your own application-ready playground. It will generate YAML manifests for ingress, observability, and more in just a few clicks. Once installed on a local or remote Kubernetes cluster, the generated Kubernetes manifest resources will give you a perfect sandbox environment to deploy your own applications and explore standard integrations.
+
+Specifically for observability and distributed tracing, the K8s Initializer bundles a Jaeger installation to collect and visualize traces, along with a pre-configured Ambassador Edge Stack acting as the ingress controller that will create a trace context on every request. Only a single selection is required.
+
+As per the option we selected, we’ll be generating Zipkin-format traces and using B3 headers for propagation between our services. There you have it! Instrument your Java, Python, Golang or Node.js applications with Zipkin and B3 header propagation libraries, then configure your code to send the trace data to the `jaeger-collector.monitoring:9411` service endpoint.
+
+All that is left to do is make a few requests and visualize the trace data in the Jaeger UI.
+
+## Visualizing Trace Data
+
+As we installed the Ambassador Edge Stack as our ingress controller for Kubernetes via the K8s Initializer, it will instrument parent trace spans for each request coming into our Kubernetes cluster from the internet. The K8s Initializer also pre-configured Ambassador to expose the Jaeger UI on a subpath: `https://$AMBASSADOR_IP/jaeger/`
+
+Simply by visiting this URL on our installation, we are able to visualize the generated and collected trace information emitted by our Ambassador installation:
+
+![Jaeger screenshot](../images/jaeger.png)
+
+## Tracing the Future: OpenTelemetry
+
+The [OpenTelemetry project](https://opentelemetry.io/) was created with the intent of stopping the proliferation of API standards and libraries one might need to support for all their observability needs, effectively replacing the Zipkin API, OpenCensus, OpenTracing, and more competing implementations.
+
+> OpenTelemetry provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application.
+> You can analyze them using Prometheus, Jaeger, and other observability tools.
+-[https://opentelemetry.io/](https://opentelemetry.io/)
+
+It’s at this point in the conversation that someone inevitably mentions that XKCD about competing standards...
+
+![XKCD #927](../images/xkcd.png)
+
+OpenTelemetry ultimately supports multiple formats in its [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector), easing the transition from one technology to another when installed as a middleware and translator to relay trace data to other collectors. Along with many of its long-awaited features, it supports multiple trace exporters for Jaeger, Zipkin, and proprietary APIs.
+
+## Learn More
+
+In this tutorial, we’ve shown you how to monitor your Kubernetes application with distributed tracing in just a few clicks with the K8s Initializer. To learn more about these tools and distributed tracing, we also recommend [A Complete Guide to Distributed Tracing by the Lightstep Team](https://lightstep.com/distributed-tracing/).
+
+We also have guides to set up Edge Stack with [Datadog](../tracing-datadog/), [Zipkin](../tracing-zipkin/), and [Prometheus and Grafana](../prometheus).
diff --git a/docs/emissary/latest/howtos/external-dns.md b/docs/emissary/latest/howtos/external-dns.md
new file mode 100644
index 000000000..f0f51dbb2
--- /dev/null
+++ b/docs/emissary/latest/howtos/external-dns.md
@@ -0,0 +1,130 @@
+import Alert from '@material-ui/lab/Alert';
+
+# ExternalDNS with $productName$
+
+[ExternalDNS](https://github.com/kubernetes-sigs/external-dns) makes Kubernetes resources discoverable via public DNS servers: it reads resources from the Kubernetes API and uses them to configure DNS records in your existing DNS provider.
+
+## Getting started
+
+### Prerequisites
+
+Start by checking the [ExternalDNS repo's deployment instructions](https://github.com/kubernetes-sigs/external-dns#deploying-to-a-cluster) to get information about the supported DNS providers and steps to set up ExternalDNS for your provider. Each DNS provider has its own required steps, as well as annotations, arguments, and permissions needed for the following configuration.
+
+### Installation
+
+Configuration for a `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` is necessary for the ExternalDNS deployment to support compatibility with $productName$ and allow ExternalDNS to get hostnames from $productName$'s `Host`s.
+
+The following configuration is an example configuring $productName$ - ExternalDNS integration with [AWS Route53](https://aws.amazon.com/route53/) as the DNS provider. Refer to the ExternalDNS documentation above for annotations and arguments for your DNS provider.
+
+1. Create a YAML file named `externaldns-config.yaml`, and copy the following configuration into it.
+
+   Ensure that the `apiGroups` include "getambassador.io" following "networking.k8s.io", and that the `resources` include "hosts" after "ingresses".
+
+   ```yaml
+   ---
+   apiVersion: v1
+   kind: ServiceAccount
+   metadata:
+     name: external-dns
+     annotations:
+       eks.amazonaws.com/role-arn: {ARN} # AWS ARN role
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRole
+   metadata:
+     name: external-dns
+   rules:
+   - apiGroups: [""]
+     resources: ["services","endpoints","pods"]
+     verbs: ["get","watch","list"]
+   - apiGroups: ["extensions","networking.k8s.io", "getambassador.io"]
+     resources: ["ingresses", "hosts"]
+     verbs: ["get","watch","list"]
+   - apiGroups: [""]
+     resources: ["nodes"]
+     verbs: ["list","watch"]
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRoleBinding
+   metadata:
+     name: external-dns-viewer
+   roleRef:
+     apiGroup: rbac.authorization.k8s.io
+     kind: ClusterRole
+     name: external-dns
+   subjects:
+   - kind: ServiceAccount
+     name: external-dns
+     namespace: default
+   ---
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     name: external-dns
+   spec:
+     strategy:
+       type: Recreate
+     selector:
+       matchLabels:
+         app: external-dns
+     template:
+       metadata:
+         labels:
+           app: external-dns
+         annotations:
+           iam.amazonaws.com/role: {ARN} # AWS ARN role
+       spec:
+         serviceAccountName: external-dns
+         containers:
+         - name: external-dns
+           image: registry.opensource.zalan.do/teapot/external-dns:latest
+           args:
+           - --source=ambassador-host
+           - --domain-filter=example.net # makes ExternalDNS see only the hosted zones matching the provided domain; omit to process all available hosted zones
+           - --provider=aws
+           - --policy=upsert-only # prevents ExternalDNS from deleting any records; omit to enable full synchronization
+           - --aws-zone-type=public # only look at public hosted zones (valid values are public, private, or no value for both)
+           - --registry=txt
+           - --txt-owner-id={Hosted Zone ID} # insert your Route53 Hosted Zone ID here
+   ```
+
+2. Review the arguments section from the ExternalDNS deployment.
+
+   Configure or remove arguments to fit your needs. Additional arguments required for your DNS provider can be found by checking the [ExternalDNS repo's deployment instructions](https://github.com/kubernetes-sigs/external-dns#deploying-to-a-cluster).
+
+   * `--source=ambassador-host` - required across all DNS providers to tell ExternalDNS to look for hostnames in the $productName$ `Host` configurations.
+
+3. Apply the above config with the following command to deploy ExternalDNS to your cluster and configure support for $productName$.
+
+   ```shell
+   kubectl apply -f externaldns-config.yaml
+   ```
+
+   For the above example, ensure that you are using an EKS cluster, or register your cluster with AWS so that ExternalDNS can view and edit your AWS Hosted Zones. If you are using a cluster outside EKS and not registered with AWS, you will see permissions errors from the ExternalDNS pod when attempting to list the Hosted Zones.
+
+## Usage
+
+After applying the above configuration, ExternalDNS is ready to use. Configure a `Host` with the following annotation to allow ExternalDNS to get the IP address of your $productName$'s LoadBalancer and register it with your DNS provider.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: your-hostname
+  annotations:
+    external-dns.ambassador-service: $productDeploymentName$.$productNamespace$
+spec:
+  acmeProvider:
+    authority: none
+  hostname: your-hostname.example.com
+```
+
+Victory! ExternalDNS is now running and configured to report $productName$'s IP and hostname with your DNS provider.
diff --git a/docs/emissary/latest/howtos/filter-dev-guide.md b/docs/emissary/latest/howtos/filter-dev-guide.md new file mode 100644 index 000000000..eefe8b6bd --- /dev/null +++ b/docs/emissary/latest/howtos/filter-dev-guide.md @@ -0,0 +1,97 @@ +# Developing custom filters for routing + +Sometimes you may want $AESproductName$ to manipulate an incoming request. Some example use cases: + +* Inspect an incoming request, and add a custom header that can then be used for routing +* Add custom Authorization headers +* Validate an incoming request fits an OpenAPI specification before passing the request to a target service + +$AESproductName$ supports these use cases by allowing you to execute custom logic in `Filters`. Filters are written in Golang, and managed by $AESproductName$. If you want to write a filter in a language other than Golang, $AESproductName$ also has an HTTP/gRPC interface for Filters called `External`. + +## Prerequisites + +`Plugin` `Filter`s are built as [Go plugins](https://golang.org/pkg/plugin/) and loaded directly into the $AESproductName$ container so they can run in-process with the rest of $AESproductName$. + +To build a `Plugin` `Filter` into the $AESproductName$ container you will need +- Linux or MacOS host (Windows Subsystem for Linux is ok) +- [Docker](https://docs.docker.com/install/) +- [rsync](https://rsync.samba.org/) + +The `Plugin` `Filter` is built by `make` which uses Docker to create a stable build environment in a container and `rsync` to copy files between the container and your host machine. + +See the [README](https://github.com/datawire/apro-example-plugin) for more information on how the `Plugin` works. + +## Creating and deploying filters + +We've created an example filter that you can customize for your particular use case. + +1. Start with the example filter: `git clone + https://github.com/datawire/apro-example-plugin/`. + +2. Make code changes to `param-plugin.go`. Note: If you're developing a non-trivial filter, see the rapid development section below for a faster way to develop and test your filter. + +3. Run `make DOCKER_REGISTRY=...`, setting `DOCKER_REGISTRY` to point + to a registry you have access to. This will generate a Docker image + named `$DOCKER_REGISTRY/amb-sidecar-plugin:VERSION`. + +4. Push the image to your Docker registry: `docker push $DOCKER_REGISTRY/amb-sidecar-plugin:VERSION`. + +5. Configure $AESproductName$ to use the plugin by creating a `Filter` + and `FilterPolicy` CRD, as per the [filter reference](/docs/edge-stack/latest/topics/using/filters/). + +6. Update the standard $AESproductName$ manifest to use your Docker + image instead of the standard sidecar. + + ```patch + value: '60' + - name: AMBASSADOR_INTERNAL_URL + value: https://127.0.0.1:8443 + - image: docker.io/datawire/aes:$version$ + + image: DOCKER_REGISTRY/aes-plugin:VERSION + imagePullPolicy: Always + livenessProbe: + httpGet: + ``` + +## Rapid development of a custom filter + +During development, you may want to sidestep the deployment process for a faster development loop. The `aes-plugin-runner` helps you rapidly develop $AESproductName$ filters locally. + +To install the runner, download the latest version: + +Mac 64-bit | +Linux 64-bit + +Note that the plugin runner must match the version of $AESproductName$ that you are running. Place the binary somewhere in your `$PATH`. + +Information about open-source code used in `aes-plugin-runner` can be found by running `aes-plugin-runner --version`. + +Now, you can quickly test and develop your filter. 
+ +1. In your filter directory, type: `aes-plugin-runner :8080 ./param-plugin.so`. +2. Test the filter by running `curl`: + + ``` + $ curl -Lv localhost:8080?db=2 + * Rebuilt URL to: localhost:8080/?db=2 + * Trying ::1... + * TCP_NODELAY set + * Connected to localhost (::1) port 8080 (#0) + > GET /?db=2 HTTP/1.1 + > Host: localhost:8080 + > User-Agent: curl/7.54.0 + > Accept: */* + > + < HTTP/1.1 200 OK + < X-Dc: Even + < Date: Mon, 25 Feb 2019 19:58:38 GMT + < Content-Length: 0 + < + * Connection #0 to host localhost left intact + ``` + +Note in the example above the `X-Dc` header is added. This lets you inspect the changes the filter is making to your HTTP header. + +## Further reading + +For more details about configuring filters and the `plugin` interface, see the [filter reference](/docs/edge-stack/latest/topics/using/filters/). diff --git a/docs/emissary/latest/howtos/grpc.md b/docs/emissary/latest/howtos/grpc.md new file mode 100644 index 000000000..3967ddf7d --- /dev/null +++ b/docs/emissary/latest/howtos/grpc.md @@ -0,0 +1,403 @@ +# gRPC Connections + +$productName$ makes it easy to access your services from outside your application. This includes gRPC services, although a little bit of additional configuration is required: by default, Envoy connects to upstream services using HTTP/1.x and then upgrades to HTTP/2 whenever possible. However, gRPC is built on HTTP/2 and most gRPC servers do not speak HTTP/1.x at all. $productName$ must tell its underlying Envoy that your gRPC service only wants to speak to that HTTP/2, using the `grpc` attribute of a `Mapping`. + +## Writing a gRPC service for $productName$ + +There are many examples and walkthroughs on how to write gRPC applications so that is not what this article will aim to accomplish. If you do not yet have a service written you can find examples of gRPC services in all supported languages here: [gRPC Quickstart](https://grpc.io/docs/quickstart/) + +This document will use the [gRPC python helloworld example](https://github.com/grpc/grpc/tree/master/examples/python/helloworld) to demonstrate how to configure a gRPC service with $productName$. + +Follow the example up through [Run a gRPC application](https://grpc.io/docs/languages/python/quickstart/#run-a-grpc-application) to get started. + +### Dockerize + +After building our gRPC application and testing it locally, we need to package it as a Docker container and deploy it to Kubernetes. + +To run a gRPC application, we need to include the client/server and the protocol buffer definitions. + +For gRPC with python, we need to install `grpcio` and the common protos. + +```Dockerfile +FROM python:2.7 + +WORKDIR /grpc + +ENV PATH "$PATH:/grpc" + +COPY greeter_server.py /grpc +COPY helloworld_pb2.py /grpc +COPY helloworld_pb2_grpc.py /grpc + +RUN python -m pip install grpcio +RUN python -m pip install grpcio-tools googleapis-common-protos + +CMD ["python", "./greeter_server.py"] + +EXPOSE 50051 +``` + +Create the container and test it: + +``` +$ docker build -t /grpc_example +$ docker run -p 50051:50051 /grpc_example +``` + +Where `` is your Docker user or registry. + +Switch to another terminal and from the same directory, run the `greeter_client`. The output should be the same as running it outside of the container. + +``` +$ docker run -p 50051:50051 /grpc_example +Greeter client received: Hello, you! 
+``` + +Once you verify the container works, push it to your Docker registry: + +``` +$ docker push /grpc_example +``` + +### Mapping gRPC services + +$productName$ `Mapping`s are based on URL prefixes; for gRPC, the URL prefix is the full-service name, including the package path (`package.service`). These are defined in the `.proto` definition file. In the example [proto definition file](https://github.com/grpc/grpc/blob/master/examples/protos/helloworld.proto) we see: + +``` +package helloworld; + +// The greeting service definition. +service Greeter { ... } +``` + +so the URL `prefix` is `helloworld.Greeter` and the mapping would be: + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: grpc-py +spec: + hostname: "*" + grpc: True + prefix: /helloworld.Greeter/ + rewrite: /helloworld.Greeter/ + service: grpc-example +``` + +Note the `grpc: true` line - this is what tells Envoy to use HTTP/2 so the request can communicate with your backend service. Also note that you'll need `prefix` and `rewrite` the same here, since the gRPC service needs the package and service to be in the request to do the right thing. + +### Deploying to Kubernetes + +`grpc_example.yaml` + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Host +metadata: + name: example-host +spec: + hostname: host.example.com + acmeProvider: + authority: none + requestPolicy: + insecure: + action: Route +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: grpc-py +spec: + hostname: "*" + grpc: True + prefix: /helloworld.Greeter/ + rewrite: /helloworld.Greeter/ + service: grpc-example + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + service: grpc-example + name: grpc-example +spec: + type: ClusterIP + ports: + - name: grpc-greet + port: 80 + targetPort: grpc-api + selector: + service: grpc-example +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: grpc-example +spec: + replicas: 1 + selector: + matchLabels: + service: grpc-example + template: + metadata: + labels: + service: grpc-example + spec: + containers: + - name: grpc-example + image: /grpc_example + ports: + - name: grpc-api + containerPort: 50051 + restartPolicy: Always +``` + +The Host is declared here because we are using gRPC without TLS. Since $productName$ terminates TLS by default, in the Host we add a `requestPolicy` which allows insecure connections. After adding the $productName$ mapping to the service, the rest of the Kubernetes deployment YAML file is pretty straightforward. We need to identify the container image to use, expose the `containerPort` to listen on the same port the Docker container is listening on, and map the service port (80) to the container port (50051). + +> For more information on insecure routing, please refer to the [`Host` documentation](../../topics/running/host-crd#secure-and-insecure-requests). + + +Once you have the YAML file and the correct Docker registry, deploy it to your cluster with `kubectl`. + +``` +$ kubectl apply -f grpc_example.yaml +``` + +### Testing the Deployment + +Make sure to test your Kubernetes deployment before making more advanced changes (like adding TLS). 
To test any service with $productName$, we will need the hostname of the running $productName$ service, which you can get with:
+
+```
+$ kubectl get service ambassador -o wide
+```
+
+Which should return something similar to:
+
+```
+NAME         CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
+ambassador   10.11.12.13     35.36.37.38     80:31656/TCP   1m
+```
+
+where `EXTERNAL-IP` is the `$AMBASSADORHOST` and 80 is the `$PORT`.
+
+You will need to open `greeter_client.py` and change `localhost:50051` to `$AMBASSADORHOST:$PORT`:
+
+```diff
+- with grpc.insecure_channel('localhost:50051') as channel:
++ with grpc.insecure_channel('$AMBASSADORHOST:$PORT') as channel:
+     stub = helloworld_pb2_grpc.GreeterStub(channel)
+     response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
+     print("Greeter client received: " + response.message)
+```
+
+After making that change, simply run the client again and you will see the gRPC service in your cluster respond:
+
+```
+$ python greeter_client.py
+Greeter client received: Hello, you!
+```
+
+### gRPC and TLS
+
+There is some extra configuration required to connect to a gRPC service through $productName$ over an encrypted channel. Currently, the gRPC call is being sent over cleartext to $productName$, which proxies it to the gRPC application.
+
+![](../images/grpc-tls.png)
+
+If you want to add TLS encryption to your gRPC calls, first you need to tell $productName$ to add [ALPN protocols](../../topics/running/tls) which are required by HTTP/2 to do TLS.
+
+For example:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: tls
+spec:
+  hosts:
+  - "*"
+  secret: ambassador-cert
+  alpn_protocols: h2
+```
+
+Next, you need to change the client code slightly and tell it to open a secure RPC channel with $productName$:
+
+```diff
+- with grpc.insecure_channel('$AMBASSADORHOST:$PORT') as channel:
++ with grpc.secure_channel('$AMBASSADORHOST:$PORT', grpc.ssl_channel_credentials()) as channel:
+     stub = helloworld_pb2_grpc.GreeterStub(channel)
+     response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
+     print("Greeter client received: " + response.message)
+```
+
+`grpc.ssl_channel_credentials(root_certificates=None, private_key=None, certificate_chain=None)` returns the root certificate that will be used to validate the certificate and public key sent by $productName$. The default values of `None` tell the gRPC runtime to grab the root certificate from the default location packaged with gRPC and ignore the private key and certificate chain fields. Generally, passing no arguments to the method that requests credentials gives the same behavior. Refer to the language's [API Reference](https://grpc.io/docs/) if this is not the case.
+
+$productName$ is now terminating TLS from the gRPC client and proxying the call to the application over cleartext.
+
+![](../images/gRPC-TLS-Ambassador.png)
+
+If you want to configure authentication in another language, [gRPC provides examples](https://grpc.io/docs/guides/auth.html) with proper syntax for other languages.
+
+#### Working with Host headers that include the port
+
+Some gRPC clients automatically include the port in the Host header. This is a problem when using TLS because the certificate will match `myurl.com` but the Host header will be `myurl.com:443`, resulting in the error `rpc error: code = Unimplemented desc =`. If you run into this issue, there are two ways to solve it depending on your use case, both using the `Module` resource.
+
+The first is to set the `strip_matching_host_port` [property](../../topics/running/ambassador#strip-matching-host-port) to `true`. However, this only works if the port in the header matches the port that Envoy listens on (8443 by default). In the default installation of $productName$, the public port is 443, which then maps internally to 8443, so this only works for custom installations where the public service port matches the port in the `Listener` resource.
+
+The second solution is to use the following [Lua script](../../topics/running/ambassador#lua-scripts), which always strips the port:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+  namespace: ambassador
+spec:
+  config:
+    lua_scripts: |
+      function envoy_on_request(request_handle)
+        local authority = request_handle:headers():get(":authority")
+        if(string.find(authority, ":") ~= nil)
+        then
+          local authority_index = string.find(authority, ":")
+          local stripped_authority = string.sub(authority, 1, authority_index - 1)
+          request_handle:headers():replace(":authority", stripped_authority)
+        end
+      end
+```
+
+#### Originating TLS with gRPC service
+
+![](../images/gRPC-TLS-Originate.png)
+
+$productName$ can originate TLS with your gRPC service so the entire RPC channel is encrypted. To configure this, first get some TLS certificates and configure the server to open a secure channel with them. Using self-signed certs, this can be done with OpenSSL and adding a couple of lines to the server code:
+
+```diff
+ def serve():
+     server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
++    with open('certs/server.key', 'rb') as f:
++        private_key = f.read()
++    with open('certs/server.crt', 'rb') as f:
++        cert_chain = f.read()
++    server_creds = grpc.ssl_server_credentials( ( (private_key, cert_chain), ) )
+     helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
+-    server.add_insecure_port('[::]:50052')
++    server.add_secure_port('[::]:50052', server_creds)
+     server.start()
+```
+
+Rebuild your Docker container **making sure the certificates are included** and follow the same steps of testing and deploying to Kubernetes. You will need to make a small change to the client code to test locally:
+
+```diff
+- with grpc.insecure_channel('localhost:$PORT') as channel:
++ with grpc.secure_channel('localhost:$PORT', grpc.ssl_channel_credentials(open('certs/server.crt', 'rb').read())) as channel:
+     stub = helloworld_pb2_grpc.GreeterStub(channel)
+     response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
+     print("Greeter client received: " + response.message)
+```
+
+Once deployed, we will need to tell $productName$ to originate TLS to the application:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: grpc-py-tls
+spec:
+  hostname: "*"
+  grpc: True
+  tls: upstream
+  prefix: /hello.Greeter/
+  rewrite: /hello.Greeter/
+  service: https://grpc-py
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  labels:
+    service: grpc-py
+  name: grpc-py
+spec:
+  type: ClusterIP
+  ports:
+  - name: grpc-greet
+    port: 443
+    targetPort: grpc-api
+  selector:
+    service: grpc-py
+```
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: upstream
+spec:
+  alpn_protocols: h2
+  secret: ambassador-cert
+```
+
+We need to tell $productName$ to route to the `service:` over HTTPS and have the service listen on `443`.
We also need to tell $productName$ to use ALPN protocols when originating TLS with the application, the same way we did with TLS termination. This is done by setting `alpn_protocols: ["h2"]` in a `TLSContext` telling the service to use that tls-context in the mapping by setting `tls: upstream`. + +Refer to the [TLS document](../../topics/running/tls/origination#advanced-configuration-using-a-tlscontext) for more information on TLS origination. + +### gRPC headers + +gRPC services use [HTTP/2 headers](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md). This means that some header-based routing rules will need to be rewritten to support HTTP/2 headers. For example, `host: subdomain.host.com` needs to be rewritten using the `headers: ` attribute with the `:authority` header: + +``` +headers: + :authority: subdomain.host.com +``` + +## Note + +### Ingress controllers + +Some [Kubernetes ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress/) do not support HTTP/2 fully. As a result, if you are running $productName$ with an ingress controller in front, you may find that gRPC requests fail even with correct $productName$ configuration. + +A simple way around this is to use $productName$ with a `LoadBalancer` service, rather than an Ingress controller. You can also consider using [$productName$ as your Ingress Controller](../../topics/running/ingress-controller). + +### Mappings with hosts + +As with any `Mapping`, your gRPC service's `Mapping` may include a `host`: + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: grpc-py +spec: + hostname: "*" + grpc: true + prefix: /helloworld.Greeter/ + rewrite: /helloworld.Greeter/ + service: grpc-example + host: api.example.com +``` + +Some gRPC client libraries produce requests where the `host` or `:authority` header includes the port number. For example, a request to the above service might include `host: api.example.com:443` instead of just `host: api.example.com`. To avoid having $productName$ return a 404 (not found) response to these requests due to the mismatched host, you may want to set `strip_matching_host_port` in the [Ambassador module](../../topics/running/ambassador#strip-matching-host-port). + +Alternately, you may find it cleaner to make sure your gRPC client does not include the port in the `host` header. Here is an example using gRPC/Go. + +```go +hostname := "api.example.com" +port := "443" +config := &tls.Config{ServerName: hostname} +creds := credentials.NewTLS(config) +opts := []grpc.DialOption{ + grpc.WithTransportCredentials(creds), +// ... +} +conn, err := grpc.Dial(hostname+":"+port, opts...) +// ... +``` + +## gRPC-Web + +$productName$ also supports the [gRPC-Web](../../topics/running/ambassador#grpc) protocol for browser-based gRPC applications. diff --git a/docs/emissary/latest/howtos/http3-aks.md b/docs/emissary/latest/howtos/http3-aks.md new file mode 100644 index 000000000..2f9be012f --- /dev/null +++ b/docs/emissary/latest/howtos/http3-aks.md @@ -0,0 +1,60 @@ +--- +title: "$productName$ - HTTP/3 support for Azure Kubernetes Service (AKS)" +description: "How to configure HTTP/3 support for Azure Kubernetes Service (AKS). This guide shows how to setup the LoadBalancer service for AKS to support both TCP and UDP communications." +--- + +# Azure Kubernetes Service Engine HTTP/3 configuration + +This guide shows how to setup HTTP/3 support for Azure Kubernetes Service (AKS). 
The instructions provided in this page are a continuation of the [HTTP/3 in $productName$](../../topics/running/http3) documentation. + +## Configuring an external load balancer for AKS + +To configure an external load balancer for AKS, you need to: + +1. Reserve a public static IP address. +2. Create two `LoadBalancer` services, one for TCP and one for UDP. +3. Assign the public static IP address to the `loadBalancerIP` field. + +An example of the two load balancer services described above looks as follows: + +```yaml +# selectors and labels removed for clarity +apiVersion: v1 +kind: Service +metadata: + name: $productDeploymentName$ + namespace: $productNamespace$ +spec: + type: LoadBalancer + loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here. + ports: + - name: http + port: 80 + targetPort: 8080 + protocol: TCP + - name: https + port: 443 + targetPort: 8443 + protocol: TCP + --- + apiVersion: v1 +kind: Service +metadata: + name: $productDeploymentName$-udp + namespace: $productNamespace$ +spec: + type: LoadBalancer + loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here. + ports: + - name: http3 + port: 443 # Default support for HTTP/3 requires you to use 443 for the external client-facing port. + targetPort: 8443 + protocol: UDP + +``` + +In the above example, AKS generates two `LoadBalancer` services, one for UDP and the other for TCP. + + +You should verify that the Managed Identity or Serivce Principal has permissions to assign the IP address to the newly created LoadBalancer services. Refer to the Azure Docs - Managed Identity for more information. + diff --git a/docs/emissary/latest/howtos/http3-eks.md b/docs/emissary/latest/howtos/http3-eks.md new file mode 100644 index 000000000..d09a1af5a --- /dev/null +++ b/docs/emissary/latest/howtos/http3-eks.md @@ -0,0 +1,252 @@ +--- +title: "HTTP/3 with Amazon Elastic Kubernetes Service (EKS) | $productName$" +description: "How to configure HTTP/3 support for Amazon Elastic Kubernetes Service (EKS). This guide shows how to setup the LoadBalancer service for EKS to support both TCP and UDP communications." +--- + +# Amazon Elastic Kubernetes Service HTTP/3 configuration + +This guide shows how to setup HTTP/3 support for Amazon Elastic Kubernetes Service (EKS) The instructions provided in this page are a continuation of the [HTTP/3 in $productName$](../../topics/running/http3) documentation. + +## Create a network load balancer (NLB) + + The virtual private cloud (VPC) for your load balancer needs one public subnet in each availability zone where you have targets. + + ```shell + SUBNET_IDS=( ) + + aws elbv2 create-load-balancer \ + --name ${CLUSTER_NAME}-nlb \ + --type network \ + --subnets ${SUBNET_IDS} + ``` + +## Create a NodePort service + +Now create a `NodePort` service for $productName$ installation with two entries. Use `port: 443` to include support for both TCP and UDP traffic. + ```yaml + # Selectors and labels removed for clarity. 
apiVersion: v1
kind: Service
metadata:
  name: $productDeploymentName$-http3
  namespace: $productNamespace$
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
    nodePort: 30443
  - name: http3
    port: 443
    targetPort: 8443
    protocol: UDP
    nodePort: 30443
```

## Create target groups

Run the following command with the variables for your VPC ID and cluster name:

```shell
VPC_ID=
CLUSTER_NAME=

aws elbv2 create-target-group --name ${CLUSTER_NAME}-tcp-tg \
  --protocol TCP --port 30080 --vpc-id ${VPC_ID} \
  --health-check-protocol TCP \
  --health-check-port 30080 \
  --target-type instance

aws elbv2 create-target-group --name ${CLUSTER_NAME}-tcp-udp-tg \
  --protocol TCP_UDP --port 30443 --vpc-id ${VPC_ID} \
  --health-check-protocol TCP \
  --health-check-port 30443 \
  --target-type instance
```

## Register your instances

Next, register your cluster's instances with the target groups, using the instance IDs and Amazon Resource Names (ARNs).

To get your cluster's instance IDs, enter the following command:

```shell
aws ec2 describe-instances \
  --filters Name=tag:eks:cluster-name,Values=${CLUSTER_NAME} \
  --query 'Reservations[*].Instances[*].InstanceId' \
  --output text
```

To get the target group ARNs, enter the following command:

```shell
TCP_TG_NAME=${CLUSTER_NAME}-tcp-tg
TCP_UDP_TG_NAME=${CLUSTER_NAME}-tcp-udp-tg

aws elbv2 describe-target-groups \
  --query 'TargetGroups[?TargetGroupName==`'${TCP_TG_NAME}'`].TargetGroupArn' \
  --output text
aws elbv2 describe-target-groups \
  --query 'TargetGroups[?TargetGroupName==`'${TCP_UDP_TG_NAME}'`].TargetGroupArn' \
  --output text
```

Register the instances with the target groups and load balancer using the instance IDs and ARNs you retrieved:

```shell
INSTANCE_IDS=( )
REGION=
TG_NAME=
TCP_TG_ARN=arn:aws:elasticloadbalancing:${REGION}:079.....:targetgroup/${TG_NAME}/...
TCP_UDP_TG_ARN=arn:aws:elasticloadbalancing:${REGION}:079.....:targetgroup/${TG_NAME}/...

aws elbv2 register-targets --target-group-arn ${TCP_TG_ARN} --targets ${INSTANCE_IDS}
aws elbv2 register-targets --target-group-arn ${TCP_UDP_TG_ARN} --targets ${INSTANCE_IDS}
```

## Create listeners in AWS

Next, create listeners on the load balancer and forward each one to the matching target group.

To get the load balancer's ARN, enter the following command:

```shell
LB_ARN=$(aws elbv2 describe-load-balancers --name ${CLUSTER_NAME}-nlb \
  --query 'LoadBalancers[0].LoadBalancerArn' \
  --output text)
```

Create a TCP listener on port 80 that forwards to the target group ${TCP_TG_ARN}:

```shell
aws elbv2 create-listener --load-balancer-arn ${LB_ARN} \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=${TCP_TG_ARN}
```

Create a TCP_UDP listener on port 443 that forwards to the target group ${TCP_UDP_TG_ARN}:

```shell
aws elbv2 create-listener --load-balancer-arn ${LB_ARN} \
  --protocol TCP_UDP --port 443 \
  --default-actions Type=forward,TargetGroupArn=${TCP_UDP_TG_ARN}
```
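Optionally, you can confirm that both listeners were created before moving on (this reuses the `LB_ARN` variable set above):

```shell
aws elbv2 describe-listeners --load-balancer-arn ${LB_ARN} \
  --query 'Listeners[*].[Protocol,Port]' \
  --output text
```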
## Update the security groups

Now you need to update your security groups to receive traffic. This command returns the security group that covers all node groups attached to the EKS cluster:

```shell
aws eks describe-cluster --name ${CLUSTER_NAME} | grep clusterSecurityGroupId
```

Now authorize the cluster security group to allow internet traffic:

```shell
CLUSTER_SG=

for x in ${CLUSTER_SG}; do \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol tcp --port 30080 --cidr 0.0.0.0/0; \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol tcp --port 30443 --cidr 0.0.0.0/0; \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol udp --port 30443 --cidr 0.0.0.0/0; \
done
```

## Get the DNS name for the load balancers

Enter the following command to get the DNS name for your load balancer and create a CNAME record at your domain provider:

```shell
aws elbv2 describe-load-balancers --name ${CLUSTER_NAME}-nlb \
  --query 'LoadBalancers[0].DNSName' \
  --output text
```

## Create Listener resources

Now you need to create the `Listener` resources for $productName$. The first `Listener` in the example below handles traffic for HTTP/1.1 and HTTP/2, while the second `Listener` handles all HTTP/3 traffic.

```shell
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: $productDeploymentName$-https-listener
  namespace: $productNamespace$
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: $productDeploymentName$-http3-listener
  namespace: $productNamespace$
spec:
  port: 8443
  protocol: HTTPS
  protocolStack:
    - TLS
    - HTTP
    - UDP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: my-domain-host
  namespace: $productNamespace$
spec:
  hostname: "your-hostname"
  acmeProvider:
    authority: none
  tlsSecret:
    name: tls-cert # The QUIC network protocol requires TLS with a valid certificate
  tls:
    min_tls_version: v1.3
    max_tls_version: v1.3
    alpn_protocols: h2,http/1.1
EOF
```

## Apply the quote service and a Mapping to test the HTTP/3 configuration

Finally, apply the quote service and a $productName$ `Mapping` for it, then send a request to it:

```shell
kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$version$/quickstart/qotm.yaml
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
EOF

curl -i https://your-hostname/backend/
```

Your domain now shows that it is being served with HTTP/3.

diff --git a/docs/emissary/latest/howtos/http3-gke.md b/docs/emissary/latest/howtos/http3-gke.md
new file mode 100644
index 000000000..677e89e35
--- /dev/null
+++ b/docs/emissary/latest/howtos/http3-gke.md

---
title: "$productName$ - HTTP/3 support for Google Kubernetes Engine (GKE)"
description: "How to configure HTTP/3 support for Google Kubernetes Engine (GKE). This guide shows how to set up the LoadBalancer service for GKE to support both TCP and UDP communications."
---

# Google Kubernetes Engine HTTP/3 configuration

This guide shows how to set up HTTP/3 support for Google Kubernetes Engine (GKE). The instructions provided in this page are a continuation of the [HTTP/3 in $productName$](../../topics/running/http3) documentation.

## Configuring an external load balancer for GKE

Currently, GKE only supports adding feature flags to `alpha` clusters, and doesn't support the creation of mixed protocol services of type `LoadBalancer`. To configure an external load balancer for GKE, you need to:

1. Reserve a public static IP address.
2. Create two `LoadBalancer` services, one for TCP and one for UDP.
3. Assign the public static IP address to the `loadBalancerIP` field of both services.

An example of the two load balancer services described above looks as follows:

```yaml
# selectors and labels removed for clarity
apiVersion: v1
kind: Service
metadata:
  name: $productDeploymentName$
  namespace: $productNamespace$
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here.
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: $productDeploymentName$-udp
  namespace: $productNamespace$
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here.
  ports:
  - name: http3
    port: 443 # Default support for HTTP/3 requires you to use 443 for the external client-facing port.
    targetPort: 8443
    protocol: UDP
```

In the above example, applying the configuration creates two `LoadBalancer` services in GKE, one for UDP and the other for TCP.

diff --git a/docs/emissary/latest/howtos/index.md b/docs/emissary/latest/howtos/index.md
new file mode 100644
index 000000000..f16cdd46e
--- /dev/null
+++ b/docs/emissary/latest/howtos/index.md

# "How-to" guides

These guides are designed to help users quickly accomplish common tasks. The guides assume a certain level of understanding of $productName$. Many of these guides are contributed by third parties; we welcome contributions via Pull Request at https://github.com/emissary-ingress/emissary.

* Integrating with Service Mesh. $productName$ natively integrates with many service meshes.
  * [HashiCorp Consul](consul)
  * [Istio](istio)
  * [Linkerd](linkerd2)
* Distributed tracing. $productName$ natively supports a number of distributed tracing systems to enable developers to visualize request flow in microservice and service-oriented architectures.
  * [Datadog](tracing-datadog)
  * [Zipkin](tracing-zipkin)
* Monitoring. $productName$ integrates with a number of different monitoring/metrics providers.
  * [Prometheus](prometheus)
* [Developing Custom Filters](filter-dev-guide)
* Frameworks and Protocols. $productName$ supports a wide range of protocols and cloud-native frameworks.
  * [gRPC](grpc)
  * [Knative Serverless Framework](knative)
  * [WebSockets](websockets)
* Security. $productName$ supports a number of strategies for securing Kubernetes services.
  * [Protecting the Diagnostics Interface](protecting-diag-access)
  * [HTTPS and TLS termination](tls-termination)
  * [Certificate Manager](cert-manager) can be used to automatically obtain and renew TLS certificates; $AESproductName$ natively integrates this functionality.
  * [Client Certificate Validation](client-cert-validation)
  * [Basic Authentication](basic-auth) is a tutorial on how to use the external authentication API to code your own authentication service.
  * [Basic Rate Limiting](rate-limiting-tutorial)

diff --git a/docs/emissary/latest/howtos/istio.md b/docs/emissary/latest/howtos/istio.md
new file mode 100644
index 000000000..e26571b73
--- /dev/null
+++ b/docs/emissary/latest/howtos/istio.md

import Alert from '@material-ui/lab/Alert';

# Istio integration

$productName$ and Istio: Edge Proxy and Service Mesh together in one. $productName$ is deployed at the edge of your network and routes incoming traffic to your internal services (aka "north-south" traffic). [Istio](https://istio.io/) is a service mesh for microservices, designed to add application-level Layer 7 (L7) observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and $productName$ are built using [Envoy](https://www.envoyproxy.io).

$productName$ and Istio can be deployed together on Kubernetes. In this configuration, $productName$ manages traditional edge functions such as authentication, TLS termination, and edge routing.
Istio mediates communication from $productName$ to services, and communication between services.

This allows the operator to have the best of both worlds: a high performance, modern edge service ($productName$) combined with a state-of-the-art service mesh (Istio). While Istio has introduced a [Gateway](https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/) abstraction, $productName$ still has a much broader feature set for edge routing than Istio. For more on this topic, see our blog post on [API Gateway vs Service Mesh](https://blog.getambassador.io/api-gateway-vs-service-mesh-104c01fa4784).

This guide explains how to take advantage of both $productName$ and Istio to have complete control and observability over how requests are made in your cluster:

- [Install Istio](#install-istio) and configure auto-injection
- [Install $productName$ with Istio integration](#install-edge)
- [Configure an mTLS `TLSContext`](#configure-an-mtls-tlscontext)
- [Route to services using mTLS](#route-to-services-using-mtls)

If desired, you may also

- [Enable strict mTLS](#enable-strict-mtls)
- [Configure Prometheus metrics collection](#configure-prometheus-metrics-collection)
- [Configure Istio distributed tracing](#configure-istio-distributed-tracing)

To follow this guide, you need:

- A Kubernetes cluster running version 1.15 or later
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- Istio version 1.10 or higher

## Install Istio

Start by [installing Istio](https://istio.io/docs/setup/getting-started/). Any supported installation method for Istio will work for use with $productName$.

### Configure Istio Auto-Injection

Istio functions by supplying a sidecar container running Envoy with every service in the mesh (including $productName$). The sidecar is what enforces Istio policies for traffic to and from the service, notably including mTLS encryption and certificate handling. As such, it is very important that the sidecar be correctly supplied for every service in the mesh!

While it is possible to manage sidecars by hand, it is far easier to allow Istio to automatically inject the sidecar as necessary. To do this, set the `istio-injection` label on each Kubernetes Namespace for which you want auto-injection:

```shell
kubectl label namespace $namespace istio-injection=enabled --overwrite
```

The following example uses the `istio-injection` label to arrange for auto-injection in the `$productNamespace$` Namespace below. You can manage sidecar injection by hand if you wish; what is critical is that every service that participates in the Istio mesh have the Istio sidecar.

## Install $productName$ with Istio Integration

Properly integrating $productName$ with Istio provides support for:

* [Mutual TLS (mTLS)](../../topics/running/tls/mtls/), with certificates managed by Istio, to allow end-to-end encryption for east-west traffic;
* Automatic generation of Prometheus metrics for services; and
* Istio distributed tracing for end-to-end observability.

The simplest way to enable everything is to install $productName$ using [Helm](https://helm.sh), though you can use manual installation with YAML if you wish.

### Installation with Helm (Recommended)

To install with Helm, write the following YAML to a file called `istio-integration.yaml`:

```yaml
# Listeners are required in $productName$ 2.0.
# This will create the two default Listeners for HTTP on port 8080 and HTTPS on port 8443.
createDefaultListeners: true

# These are annotations that will be added to the $productName$ pods.
podAnnotations:
  # These first two annotations tell Istio not to try to do port management for the
  # $productName$ pod itself. Though these annotations are placed on the $productName$
  # pods, they are interpreted by Istio.
  traffic.sidecar.istio.io/includeInboundPorts: ""      # do not intercept any inbound ports
  traffic.sidecar.istio.io/includeOutboundIPRanges: ""  # do not intercept any outbound traffic

  # We use proxy.istio.io/config to tell the Istio proxy to write newly-generated mTLS certificates
  # into /etc/istio-certs, which will be mounted below. Though this annotation is placed on the
  # $productName$ pods, it is interpreted by Istio.
  proxy.istio.io/config: |
    proxyMetadata:
      OUTPUT_CERTS: /etc/istio-certs

  # We use sidecar.istio.io/userVolumeMount to tell the Istio sidecars to mount the istio-certs
  # volume at /etc/istio-certs, allowing the sidecars to see the generated certificates. Though
  # this annotation is placed on the $productName$ pods, it is interpreted by Istio.
  sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-certs"}]'

# We define a single storage volume called "istio-certs". It starts out empty, and Istio
# uses it to communicate mTLS certs between the Istio proxy and the Istio sidecars (see the
# annotations above).
volumes:
  - emptyDir:
      medium: Memory
    name: istio-certs

# We also tell $productName$ to mount the "istio-certs" volume at /etc/istio-certs in the
# $productName$ pod. This gives $productName$ access to the mTLS certificates, too.
volumeMounts:
  - name: istio-certs
    mountPath: /etc/istio-certs/
    readOnly: true

# Finally, we need to set some environment variables for $productName$.
env:
  # AMBASSADOR_ISTIO_SECRET_DIR tells $productName$ to look for Istio mTLS certs, and to
  # make them available as a secret named "istio-certs".
  AMBASSADOR_ISTIO_SECRET_DIR: "/etc/istio-certs"

  # AMBASSADOR_ENVOY_BASE_ID is set to prevent collisions with the Istio sidecar's Envoy,
  # which runs with base-id 0.
  AMBASSADOR_ENVOY_BASE_ID: "1"
```

To install $productName$ with Helm, use these values to configure Istio integration:

1. Create the `$productNamespace$` Namespace:

   ```shell
   kubectl create namespace $productNamespace$
   ```

2. Enable Istio auto-injection for it:

   ```shell
   kubectl label namespace $productNamespace$ istio-injection=enabled --overwrite
   ```

3. Make sure the Helm repo is configured:

   ```bash
   helm repo add datawire https://app.getambassador.io
   helm repo update
   ```

4. Use Helm to install $productName$ in $productNamespace$:

   ```bash
   helm install $productHelmName$ --namespace $productNamespace$ -f istio-integration.yaml datawire/$productHelmName$ && \
   kubectl -n $productNamespace$ wait --for condition=available --timeout=90s deploy -lapp.kubernetes.io/instance=$productDeploymentName$
   ```

### Installation Using YAML

To install using YAML files, you need to manually incorporate the contents of the `istio-integration.yaml` file shown above into your deployment YAML:

* `podAnnotations` should be configured as Kubernetes `annotations` on the $productName$ Pods;
* `volumes`, `volumeMounts`, and `env` contents should be included in the $productDeploymentName$ Deployment; and
* you must also label the $productNamespace$ Namespace for auto-injection as described above.
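For example, here is a minimal sketch of where those pieces land in a Deployment manifest (the image tag and most other Deployment fields are illustrative or omitted; your real manifest will have more):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $productDeploymentName$
  namespace: $productNamespace$
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: $productDeploymentName$
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: $productDeploymentName$
      annotations:
        # ... all podAnnotations from istio-integration.yaml go here ...
        traffic.sidecar.istio.io/includeInboundPorts: ""
        traffic.sidecar.istio.io/includeOutboundIPRanges: ""
    spec:
      containers:
        - name: $productDeploymentName$
          image: docker.io/emissaryingress/emissary:$version$ # illustrative
          env:
            # env entries from istio-integration.yaml become name/value pairs
            - name: AMBASSADOR_ISTIO_SECRET_DIR
              value: "/etc/istio-certs"
            - name: AMBASSADOR_ENVOY_BASE_ID
              value: "1"
          volumeMounts:
            - name: istio-certs
              mountPath: /etc/istio-certs/
              readOnly: true
      volumes:
        - name: istio-certs
          emptyDir:
            medium: Memory
```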
### Configuring an Existing Installation

If you have already installed $productName$ and want to enable Istio:

1. Install Istio.
2. Label the $productNamespace$ namespace for Istio auto-injection, as above.
3. Edit the $productName$ Deployments to contain the `annotations`, `volumes`, `volumeMounts`, and `env` elements shown above.
   * If you installed with Helm, you can use `helm upgrade` with `-f istio-integration.yaml` to modify the installation for you.
4. Restart the $productName$ pods.

## Configure an mTLS `TLSContext`

After configuring $productName$ for Istio integration, the Istio mTLS certificates are available within $productName$:

- Both the `istio-proxy` sidecar and $productName$ mount the `istio-certs` volume at `/etc/istio-certs`.
- The `istio-proxy` sidecar saves the mTLS certificates into `/etc/istio-certs` (per the `OUTPUT_CERTS` environment variable).
- $productName$ reads the mTLS certificates from `/etc/istio-certs` (per the `AMBASSADOR_ISTIO_SECRET_DIR` environment variable) and creates a Secret named `istio-certs`.

At present, the Secret name `istio-certs` cannot be changed.

To make use of the `istio-certs` Secret, create a `TLSContext` referencing it:

```shell
$ kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: istio-upstream
spec:
  secret: istio-certs
EOF
```

## Route to services using mTLS

With the `TLSContext` in place, a `Mapping` can route to a service over mTLS by referencing it in its `tls` element:

```shell
$ kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote:80
  tls: istio-upstream
EOF
```

You must either explicitly specify port 80 in your Mapping's service element, or set up the Kubernetes Service resource for your upstream service to map port 443. If you don't do one of these, connections to your upstream will hang — see the "Configure Service Ports" section below for more information.

The behavior of your service will not seem to change, even though mTLS is active:

```shell
$ curl -k https://{{AMBASSADOR_HOST}}/backend/
{
  "server": "bewitched-acai-5jq7q81r",
  "quote": "A late night does not make any sense.",
  "time": "2020-06-02T10:48:45.211178139Z"
}
```

This request first went to $productName$, which routed it over an mTLS connection to the quote service in the default namespace. That connection was intercepted by the `istio-proxy`, which authenticated the request as being from $productName$, exported various metrics, and finally forwarded it on to the actual quote service.

### Configure Service Ports

When mTLS is active, Istio makes TLS connections to your services. Since Istio handles the TLS protocol for you, you don't need to modify your services — however, the TLS connection will still use port 443 if you don't configure your `Mapping`s to _explicitly_ use port 80.

If your upstream service was not written to use TLS, its `Service` resource may only map port 80. If Istio attempts a TLS connection on port 443 when port 443 is not defined by the `Service` resource, the connection will hang _even though the Istio sidecar is active_, because Kubernetes itself doesn't know how to handle the connection to port 443.

As shown above, one simple way to deal with this situation is to explicitly specify port 80 in the `Mapping`'s `service`:

```yaml
service: quote:80 # Be explicit about port 80.
```

Another way is to set up your Kubernetes `Service` to map both port 80 and port 443.
For example, the Quote service (which listens on port 8080 in its pod) might use a `Service` like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quote
spec:
  type: ClusterIP
  selector:
    app: quote
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
```

Note that ports 80 and 443 are both mapped to `targetPort` 8080, where the service is actually listening. This permits Istio routing to work whether mTLS is active or not.

## Enable Strict mTLS

Istio defaults to _permissive_ mTLS, where mTLS is allowed between services, but not required. Configuring [_strict_ mTLS](https://istio.io/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode) requires that all connections within the cluster be encrypted. To switch Istio to use strict mTLS, apply a `PeerAuthentication` resource in each namespace that should operate in strict mode:

```shell
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: $namespace
spec:
  mtls:
    mode: STRICT
EOF
```

diff --git a/docs/emissary/latest/howtos/rate-limiting-tutorial.md b/docs/emissary/latest/howtos/rate-limiting-tutorial.md
new file mode 100644
--- /dev/null
+++ b/docs/emissary/latest/howtos/rate-limiting-tutorial.md

import Alert from '@material-ui/lab/Alert';

# Basic rate limiting

This guide applies to $OSSproductName$. It will not work correctly on $AESproductName$.

$productName$ can validate incoming requests before routing them to a backing service. In this tutorial, we'll configure $productName$ to use a simple third party rate limit service. (If you don't want to implement your own rate limiting service, $AESproductName$ integrates a [powerful, flexible rate limiting service](/docs/edge-stack/latest/topics/using/rate-limits/rate-limits/).)

## Before you get started

This tutorial assumes you have already followed the $productName$ [Installation](../../topics/install/) and [Quickstart Tutorial](../../tutorials/quickstart-demo) guides. If you haven't done that already, you should do so now.

Once completed, you'll have a Kubernetes cluster running $productName$ and the Quote service. Let's walk through adding rate limiting to this setup.

## 1. Deploy the rate limit service

$productName$ delegates the actual rate limit logic to a third party service. We've written a [simple rate limit service](https://github.com/emissary-ingress/ratelimit-example) that:

- listens for requests on port 5000;
- handles gRPC `ShouldRateLimit` requests;
- allows requests with the `x-emissary-test-allow: "true"` header; and
- marks all other requests as `OVER_LIMIT`.

(A sketch of this decision logic appears at the end of this page.)

Here's the YAML we'll start with:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimitService
metadata:
  name: ratelimit
  namespace: default
spec:
  service: "ratelimit-example.default:5000"
  protocol_version: v3
  domain: emissary
  failure_mode_deny: true
---
apiVersion: v1
kind: Service
metadata:
  name: ratelimit-example
spec:
  selector:
    app: ratelimit-example
  ports:
  - name: http
    port: 5000
    targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratelimit-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratelimit-example
  template:
    metadata:
      labels:
        app: ratelimit-example
    spec:
      containers:
      - name: ratelimit-example
        image: docker.io/emissaryingress/ratelimit-example:v3
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 5000
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
```

Once this configuration is applied, Kubernetes will start up the example rate limit service, and $productName$ will be configured to use the rate limit service.
The `RateLimitService` configuration tells $productName$ to:

- Send `ShouldRateLimit` check requests to `ratelimit-example.default:5000`
- Configure Envoy to talk with the example rate limit service using transport protocol `v3` (*the only supported version*)
- Set the label `domain` to `emissary` (*labels are discussed below*)

If $productName$ cannot contact the rate limit service, it can either fail open or closed. The default is to fail open, but in the example `RateLimitService` above we toggled it via the `failure_mode_deny: true` setting.

## 2. Configure $productName$ Mappings

$productName$ only validates requests on `Mapping`s which set labels to use for rate limiting, so you'll need to apply `labels` to your `Mapping`s to enable rate limiting. For more information on the labelling process, see the [Rate Limits configuration documentation](../../topics/using/rate-limits/).

These labels require `Mapping` resources with `apiVersion` `getambassador.io/v2` or newer — if you're updating an old installation, check the `apiVersion`!

Labels are added to a `Mapping` using the `labels` field and the `domain` configured in the `RateLimitService`. For example:

```yaml
labels:
  emissary:
  - request_label_group:
    - x-emissary-test-allow:
        request_headers:
          key: "x-emissary-test-allow"
          header_name: "x-emissary-test-allow"
```

If we were to apply it to the `Mapping` definition for the `quote-backend` service outlined in the quick start, it would look like this:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  labels:
    emissary:
    - request_label_group:
      - x-emissary-test-allow:
          request_headers:
            key: "x-emissary-test-allow"
            header_name: "x-emissary-test-allow"
```

Note that the `key` could be anything you like, but our example rate limit service expects it to match the name of the header. Also note that since our `RateLimitService` expects to use labels in the `emissary` domain, our `Mapping` must match.

## 3. Test rate limiting

If we `curl` to a rate-limited URL:

```shell
curl -i -H "x-emissary-test-allow: probably" http://$LB_ENDPOINT/backend/
```

We get a `429` status code, since we are being rate limited:

```shell
HTTP/1.1 429 Too Many Requests
content-type: text/html; charset=utf-8
content-length: 0
```

If we set the correct header value in the request, we get a quote successfully:

```shell
$ curl -v -H "x-emissary-test-allow: true" http://$LB_ENDPOINT/backend/

* TCP_NODELAY set
* Connected to 35.196.173.175 (35.196.173.175) port 80 (#0)
> GET /backend/ HTTP/1.1
> Host: 35.196.173.175
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< date: Thu, 23 May 2019 15:25:06 GMT
< content-length: 172
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
  "server": "humble-blueberry-o2v493st",
  "quote": "Nihilism gambles with lives, happiness, and even destiny itself!",
  "time": "2019-05-23T15:25:06.544417902Z"
* Connection #0 to host 54.165.128.189 left intact
}
```

## More

For more details about configuring the external rate limit service, read the [rate limit documentation](../../topics/using/rate-limits/).
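As mentioned in step 1, here is a minimal sketch of the decision logic such a rate limit service implements. This is not the actual source of the `ratelimit-example` image; it is an illustration that assumes the Envoy rate limit v3 stubs from a recent `github.com/envoyproxy/go-control-plane`:

```go
package main

import (
	"context"
	"log"
	"net"

	pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v3"
	"google.golang.org/grpc"
)

type server struct {
	pb.UnimplementedRateLimitServiceServer
}

// ShouldRateLimit returns OK only when a descriptor entry carries
// x-emissary-test-allow == "true"; every other request is OVER_LIMIT.
func (s *server) ShouldRateLimit(ctx context.Context, req *pb.RateLimitRequest) (*pb.RateLimitResponse, error) {
	code := pb.RateLimitResponse_OVER_LIMIT
	for _, d := range req.GetDescriptors() {
		for _, e := range d.GetEntries() {
			if e.GetKey() == "x-emissary-test-allow" && e.GetValue() == "true" {
				code = pb.RateLimitResponse_OK
			}
		}
	}
	return &pb.RateLimitResponse{OverallCode: code}, nil
}

func main() {
	// Listen on the same port as the example Deployment above.
	lis, err := net.Listen("tcp", ":5000")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterRateLimitServiceServer(s, &server{})
	log.Fatal(s.Serve(lis))
}
```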
diff --git a/docs/emissary/latest/howtos/route.md b/docs/emissary/latest/howtos/route.md new file mode 100644 index 000000000..6a399ec56 --- /dev/null +++ b/docs/emissary/latest/howtos/route.md @@ -0,0 +1,241 @@ +--- +description: "$productName$ uses the Mapping resource to map a resource, like a URL prefix, to a Kubernetes service or web service." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Get traffic from the edge + +
Contents

* [Examples](#examples)
* [Applying a Mapping Resource](#applying-a-mapping-resource)
* [Resources](#resources)
* [Services](#services)
* [Extending Mappings](#extending-mappings)
* [Best Practices](#best-practices)
* [What's next?](#whats-next)
The core $productName$ resource used to manage cluster ingress is the `Mapping` resource.

**A `Mapping` resource routes a URL path (or prefix) to a service (either a Kubernetes service or other web service).**

Remember that `Listener` and `Host` resources are required for a functioning $productName$ installation that can route traffic! Learn more about [Listener](../../topics/running/listener). Learn more about [Host](../../topics/running/host-crd).
## Examples

This `Mapping` would route requests to `https://<your hostname>/webapp/` to the `webapp-svc` Service. **This is not a complete example on its own; see below.**

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: webapp-mapping
spec:
  prefix: /webapp/
  service: webapp-svc
```

| Name | Type | Description |
| :--- | :--- | :--- |
| `metadata.name` | String | Identifies the Mapping. |
| `spec.prefix` | String | The URL prefix identifying your resource. [See below](#resources) on how $productName$ handles resources. |
| `spec.service` | String | The service handling the resource. If a Kubernetes service, it must include the namespace (in the format `service.namespace`) if the service is in a different namespace than $productName$. [See below](#services) on service name formatting. |

Here's another example using a web service, mapping requests for `/httpbin/` to `http://httpbin.org` (again, **this is not a complete example on its own; see below**):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: httpbin-mapping
spec:
  prefix: /httpbin/
  service: http://httpbin.org
  hostname: '*'
```

### Complete example configuration

For demonstration purposes, here's a possible way of combining a `Listener`, a `Host`, and both `Mapping`s above that is complete and functional:

- it will accept HTTP or HTTPS on port 8443;
- $productName$ is terminating TLS;
- HTTPS to `foo.example.com` will be routed as above;
- HTTP to `foo.example.com` will be redirected to HTTPS;
- HTTP or HTTPS to other hostnames will be rejected; and
- the associations between the `Listener`, the `Host`, and the `Mapping`s use Kubernetes `label`s.

```yaml
---
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: foo-example-secret
data:
  tls.crt: -certificate PEM-
  tls.key: -secret key PEM-
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: listener-8443
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    selector:
      matchLabels:
        exampleName: basic-https
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: foo-host
  labels:
    exampleName: basic-https
spec:
  hostname: "foo.example.com"
  tlsSecret:
    name: foo-example-secret
  selector:
    matchLabels:
      exampleName: basic-https
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: webapp-mapping
  labels:
    exampleName: basic-https
spec:
  prefix: /webapp/
  service: webapp-svc
  hostname: 'foo.example.com'
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: httpbin-mapping
  labels:
    exampleName: basic-https
spec:
  prefix: /httpbin/
  service: http://httpbin.org
  hostname: 'foo.example.com'
```

Note the addition of `label`s and `selector`s to explicitly specify which resources should associate in this example.

Learn more about [Listener](../../topics/running/listener). Learn more about [Host](../../topics/running/host-crd).
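To try the complete example, save it to a file (here called `basic-https.yaml`, an arbitrary name) and apply it; you can then list the resources to confirm that everything was created:

```
kubectl apply -f basic-https.yaml
kubectl get listeners,hosts,mappings
```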
## Applying a Mapping resource

A `Mapping` resource can be managed using the same workflow as any other Kubernetes resource (like a Service or Deployment). For example, if the above `Mapping` is saved into a file called `httpbin-mapping.yaml`, the following command will apply the configuration directly to $productName$:

```
kubectl apply -f httpbin-mapping.yaml
```

For production use, best practice is to store the file in a version control system and apply the changes with a continuous deployment pipeline. The Ambassador Operating Model provides more detail.

## Resources

To $productName$, a resource is a group of one or more URLs that all share a common prefix in the URL path. For example, these URLs all share the `/resource1/` path prefix, so `/resource1/` can be considered a single resource:

* `https://ambassador.example.com/resource1/foo`
* `https://ambassador.example.com/resource1/bar`
* `https://ambassador.example.com/resource1/baz/zing`

On the other hand, these URLs share only the prefix `/` -- you _could_ tell $productName$ to treat them as a single resource, but it's probably not terribly useful.

* `https://ambassador.example.com/resource1/foo`
* `https://ambassador.example.com/resource2/bar`
* `https://ambassador.example.com/resource3/baz/zing`

Note that the length of the prefix doesn't matter; a prefix like `/v1/this/is/my/very/long/resource/name/` is valid.

Also note that $productName$ does not actually require the prefix to start and end with `/` -- however, in practice, it's a good idea. Specifying a prefix of `/man` would match all of the following, which probably is not what was intended:

* `https://ambassador.example.com/man/foo`
* `https://ambassador.example.com/mankind`
* `https://ambassador.example.com/man-it-is/really-hot-today`

## Services

$productName$ routes traffic to a service. A service is defined as `[scheme://]service[.namespace][:port]`. Everything except the service name is optional.

- `scheme` can be either `http` or `https`; if not present, the default is `http`.
- `service` is the name of a service (typically the service name in Kubernetes or Consul); it is not allowed to contain the `.` character.
- `namespace` is the namespace in which the service is running. Starting with $productName$ 1.0.0, if not supplied, it defaults to the namespace in which the Mapping resource is defined. The default behavior can be configured using the [Module resource](../../topics/running/ambassador). When using a Consul resolver, `namespace` is not allowed.
- `port` is the port to which a request should be sent. If not specified, it defaults to `80` when the scheme is `http` or `443` when the scheme is `https`. Note that the [resolver](../../topics/running/resolvers) may return a port, in which case the `port` setting is ignored.

While using `service.namespace.svc.cluster.local` may work for Kubernetes resolvers, the preferred syntax is `service.namespace`.
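Putting these rules together, all of the following are valid `service` values (the service names are illustrative):

```yaml
service: quote               # quote, port 80, in the Mapping's own namespace
service: quote.default       # quote, port 80, in the default namespace
service: quote:5000          # quote, port 5000
service: https://quote:443   # TLS origination to quote on port 443
```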
## Extending Mappings

`Mapping` resources support a rich set of attributes to customize the specific routing behavior. Here's an example service for implementing the [CQRS pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs) (using HTTP):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: cqrs-get
spec:
  prefix: /cqrs/
  method: GET
  service: getcqrs
  hostname: '*'
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: cqrs-put
spec:
  prefix: /cqrs/
  method: PUT
  service: putcqrs
  hostname: '*'
```

## Best Practices

$productName$'s configuration is assembled from multiple YAML blocks which are managed by independent application teams. This implies that certain best practices should be followed.

#### $productName$'s configuration should be under version control.

While you can always read back $productName$'s configuration from Kubernetes or its diagnostic service, $productName$ will not do versioning for you.

#### $productName$ tries not to start with a broken configuration, but it's not perfect.

Gross errors will result in $productName$ refusing to start, in which case `kubectl logs` will be helpful. However, it's always possible to map a resource to the wrong service, or use the wrong `rewrite` rules. $productName$ can't detect that on its own, although its [diagnostic service](../../topics/running/diagnostics/) can help you figure it out.

#### Be careful of mapping collisions.

If two different developers try to map `/myservice/` to something, this can lead to unexpected behavior. $productName$'s [canary deployment](../../topics/using/canary/) logic means that it's more likely that traffic will be split between them than that it will throw an error -- again, the diagnostic service can help you here.

#### Unless specified, mapping attributes cannot be applied to any other resource type.

## What's next?

There are many options for [advanced mapping configurations](../../topics/using/mappings), with features like [automatic retries](../../topics/using/retries/), [timeouts](../../topics/using/timeouts/), [rate limiting](../../topics/using/rate-limits/), [redirects](../../topics/using/redirects/), and more.

diff --git a/docs/emissary/latest/howtos/tls-termination.md b/docs/emissary/latest/howtos/tls-termination.md
new file mode 100644
index 000000000..2bbdf4c40
--- /dev/null
+++ b/docs/emissary/latest/howtos/tls-termination.md

# TLS termination and enabling HTTPS

TLS encryption is one of the basic requirements of having a secure system. $AESproductName$ [automatically enables TLS termination/HTTPS](../../topics/running/host-crd#tls-settings), making TLS encryption easy and centralizing TLS termination for all of your services in Kubernetes.

While this automatic certificate management in $AESproductName$ helps simplify TLS configuration in your cluster, the open-source $OSSproductName$ still requires you to provide your own certificate to enable TLS.

The following will walk you through the process of enabling TLS with a self-signed certificate created with the `openssl` utility.

**Note:** these instructions also work if you would like to provide your own certificate to $AESproductName$.

## Prerequisites

This guide requires you have the following installed:

- A Kubernetes cluster v1.11 or newer
- The Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [openssl](https://www.openssl.org/source/)

## Install $productName$

[Install $productName$ in Kubernetes](../../topics/install).
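If you haven't installed it yet, a typical Helm installation looks like the following sketch (the chart repo and names match the Helm steps shown in the Istio integration guide earlier in this document; see the linked install guide for the authoritative steps and options):

```
helm repo add datawire https://app.getambassador.io
helm repo update
kubectl create namespace $productNamespace$
helm install $productHelmName$ --namespace $productNamespace$ datawire/$productHelmName$
```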
## Create a listener listening on the correct port and protocol

We first need to create a `Listener` to tell $productName$ which port will be using the HTTPS protocol:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8443
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
```

## Create a self-signed certificate

OpenSSL is a tool that allows us to create self-signed certificates for opening a TLS encrypted connection. The `openssl` command below will create a certificate and private key pair that $productName$ can use for TLS termination.

- Create a private key and certificate.

  ```
  openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -subj '/CN=ambassador-cert' -nodes
  ```

  The above command will create a certificate and private key with the common name `ambassador-cert`. Since this certificate is self-signed and only used for testing, the other information requested can be left blank.

- Verify the `key.pem` and `cert.pem` files were created.

  ```
  ls *.pem
  cert.pem key.pem
  ```

## Store the certificate and key in a Kubernetes Secret

$productName$ dynamically loads TLS certificates by reading them from Kubernetes secrets. Use `kubectl` to create a `tls` secret to hold the pem files we created above.

```
kubectl create secret tls tls-cert --cert=cert.pem --key=key.pem
```

## Tell $productName$ to use this secret for TLS termination

Now that we have stored our certificate and private key in a Kubernetes secret named `tls-cert`, we need to tell $productName$ to use this certificate for terminating TLS on a domain. A `Host` is used to tell $productName$ which certificate to use for TLS termination on a domain.

Create the following `Host` to have $productName$ use the `Secret` we created above for terminating TLS on all domains.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: wildcard-host
spec:
  hostname: "*"
  acmeProvider:
    authority: none
  tlsSecret:
    name: tls-cert
```

**Note:** If running multiple instances of $productName$ in one cluster remember to include the `ambassador_id` property in the `spec`, e.g.:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: wildcard-host
spec:
  ambassador_id: [ "my_id" ]
  ...
```

Save the `Host` configured above to a file called `wildcard-host.yaml` and apply it with `kubectl`:

```
kubectl apply -f wildcard-host.yaml
```

$productName$ is now configured to listen for TLS traffic on port `8443` and terminate TLS using the self-signed certificate we created.

## Send a request over HTTPS

We can now send encrypted traffic over HTTPS.

First, make sure the $productName$ service is listening on `443` and forwarding to port `8443`. Verify this with `kubectl`:

```
kubectl get service ambassador -o yaml

apiVersion: v1
kind: Service
...
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
...
```

If the output of the `kubectl` command is not similar to the example above, edit the $productName$ service to add the `https` port.
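For example, a patch along these lines would add the missing `https` port (a sketch, assuming your service is named `ambassador` as above):

```
kubectl patch service ambassador --type json -p '[
  {"op": "add", "path": "/spec/ports/-",
   "value": {"name": "https", "port": 443, "targetPort": 8443, "protocol": "TCP"}}
]'
```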
After verifying $productName$ is listening on port 443, send a request to your backend service with curl:

```
curl -Lk https://{{AMBASSADOR_IP}}/backend/

{
  "server": "trim-kumquat-fccjxh8x",
  "quote": "Abstraction is ever present.",
  "time": "2019-07-24T16:36:56.7983516Z"
}
```

**Note:** Since we are using a self-signed certificate, you must set the `-k` flag in curl to disable certificate verification.

## Next steps

This guide walked you through how to enable basic TLS termination in $productName$ using a self-signed certificate for simplicity.

### Get a valid certificate from a certificate authority

While a self-signed certificate is a simple and quick way to get $productName$ to terminate TLS, it should not be used by production systems. In order to serve HTTPS traffic without a security warning being returned, you will need to get a certificate from an official Certificate Authority like Let's Encrypt.

Jetstack's `cert-manager` provides a simple way to manage certificates from Let's Encrypt. See our documentation for more information on how to [use `cert-manager` with $productName$](../cert-manager).

### Enable advanced TLS options

$productName$ exposes configuration for many more advanced options around TLS termination, origination, client certificate validation, and SNI support. See the full [TLS reference](../../topics/running/tls) for more information.

diff --git a/docs/emissary/latest/howtos/tracing-datadog.md b/docs/emissary/latest/howtos/tracing-datadog.md
new file mode 100644
index 000000000..d627e29f2
--- /dev/null
+++ b/docs/emissary/latest/howtos/tracing-datadog.md

# Distributed Tracing with Datadog

In this tutorial, we'll configure $productName$ to initiate a trace on some sample requests, and use Datadog APM to visualize them.

## Before you get started

This tutorial assumes you have already followed the $productName$ [Getting Started](../../tutorials/getting-started) guide. If you haven't done that already, you should do that now.

After completing the Getting Started guide you will have a Kubernetes cluster running $productName$ and the Quote service. Let's walk through adding tracing to this setup.

## 1. Configure the Datadog agent

You will need to configure the Datadog agent so that it uses a host port and accepts non-local APM traffic. You can follow the Datadog [documentation](https://docs.datadoghq.com/agent/kubernetes/apm/?tab=daemonset) on how to do this.

## 2. Configure Envoy JSON logging

Datadog APM can [correlate traces with logs](https://docs.datadoghq.com/tracing/advanced/connect_logs_and_traces/) if you propagate the current span and trace IDs with your logs.

When using JSON logging with Envoy, $productName$ will automatically append the `dd.trace_id` and `dd.span_id` properties to all logs so that correlation works:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    envoy_log_type: json
```

## 3. Configure the TracingService

Next, configure a `TracingService` that will write your traces using the Datadog tracing driver. Because you want to write traces to your host-local Datadog agent, you can use the `${HOST_IP}` interpolation to get the host IP address from the $productName$ container's environment.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
spec:
  service: "${HOST_IP}:8126"
  driver: datadog
  config:
    service_name: test
```
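As with the Zipkin and Lightstep integrations elsewhere in these guides, $productName$ needs to be restarted after the `TracingService` is deployed for the tracing configuration to take effect. Assuming $productName$ is installed in the `ambassador` namespace:

```
kubectl -n ambassador rollout restart deploy
```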
## 4. Generate some requests

Use `curl` to generate a few requests to an existing $productName$ mapping. You may need to perform many requests, since only a subset of random requests are sampled and instrumented with traces.

```
$ curl -L $AMBASSADOR_IP/httpbin/ip
```

## 5. Test traces

Once you have made some requests, you should be able to [view your traces](https://app.datadoghq.com/apm/traces) within a few minutes in the Datadog UI. For more information on Datadog APM's features and benefits, see the [documentation](https://docs.datadoghq.com/tracing/).

## More

For more details about configuring the external tracing service, read the documentation on [external tracing](../../topics/running/services/tracing-service).

diff --git a/docs/emissary/latest/howtos/tracing-lightstep.md b/docs/emissary/latest/howtos/tracing-lightstep.md
new file mode 100644
index 000000000..30353e71d
--- /dev/null
+++ b/docs/emissary/latest/howtos/tracing-lightstep.md

# Distributed Tracing with OpenTelemetry and Lightstep

In this tutorial, we'll configure [$productName$](https://www.getambassador.io/products/edge-stack/api-gateway) to initiate a trace on some sample requests, collect them with the OpenTelemetry Collector, and use Lightstep to visualize them.

Please note that the TracingService no longer supports the native Envoy Lightstep tracing driver as of $productName$ version 3.4.0. If you are currently using the native Lightstep tracing driver, please refer to the bottom of the page on how to migrate.

## Before you get started

This tutorial assumes you have already followed the $productName$ [Getting Started](../../tutorials/getting-started) guide. If you haven't done that already, you should do that now.

After completing the Getting Started guide you will have a Kubernetes cluster running $productName$ and the Quote service. Let's walk through adding tracing to this setup.

## 1. Set up Lightstep

If you don't already have a Lightstep account, be sure to create one [here](https://lightstep.com/). Then create a Project, and be sure to create and save the Access Token information. You can find your Access Token information under the Project settings.

## 2. Deploy the OpenTelemetry Collector

The next step is to deploy the OpenTelemetry Collector. The purpose of the OpenTelemetry Collector is to receive the requested trace data and then export it to Lightstep.

For the purposes of this tutorial, we are going to create and use the `monitoring` namespace. This can be done with the following command.

```bash
kubectl create namespace monitoring
```

Next we are going to set up our configuration for the OpenTelemetry Collector. First, we use a Kubernetes secret to store our Lightstep Access Token that we saved in step one. It is important for us to encode the secret in Base64. How you want to do this securely is up to you; for the purposes of this tutorial we will use the online tool [Base64Encode.org](https://www.base64encode.org/). Once the secret is encoded, please apply the following YAML and be sure to update the value of the `lightstep_access_token` with your encoded token.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: otel
  namespace: monitoring
type: Opaque
data:
  lightstep_access_token: YOUR_BASE64_ENCODED_TOKEN_HERE
```

Next, please add the following YAML to a file named `opentelemetry.yaml`. This configuration will create three resources:
a ConfigMap that will store our configuration options, an OpenTelemetry Deployment that uses the [OpenTelemetry Collector Contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib) container image, and an associated Service for our distributed tracing.

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: monitoring
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      zipkin: {}
    processors:
      batch:
      memory_limiter:
        # Same as --mem-ballast-size-mib CLI argument
        ballast_size_mib: 683
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
      queued_retry:
    extensions:
      health_check: {}
      zpages: {}
    exporters:
      otlp:
        endpoint: ingest.lightstep.com:443
        headers: {"lightstep-access-token":"${LIGHTSTEP_ACCESS_TOKEN}"}
    service:
      extensions: [health_check, zpages]
      pipelines:
        traces:
          receivers: [zipkin]
          processors: [memory_limiter, batch, queued_retry]
          exporters:
            - otlp
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: monitoring
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp # Default endpoint for OpenTelemetry receiver.
    port: 55680
  - name: zipkin # Default endpoint for Zipkin trace receiver.
    port: 9411
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: monitoring
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - command:
            - "/otelcontribcol"
            - "--config=/conf/otel-collector-config.yaml"
            - "--mem-ballast-size-mib=683" # Memory Ballast size should be max 1/3 to 1/2 of memory.
          image: otel/opentelemetry-collector-contrib:0.11.0
          name: otel-collector
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 400Mi
          ports:
            - containerPort: 55680 # Default endpoint for OpenTelemetry receiver.
            - containerPort: 9411 # Default endpoint for Zipkin receiver.
          env:
            - name: LIGHTSTEP_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: otel
                  key: lightstep_access_token
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
          livenessProbe:
            httpGet:
              path: /
              port: 13133
          readinessProbe:
            httpGet:
              path: /
              port: 13133
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
```

Be sure to apply this configuration with the following command:

```bash
kubectl apply -f opentelemetry.yaml
```

At this point, the OpenTelemetry Collector should be set up properly and ready to send data to Lightstep.
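Before wiring $productName$ to the collector, you can quickly check that the collector pod is running (the label selector matches the `component` label from the Deployment above):

```bash
kubectl -n monitoring get pods -l component=otel-collector
```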
## 3. Configure the TracingService

Now that the OpenTelemetry Collector is set up for collecting data, the next step will be for us to set up our [TracingService](../../topics/running/services/tracing-service). We will be using the Zipkin driver to send our request trace data to the OpenTelemetry Collector for distributed tracing. Please apply the following YAML.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing-zipkin
  namespace: ambassador
spec:
  service: otel-collector.monitoring:9411
  driver: zipkin
```

As a final step, we want to restart $productName$, as this is necessary to add the distributed tracing headers. This command will restart all the Pods (assuming $productName$ is installed in the ambassador namespace):

```bash
kubectl -n ambassador rollout restart deploy
```

Restarting $productName$ is required after deploying a Tracing Service for changes to take effect.

## 4. Testing our Distributed Tracing

Finally, we are going to test our distributed tracing. Use `curl` to generate a few requests to an existing $productName$ `Mapping`. You may need to perform many requests, since only a subset of random requests are sampled and instrumented with traces.

```bash
curl -Li http://$LB_ENDPOINT/backend/
```

At this point, we should be able to view and check our traces on the [Lightstep app](https://app.lightstep.com/). You can do so by clicking on the Explorer tab and searching for a trace.

## Migrating from the Lightstep Tracing Driver

Please be sure to follow these steps prior to upgrading to $productName$ version 3.4.0.

As of $productName$ version 3.4.0, the Lightstep tracing driver is no longer supported. This is due to the upgrade to Envoy version 1.24, where the team at Lightstep has completely removed support for the Lightstep tracing driver in favor of using the OpenTelemetry Collector. In order to continue to use Lightstep to visualize our traces, we can follow similar steps to the above tutorial.

First, make sure that the OpenTelemetry Collector is installed. This can be done by following the same commands as step 2 of this page. Please be sure to create/update the Kubernetes secret to include your Lightstep Access Token.

Then, we simply need to edit our TracingService to point to the OpenTelemetry Collector (instead of the ingest endpoint of Lightstep) and to use the Zipkin driver. Please note that $productName$ can only support one `TracingService` per instance. Because of this, we must edit our previous TracingService rather than applying a second one.

If you were using the Lightstep tracing driver, you may have your Lightstep Access Token information set in your TracingService config. Using a Kubernetes Secret, we no longer need to reference the token here.

Once our TracingService configuration has been updated, a restart of $productName$ is necessary for Lightstep to receive our distributed tracing information. This can be done with the following command:

```bash
kubectl -n ambassador rollout restart deploy
```

diff --git a/docs/emissary/latest/howtos/tracing-zipkin.md b/docs/emissary/latest/howtos/tracing-zipkin.md
new file mode 100644
index 000000000..37ddc9026
--- /dev/null
+++ b/docs/emissary/latest/howtos/tracing-zipkin.md

import Alert from '@material-ui/lab/Alert';

# Distributed tracing with Zipkin

In this tutorial, we'll configure $productName$ to initiate a trace on some sample requests, and use Zipkin to visualize them.

## Before you get started

This tutorial assumes you have already followed the $productName$ [Getting Started](../../tutorials/getting-started) guide. If you haven't done that already, you should do that now.

After completing the Getting Started guide you will have a Kubernetes cluster running $productName$ and the Quote service.
Let's walk through adding tracing to this setup.

## 1. Deploy Zipkin

In this tutorial, you will use a simple deployment of the open-source [Zipkin](https://github.com/openzipkin/zipkin/wiki) distributed tracing system to store and visualize $productName$-generated traces. The trace data will be stored in memory within the Zipkin container, and you will be able to explore the traces via the Zipkin web UI.

First, add the following YAML to a file named `zipkin.yaml`. This configuration will create a Zipkin Deployment that uses the [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) container image and also an associated Service. We will also include a `TracingService` that configures $productName$ to use the Zipkin service (running on the default port of 9411) to provide tracing support.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
spec:
  service: "zipkin:9411"
  driver: zipkin
  config: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin
        env:
        # note: in-memory storage holds all data in memory, purging older data upon a span limit.
        # you should use a proper storage in production environments
        - name: STORAGE_TYPE
          value: mem
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin
  name: zipkin
spec:
  ports:
  - port: 9411
    targetPort: 9411
  selector:
    app: zipkin
```

Next, deploy this configuration into your cluster:

```
$ kubectl apply -f zipkin.yaml
```

As a final step, we want to restart $productName$, as this is necessary to add the tracing header. This command will restart all the Pods (assuming $productName$ is installed in the ambassador namespace):

```
$ kubectl -n ambassador rollout restart deploy
```

Restarting $productName$ is required after deploying a Tracing Service for changes to take effect.

## 2. Generate some requests

Use `curl` to generate a few requests to an existing $productName$ `Mapping`. You may need to perform many requests, since only a subset of random requests are sampled and instrumented with traces.

```
$ curl -L $AMBASSADOR_IP/backend/
```

## 3. Test traces

To test things out, we'll need to access the Zipkin UI. If you're on Kubernetes, get the name of the Zipkin pod:

```
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
ambassador-5ffcfc798-c25dc    2/2       Running   0          1d
prometheus-prometheus-0       2/2       Running   0          113d
zipkin-868b97667c-58v4r       1/1       Running   0          2h
```

And then use `kubectl port-forward` to access the pod:

```
$ kubectl port-forward zipkin-868b97667c-58v4r 9411
```

Open your web browser to `http://localhost:9411` for the Zipkin UI.

If you're on `minikube`, you can access the `NodePort` directly; the port number can be obtained via the `minikube service list` command. If you are using Docker for Mac/Windows, you can use the `kubectl get svc` command to get the same information.
+

```
$ minikube service list
|-------------|----------------------|-----------------------------|
| NAMESPACE   | NAME                 | URL                         |
|-------------|----------------------|-----------------------------|
| default     | ambassador-admin     | http://192.168.99.107:30319 |
| default     | ambassador           | http://192.168.99.107:31893 |
| default     | zipkin               | http://192.168.99.107:31043 |
|-------------|----------------------|-----------------------------|
```

Open your web browser to the Zipkin dashboard `http://192.168.99.107:31043/zipkin/`.

In the Zipkin UI, click on the "Find Traces" button to get a listing of instrumented traces. Click on any displayed trace to see further information about each span and its associated metadata.

## Learn more

For more details about configuring the external tracing service, read the documentation on [external tracing](../../topics/running/services/tracing-service). diff --git a/docs/emissary/latest/howtos/websockets.md b/docs/emissary/latest/howtos/websockets.md new file mode 100644 index 000000000..25cac7da9 --- /dev/null +++ b/docs/emissary/latest/howtos/websockets.md @@ -0,0 +1,43 @@ +# WebSocket connections +
+$productName$ makes it easy to access your services from outside your +application, and this includes services that use WebSockets. Only a +small amount of additional configuration is required, which is as +simple as telling the Mapping to allow "upgrading" from the HTTP protocol to +the "websocket" protocol: +
+```yaml +allow_upgrade: +- websocket +``` +
+## Example WebSocket service +
+The example configuration below demonstrates the addition of the `allow_upgrade:` attribute to support WebSockets. The use of `use_websocket` is now deprecated. +
+```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: my-service-mapping +spec: + hostname: "*" + prefix: /my-service/ + service: my-service + allow_upgrade: + - websocket + +--- +kind: Service +apiVersion: v1 +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +``` diff --git a/docs/emissary/latest/images/Auth0_JWT.png b/docs/emissary/latest/images/Auth0_JWT.png new file mode 100644 index 000000000..e18155f50 Binary files /dev/null and b/docs/emissary/latest/images/Auth0_JWT.png differ diff --git a/docs/emissary/latest/images/Auth0_domain_clientID.png b/docs/emissary/latest/images/Auth0_domain_clientID.png new file mode 100644 index 000000000..a7f8edf61 Binary files /dev/null and b/docs/emissary/latest/images/Auth0_domain_clientID.png differ diff --git a/docs/emissary/latest/images/Auth0_method_callback_origins.png b/docs/emissary/latest/images/Auth0_method_callback_origins.png new file mode 100644 index 000000000..8d31138e1 Binary files /dev/null and b/docs/emissary/latest/images/Auth0_method_callback_origins.png differ diff --git a/docs/emissary/latest/images/aes-success.png b/docs/emissary/latest/images/aes-success.png new file mode 100644 index 000000000..66f28d3fc Binary files /dev/null and b/docs/emissary/latest/images/aes-success.png differ diff --git a/docs/emissary/latest/images/ambassador-arch.png b/docs/emissary/latest/images/ambassador-arch.png new file mode 100644 index 000000000..5a5cb652f Binary files /dev/null and b/docs/emissary/latest/images/ambassador-arch.png differ diff --git a/docs/emissary/latest/images/ambassador-logo.svg b/docs/emissary/latest/images/ambassador-logo.svg new file mode 100644 index 000000000..1f0e06a80 --- /dev/null +++ 
b/docs/emissary/latest/images/ambassador-logo.svg @@ -0,0 +1,49 @@ + + + + ambassador logo@1x + Created with Sketch. + + + + + + + diff --git a/docs/emissary/latest/images/ambassador_oidc_flow.jpg b/docs/emissary/latest/images/ambassador_oidc_flow.jpg new file mode 100644 index 000000000..4f1c0c7e6 Binary files /dev/null and b/docs/emissary/latest/images/ambassador_oidc_flow.jpg differ diff --git a/docs/emissary/latest/images/apple.png b/docs/emissary/latest/images/apple.png new file mode 100644 index 000000000..8b8277f16 Binary files /dev/null and b/docs/emissary/latest/images/apple.png differ diff --git a/docs/emissary/latest/images/auth-flow.png b/docs/emissary/latest/images/auth-flow.png new file mode 100644 index 000000000..e1ba43879 Binary files /dev/null and b/docs/emissary/latest/images/auth-flow.png differ diff --git a/docs/emissary/latest/images/authentication-icon.svg b/docs/emissary/latest/images/authentication-icon.svg new file mode 100644 index 000000000..342e8a3df --- /dev/null +++ b/docs/emissary/latest/images/authentication-icon.svg @@ -0,0 +1,18 @@ + + + + noun_897228_cc + Created with Sketch. + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/blackbird.png b/docs/emissary/latest/images/blackbird.png new file mode 100644 index 000000000..1f10e5cc9 Binary files /dev/null and b/docs/emissary/latest/images/blackbird.png differ diff --git a/docs/emissary/latest/images/canary-release-overview.png b/docs/emissary/latest/images/canary-release-overview.png new file mode 100644 index 000000000..c683a23dc Binary files /dev/null and b/docs/emissary/latest/images/canary-release-overview.png differ diff --git a/docs/emissary/latest/images/configure-icon.svg b/docs/emissary/latest/images/configure-icon.svg new file mode 100644 index 000000000..0f5568406 --- /dev/null +++ b/docs/emissary/latest/images/configure-icon.svg @@ -0,0 +1,14 @@ + + + + noun_858572_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/consul-ambassador.png b/docs/emissary/latest/images/consul-ambassador.png new file mode 100644 index 000000000..c4911624d Binary files /dev/null and b/docs/emissary/latest/images/consul-ambassador.png differ diff --git a/docs/emissary/latest/images/container-inner-dev-loop.png b/docs/emissary/latest/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/emissary/latest/images/container-inner-dev-loop.png differ diff --git a/docs/emissary/latest/images/datawire-logo.svg b/docs/emissary/latest/images/datawire-logo.svg new file mode 100644 index 000000000..fb45872cc --- /dev/null +++ b/docs/emissary/latest/images/datawire-logo.svg @@ -0,0 +1,27 @@ + + + + Group 3 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/diagnostics-example.png b/docs/emissary/latest/images/diagnostics-example.png new file mode 100644 index 000000000..6c825b0cd Binary files /dev/null and b/docs/emissary/latest/images/diagnostics-example.png differ diff --git a/docs/emissary/latest/images/diagnostics.png b/docs/emissary/latest/images/diagnostics.png new file mode 100644 index 000000000..292487039 Binary files /dev/null and b/docs/emissary/latest/images/diagnostics.png differ diff --git a/docs/emissary/latest/images/docker-compose.png b/docs/emissary/latest/images/docker-compose.png new file mode 100644 index 000000000..b8829521b Binary files /dev/null and b/docs/emissary/latest/images/docker-compose.png differ diff --git a/docs/emissary/latest/images/docker.png b/docs/emissary/latest/images/docker.png new file mode 100644 index 000000000..1f35e5ea4 Binary files /dev/null and b/docs/emissary/latest/images/docker.png differ diff --git a/docs/emissary/latest/images/edge-stack-1.13.4.png b/docs/emissary/latest/images/edge-stack-1.13.4.png new file mode 100644 index 000000000..954ac1a9c Binary files /dev/null and b/docs/emissary/latest/images/edge-stack-1.13.4.png differ diff --git a/docs/emissary/latest/images/edge-stack-1.13.7-json-logging.png b/docs/emissary/latest/images/edge-stack-1.13.7-json-logging.png new file mode 100644 index 000000000..4a47cbdfc Binary files /dev/null and b/docs/emissary/latest/images/edge-stack-1.13.7-json-logging.png differ diff --git a/docs/emissary/latest/images/edge-stack-1.13.7-memory.png b/docs/emissary/latest/images/edge-stack-1.13.7-memory.png new file mode 100644 index 000000000..9c415ba36 Binary files /dev/null and b/docs/emissary/latest/images/edge-stack-1.13.7-memory.png differ diff --git a/docs/emissary/latest/images/edge-stack-1.13.7-tcpmapping-consul.png b/docs/emissary/latest/images/edge-stack-1.13.7-tcpmapping-consul.png new file mode 100644 index 000000000..c455a47f1 Binary files /dev/null and b/docs/emissary/latest/images/edge-stack-1.13.7-tcpmapping-consul.png differ diff --git a/docs/emissary/latest/images/edge-stack-1.13.8-cloud-bugfix.png b/docs/emissary/latest/images/edge-stack-1.13.8-cloud-bugfix.png new file mode 100644 index 000000000..6beaf653b Binary files /dev/null and b/docs/emissary/latest/images/edge-stack-1.13.8-cloud-bugfix.png differ diff --git a/docs/emissary/latest/images/emissary-1.13.10-cors-origin.png b/docs/emissary/latest/images/emissary-1.13.10-cors-origin.png new file mode 100644 index 000000000..b7538e5f4 Binary files /dev/null and b/docs/emissary/latest/images/emissary-1.13.10-cors-origin.png differ diff --git a/docs/emissary/latest/images/fast-icon.svg b/docs/emissary/latest/images/fast-icon.svg new file mode 100644 index 000000000..354542eed --- /dev/null +++ b/docs/emissary/latest/images/fast-icon.svg @@ -0,0 +1,14 @@ + + + + noun_1187990_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/basic-authentication.svg b/docs/emissary/latest/images/features-icons/basic-authentication.svg new file mode 100644 index 000000000..2bd19edf5 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/basic-authentication.svg @@ -0,0 +1,20 @@ + + + + noun_897228_cc + Created with Sketch. 
+ + + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/canary-release.svg b/docs/emissary/latest/images/features-icons/canary-release.svg new file mode 100644 index 000000000..f8de57d9d --- /dev/null +++ b/docs/emissary/latest/images/features-icons/canary-release.svg @@ -0,0 +1,27 @@ + + + + Group 25 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/cors.svg b/docs/emissary/latest/images/features-icons/cors.svg new file mode 100644 index 000000000..e559d9242 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/cors.svg @@ -0,0 +1,14 @@ + + + + noun_111967_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/datadog.png b/docs/emissary/latest/images/features-icons/datadog.png new file mode 100644 index 000000000..eea05f8ca Binary files /dev/null and b/docs/emissary/latest/images/features-icons/datadog.png differ diff --git a/docs/emissary/latest/images/features-icons/datadog.svg b/docs/emissary/latest/images/features-icons/datadog.svg new file mode 100644 index 000000000..e46e8118c --- /dev/null +++ b/docs/emissary/latest/images/features-icons/datadog.svg @@ -0,0 +1,12 @@ + + + + Screen Shot 2018-04-05 at 8.22.25 AM + Created with Sketch. + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/diagnostics.svg b/docs/emissary/latest/images/features-icons/diagnostics.svg new file mode 100644 index 000000000..940e1bc2f --- /dev/null +++ b/docs/emissary/latest/images/features-icons/diagnostics.svg @@ -0,0 +1,14 @@ + + + + noun_196445_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/distributed-tracing.png b/docs/emissary/latest/images/features-icons/distributed-tracing.png new file mode 100644 index 000000000..6b69e28ca Binary files /dev/null and b/docs/emissary/latest/images/features-icons/distributed-tracing.png differ diff --git a/docs/emissary/latest/images/features-icons/grpc.png b/docs/emissary/latest/images/features-icons/grpc.png new file mode 100644 index 000000000..b2f5a0d91 Binary files /dev/null and b/docs/emissary/latest/images/features-icons/grpc.png differ diff --git a/docs/emissary/latest/images/features-icons/prometheus.svg b/docs/emissary/latest/images/features-icons/prometheus.svg new file mode 100644 index 000000000..d5252a666 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/prometheus.svg @@ -0,0 +1,14 @@ + + + + prometheus_logo_grey + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/rate-limiting.svg b/docs/emissary/latest/images/features-icons/rate-limiting.svg new file mode 100644 index 000000000..f1b6eacb5 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/rate-limiting.svg @@ -0,0 +1,16 @@ + + + + Group 10 + Created with Sketch. + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/regex-routing.svg b/docs/emissary/latest/images/features-icons/regex-routing.svg new file mode 100644 index 000000000..113b53b5b --- /dev/null +++ b/docs/emissary/latest/images/features-icons/regex-routing.svg @@ -0,0 +1,20 @@ + + + + noun_699774_cc + Created with Sketch. 
+ + + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/request-transformers.svg b/docs/emissary/latest/images/features-icons/request-transformers.svg new file mode 100644 index 000000000..0b13e2dc8 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/request-transformers.svg @@ -0,0 +1,18 @@ + + + + noun_96239_cc + Created with Sketch. + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/shadowing.svg b/docs/emissary/latest/images/features-icons/shadowing.svg new file mode 100644 index 000000000..9e85eee1d --- /dev/null +++ b/docs/emissary/latest/images/features-icons/shadowing.svg @@ -0,0 +1,15 @@ + + + + shadow + Created with Sketch. + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/statsd.png b/docs/emissary/latest/images/features-icons/statsd.png new file mode 100644 index 000000000..283744384 Binary files /dev/null and b/docs/emissary/latest/images/features-icons/statsd.png differ diff --git a/docs/emissary/latest/images/features-icons/statsd.svg b/docs/emissary/latest/images/features-icons/statsd.svg new file mode 100644 index 000000000..cabc90db1 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/statsd.svg @@ -0,0 +1,20 @@ + + + + 88eb31f74479e422e4e9abfc6c2b00ee + Created with Sketch. + + + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/third-party-auth.svg b/docs/emissary/latest/images/features-icons/third-party-auth.svg new file mode 100644 index 000000000..5359a24a6 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/third-party-auth.svg @@ -0,0 +1,14 @@ + + + + noun_511233_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/timeouts.svg b/docs/emissary/latest/images/features-icons/timeouts.svg new file mode 100644 index 000000000..47f630567 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/timeouts.svg @@ -0,0 +1,18 @@ + + + + noun_587034_cc + Created with Sketch. + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/tls-termination.svg b/docs/emissary/latest/images/features-icons/tls-termination.svg new file mode 100644 index 000000000..6a631a96e --- /dev/null +++ b/docs/emissary/latest/images/features-icons/tls-termination.svg @@ -0,0 +1,17 @@ + + + + noun_63544_cc + Created with Sketch. + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/url-rewrite.svg b/docs/emissary/latest/images/features-icons/url-rewrite.svg new file mode 100644 index 000000000..023e2e05f --- /dev/null +++ b/docs/emissary/latest/images/features-icons/url-rewrite.svg @@ -0,0 +1,14 @@ + + + + noun_1295942_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/features-icons/websockets.svg b/docs/emissary/latest/images/features-icons/websockets.svg new file mode 100644 index 000000000..af17b9c05 --- /dev/null +++ b/docs/emissary/latest/images/features-icons/websockets.svg @@ -0,0 +1,16 @@ + + + + noun_50814_cc + Created with Sketch. 
+ + + + + + + + + + + diff --git a/docs/emissary/latest/images/features-table.jpg b/docs/emissary/latest/images/features-table.jpg new file mode 100644 index 000000000..3de2eb4f0 Binary files /dev/null and b/docs/emissary/latest/images/features-table.jpg differ diff --git a/docs/emissary/latest/images/gRPC-TLS-Ambassador.png b/docs/emissary/latest/images/gRPC-TLS-Ambassador.png new file mode 100644 index 000000000..0189253e0 Binary files /dev/null and b/docs/emissary/latest/images/gRPC-TLS-Ambassador.png differ diff --git a/docs/emissary/latest/images/gRPC-TLS-Originate.png b/docs/emissary/latest/images/gRPC-TLS-Originate.png new file mode 100644 index 000000000..1b62010dd Binary files /dev/null and b/docs/emissary/latest/images/gRPC-TLS-Originate.png differ diff --git a/docs/emissary/latest/images/github-login.png b/docs/emissary/latest/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/emissary/latest/images/github-login.png differ diff --git a/docs/emissary/latest/images/global-features-bg.svg b/docs/emissary/latest/images/global-features-bg.svg new file mode 100644 index 000000000..a39c5232d --- /dev/null +++ b/docs/emissary/latest/images/global-features-bg.svg @@ -0,0 +1,34 @@ + + + + ambassador_logo@2x + Created with Sketch. + + + + + + + + diff --git a/docs/emissary/latest/images/grafana.png b/docs/emissary/latest/images/grafana.png new file mode 100644 index 000000000..03912506d Binary files /dev/null and b/docs/emissary/latest/images/grafana.png differ diff --git a/docs/emissary/latest/images/grpc-tls.png b/docs/emissary/latest/images/grpc-tls.png new file mode 100644 index 000000000..4d705ff0c Binary files /dev/null and b/docs/emissary/latest/images/grpc-tls.png differ diff --git a/docs/emissary/latest/images/helm-navy.png b/docs/emissary/latest/images/helm-navy.png new file mode 100644 index 000000000..a97101435 Binary files /dev/null and b/docs/emissary/latest/images/helm-navy.png differ diff --git a/docs/emissary/latest/images/helm.png b/docs/emissary/latest/images/helm.png new file mode 100644 index 000000000..1c5af71b8 Binary files /dev/null and b/docs/emissary/latest/images/helm.png differ diff --git a/docs/emissary/latest/images/highly-available-icon.svg b/docs/emissary/latest/images/highly-available-icon.svg new file mode 100644 index 000000000..9cb3eff99 --- /dev/null +++ b/docs/emissary/latest/images/highly-available-icon.svg @@ -0,0 +1,14 @@ + + + + noun_1205522_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/jaeger.png b/docs/emissary/latest/images/jaeger.png new file mode 100644 index 000000000..3b821c09e Binary files /dev/null and b/docs/emissary/latest/images/jaeger.png differ diff --git a/docs/emissary/latest/images/kubernetes.png b/docs/emissary/latest/images/kubernetes.png new file mode 100644 index 000000000..a392a886b Binary files /dev/null and b/docs/emissary/latest/images/kubernetes.png differ diff --git a/docs/emissary/latest/images/left-arrow.svg b/docs/emissary/latest/images/left-arrow.svg new file mode 100644 index 000000000..75cdc7f17 --- /dev/null +++ b/docs/emissary/latest/images/left-arrow.svg @@ -0,0 +1,12 @@ + + + + Path 2 + Created with Sketch. 
+ + + + + + + diff --git a/docs/emissary/latest/images/linux.png b/docs/emissary/latest/images/linux.png new file mode 100644 index 000000000..1832c5940 Binary files /dev/null and b/docs/emissary/latest/images/linux.png differ diff --git a/docs/emissary/latest/images/logo.png b/docs/emissary/latest/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/emissary/latest/images/logo.png differ diff --git a/docs/emissary/latest/images/mapping-editor.png b/docs/emissary/latest/images/mapping-editor.png new file mode 100644 index 000000000..f8b751a19 Binary files /dev/null and b/docs/emissary/latest/images/mapping-editor.png differ diff --git a/docs/emissary/latest/images/network-architecture.png b/docs/emissary/latest/images/network-architecture.png new file mode 100644 index 000000000..3217b3e16 Binary files /dev/null and b/docs/emissary/latest/images/network-architecture.png differ diff --git a/docs/emissary/latest/images/penguin-background.svg b/docs/emissary/latest/images/penguin-background.svg new file mode 100644 index 000000000..7affc0d5d --- /dev/null +++ b/docs/emissary/latest/images/penguin-background.svg @@ -0,0 +1,102 @@ + + + + @2xambassador_logo + Created with Sketch. + + + + + + + + + + + + + diff --git a/docs/emissary/latest/images/pro-iap.png b/docs/emissary/latest/images/pro-iap.png new file mode 100644 index 000000000..787265d8b Binary files /dev/null and b/docs/emissary/latest/images/pro-iap.png differ diff --git a/docs/emissary/latest/images/quote.svg b/docs/emissary/latest/images/quote.svg new file mode 100644 index 000000000..bab6e8710 --- /dev/null +++ b/docs/emissary/latest/images/quote.svg @@ -0,0 +1,16 @@ + + + + + Created with Sketch. + + + + + + + + + + + diff --git a/docs/emissary/latest/images/right-arrow.svg b/docs/emissary/latest/images/right-arrow.svg new file mode 100644 index 000000000..627144c60 --- /dev/null +++ b/docs/emissary/latest/images/right-arrow.svg @@ -0,0 +1,12 @@ + + + + Path 2 Copy + Created with Sketch. + + + + + + + diff --git a/docs/emissary/latest/images/routing-icon.svg b/docs/emissary/latest/images/routing-icon.svg new file mode 100644 index 000000000..bd860b138 --- /dev/null +++ b/docs/emissary/latest/images/routing-icon.svg @@ -0,0 +1,14 @@ + + + + noun_1062254_cc + Created with Sketch. + + + + + + + + + diff --git a/docs/emissary/latest/images/self-service-features-bg.svg b/docs/emissary/latest/images/self-service-features-bg.svg new file mode 100644 index 000000000..40bc400ae --- /dev/null +++ b/docs/emissary/latest/images/self-service-features-bg.svg @@ -0,0 +1,93 @@ + + + + Group 8 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + SELF-SERVICE FEATURES + + + + diff --git a/docs/emissary/latest/images/shadowing.png b/docs/emissary/latest/images/shadowing.png new file mode 100644 index 000000000..097ecbd5e Binary files /dev/null and b/docs/emissary/latest/images/shadowing.png differ diff --git a/docs/emissary/latest/images/speedometers.png b/docs/emissary/latest/images/speedometers.png new file mode 100644 index 000000000..7ce2c2a22 Binary files /dev/null and b/docs/emissary/latest/images/speedometers.png differ diff --git a/docs/emissary/latest/images/tp-architecture.png b/docs/emissary/latest/images/tp-architecture.png new file mode 100644 index 000000000..20ae35895 Binary files /dev/null and b/docs/emissary/latest/images/tp-architecture.png differ diff --git a/docs/emissary/latest/images/tp-tutorial-1.png b/docs/emissary/latest/images/tp-tutorial-1.png new file mode 100644 index 000000000..ee68dc7db Binary files /dev/null and b/docs/emissary/latest/images/tp-tutorial-1.png differ diff --git a/docs/emissary/latest/images/tp-tutorial-2.png b/docs/emissary/latest/images/tp-tutorial-2.png new file mode 100644 index 000000000..129dc6ee3 Binary files /dev/null and b/docs/emissary/latest/images/tp-tutorial-2.png differ diff --git a/docs/emissary/latest/images/tp-tutorial-3.png b/docs/emissary/latest/images/tp-tutorial-3.png new file mode 100644 index 000000000..946629fc3 Binary files /dev/null and b/docs/emissary/latest/images/tp-tutorial-3.png differ diff --git a/docs/emissary/latest/images/tp-tutorial-4.png b/docs/emissary/latest/images/tp-tutorial-4.png new file mode 100644 index 000000000..cb6e7a9d2 Binary files /dev/null and b/docs/emissary/latest/images/tp-tutorial-4.png differ diff --git a/docs/emissary/latest/images/trad-inner-dev-loop.png b/docs/emissary/latest/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/emissary/latest/images/trad-inner-dev-loop.png differ diff --git a/docs/emissary/latest/images/windows.png b/docs/emissary/latest/images/windows.png new file mode 100644 index 000000000..a065dc17d Binary files /dev/null and b/docs/emissary/latest/images/windows.png differ diff --git a/docs/emissary/latest/images/xkcd.png b/docs/emissary/latest/images/xkcd.png new file mode 100644 index 000000000..ed0d5c33b Binary files /dev/null and b/docs/emissary/latest/images/xkcd.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-1.13.4.png b/docs/emissary/latest/release-notes/edge-stack-1.13.4.png new file mode 100644 index 000000000..954ac1a9c Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-1.13.4.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-1.13.7-json-logging.png b/docs/emissary/latest/release-notes/edge-stack-1.13.7-json-logging.png new file mode 100644 index 000000000..4a47cbdfc Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-1.13.7-json-logging.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-1.13.7-memory.png b/docs/emissary/latest/release-notes/edge-stack-1.13.7-memory.png new file mode 100644 index 000000000..9c415ba36 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-1.13.7-memory.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png 
b/docs/emissary/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png new file mode 100644 index 000000000..c455a47f1 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-1.13.7-tcpmapping-consul.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png b/docs/emissary/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png new file mode 100644 index 000000000..6beaf653b Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-1.13.8-cloud-bugfix.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-host_crd.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-host_crd.png new file mode 100644 index 000000000..c77ef5287 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-host_crd.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-ingressstatus.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-ingressstatus.png new file mode 100644 index 000000000..6856d308d Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-ingressstatus.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png new file mode 100644 index 000000000..79c20bad1 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-insecure_action_hosts.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-listener.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-listener.png new file mode 100644 index 000000000..ea45a02ba Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-listener.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-prune_routes.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-prune_routes.png new file mode 100644 index 000000000..bc43229fc Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-prune_routes.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-tlscontext.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-tlscontext.png new file mode 100644 index 000000000..68dbad807 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-tlscontext.png differ diff --git a/docs/emissary/latest/release-notes/edge-stack-2.0.0-v3alpha1.png b/docs/emissary/latest/release-notes/edge-stack-2.0.0-v3alpha1.png new file mode 100644 index 000000000..c0ac35962 Binary files /dev/null and b/docs/emissary/latest/release-notes/edge-stack-2.0.0-v3alpha1.png differ diff --git a/docs/emissary/latest/release-notes/emissary-1.13.10-cors-origin.png b/docs/emissary/latest/release-notes/emissary-1.13.10-cors-origin.png new file mode 100644 index 000000000..b7538e5f4 Binary files /dev/null and b/docs/emissary/latest/release-notes/emissary-1.13.10-cors-origin.png differ diff --git a/docs/emissary/latest/release-notes/emissary-ga.png b/docs/emissary/latest/release-notes/emissary-ga.png new file mode 100644 index 000000000..062f043a7 Binary files /dev/null and b/docs/emissary/latest/release-notes/emissary-ga.png differ diff --git a/docs/emissary/latest/release-notes/tada.png b/docs/emissary/latest/release-notes/tada.png new file mode 100644 index 000000000..c8832e8e3 Binary files /dev/null and b/docs/emissary/latest/release-notes/tada.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.4-k8s-1.22.png 
b/docs/emissary/latest/release-notes/v2.0.4-k8s-1.22.png new file mode 100644 index 000000000..ed9b04158 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.4-k8s-1.22.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.4-l7depth.png b/docs/emissary/latest/release-notes/v2.0.4-l7depth.png new file mode 100644 index 000000000..9314324cb Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.4-l7depth.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.4-mapping-dns-type.png b/docs/emissary/latest/release-notes/v2.0.4-mapping-dns-type.png new file mode 100644 index 000000000..7770c77d2 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.4-mapping-dns-type.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.4-v3alpha1.png b/docs/emissary/latest/release-notes/v2.0.4-v3alpha1.png new file mode 100644 index 000000000..9c50b8fb8 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.4-v3alpha1.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.4-version.png b/docs/emissary/latest/release-notes/v2.0.4-version.png new file mode 100644 index 000000000..9481b7dbd Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.4-version.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.5-auth-circuit-breaker.png b/docs/emissary/latest/release-notes/v2.0.5-auth-circuit-breaker.png new file mode 100644 index 000000000..cac8cf7b2 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.5-auth-circuit-breaker.png differ diff --git a/docs/emissary/latest/release-notes/v2.0.5-mappingselector.png b/docs/emissary/latest/release-notes/v2.0.5-mappingselector.png new file mode 100644 index 000000000..31942ede6 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.0.5-mappingselector.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.0-canary.png b/docs/emissary/latest/release-notes/v2.1.0-canary.png new file mode 100644 index 000000000..39d3bbbfb Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.0-canary.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.0-gzip-enabled.png b/docs/emissary/latest/release-notes/v2.1.0-gzip-enabled.png new file mode 100644 index 000000000..061fcbc97 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.0-gzip-enabled.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.0-smoother-migration.png b/docs/emissary/latest/release-notes/v2.1.0-smoother-migration.png new file mode 100644 index 000000000..ebd77497d Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.0-smoother-migration.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.2-annotations.png b/docs/emissary/latest/release-notes/v2.1.2-annotations.png new file mode 100644 index 000000000..b5498c3c1 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.2-annotations.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.2-host-mapping-matching.png b/docs/emissary/latest/release-notes/v2.1.2-host-mapping-matching.png new file mode 100644 index 000000000..1cfba5ede Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.2-host-mapping-matching.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.2-mapping-cors.png b/docs/emissary/latest/release-notes/v2.1.2-mapping-cors.png new file mode 100644 index 000000000..f76ea01ca Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.2-mapping-cors.png differ diff --git 
a/docs/emissary/latest/release-notes/v2.1.2-mapping-less-weighted.png b/docs/emissary/latest/release-notes/v2.1.2-mapping-less-weighted.png new file mode 100644 index 000000000..7e299062e Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.2-mapping-less-weighted.png differ diff --git a/docs/emissary/latest/release-notes/v2.1.2-mapping-no-rewrite.png b/docs/emissary/latest/release-notes/v2.1.2-mapping-no-rewrite.png new file mode 100644 index 000000000..5d3d5a29f Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.1.2-mapping-no-rewrite.png differ diff --git a/docs/emissary/latest/release-notes/v2.2.0-cloud.png b/docs/emissary/latest/release-notes/v2.2.0-cloud.png new file mode 100644 index 000000000..5923fcb44 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.2.0-cloud.png differ diff --git a/docs/emissary/latest/release-notes/v2.2.0-percent-escape.png b/docs/emissary/latest/release-notes/v2.2.0-percent-escape.png new file mode 100644 index 000000000..df4d81b94 Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.2.0-percent-escape.png differ diff --git a/docs/emissary/latest/release-notes/v2.2.0-tls-cert-validation.png b/docs/emissary/latest/release-notes/v2.2.0-tls-cert-validation.png new file mode 100644 index 000000000..f8635b5af Binary files /dev/null and b/docs/emissary/latest/release-notes/v2.2.0-tls-cert-validation.png differ diff --git a/docs/emissary/latest/releaseNotes.yml b/docs/emissary/latest/releaseNotes.yml new file mode 100644 index 000000000..ae2e5de86 --- /dev/null +++ b/docs/emissary/latest/releaseNotes.yml @@ -0,0 +1,1643 @@ +# -*- fill-column: 100 -*- + +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +changelog: https://github.com/emissary-ingress/emissary/blob/$branch$/CHANGELOG.md +items: + - version: 3.8.1 + date: '2023-09-18' + notes: + - title: Upgrade Golang to 1.20.8 + type: security + body: >- + Upgrading to the latest release of Golang as part of our general dependency upgrade process. 
This includes security fixes for CVE-2023-39318, CVE-2023-39319. + docs: https://go.dev/doc/devel/release#go1.20.minor + + - version: 3.8.0 + date: '2023-08-29' + notes: + - title: Account for matchLabels when associating mappings with the same prefix with different Hosts + type: bugfix + body: >- + As of v2.2.2, if two mappings were associated with different Hosts through host + mappingSelector labels but shared the same prefix, the labels were not taken into + account, which would cause one Mapping to be routed correctly but not the other. + + This change fixes the issue so that Mappings sharing the same prefix but associated + with different Hosts will be correctly routed. + docs: https://github.com/emissary-ingress/emissary/issues/4170 + - title: Duplication of values when using multiple Headers/QueryParameters in Mappings + type: bugfix + body: >- + In previous versions, if multiple Headers/QueryParameters were used in a v3alpha1 mapping, + these values would be duplicated, causing all the Headers/QueryParameters to have the same value. + This is no longer the case and the expected values for unique Headers/QueryParameters will apply. + + This issue was only present in v3alpha1 Mappings. For users who may have this issue, please + be sure to re-apply any v3alpha1 Mappings in order to update the stored v2 Mapping and resolve the + issue. + docs: topics/using/headers/headers + - title: Ambassador Agent no longer collects Envoy metrics + type: change + body: >- + When the Ambassador agent is being used, it will no longer attempt to collect and report Envoy metrics. + In previous versions, $productName$ would always create an Envoy stats sink for the agent as long as the AMBASSADOR_GRPC_METRICS_SINK + environment variable was provided. This environment variable was hardcoded in the release manifests and has now been removed, + and an Envoy stats sink for the agent is no longer created. + docs: topics/running/environment#ambassador_grpc_metrics_sink + - version: 3.7.2 + date: '2023-07-25' + notes: + - title: Upgrade to Envoy 1.26.4 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.26.4 which includes fixes for CVE-2023-35942, CVE-2023-35943, CVE-2023-35944. + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - title: Shipped Helm chart v8.7.2 + type: change + body: >- + - Update default image to $productName$ v3.7.2.
+ docs: https://github.com/emissary-ingress/emissary/blob/master/charts/emissary-ingress/CHANGELOG.md + + - version: 3.7.1 + date: '2023-07-13' + notes: + - title: Upgrade to Envoy 1.26.3 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.26.3 which includes a fix for CVE-2023-35945. + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - version: 3.7.0 + date: '2023-06-20' + notes: + - title: Upgrade to Envoy 1.26.1 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.26.1 which provides security, performance and feature enhancements. You can read more about them here: Envoy Proxy 1.26.1 Release Notes + docs: https://www.envoyproxy.io/docs/envoy/v1.26.1/version_history/v1.26/v1.26 + + - version: 3.6.0 + date: '2023-04-17' + notes: + - title: Upgrade to Envoy 1.25.4 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.25.4 which provides security, performance and feature enhancements. You can read more about them here: Envoy Proxy 1.25.4 Release Notes + docs: https://www.envoyproxy.io/docs/envoy/v1.25.4/version_history/v1.25/v1.25 + + - title: Shipped Helm chart v8.6.0 + type: change + body: >- + - Update default image to $productName$ v3.6.0.
+ + - Add support for setting nodeSelector, tolerations and affinity on the Ambassador Agent. Thanks to Philip Panyukov.
+ + - Use autoscaling API version based on Kubernetes version. Thanks to Elvind Valderhaug.
+ + - Upgrade KubernetesEndpointResolver & ConsulResolver apiVersions to getambassador.io/v3alpha1 + + docs: https://github.com/emissary-ingress/emissary/blob/master/charts/emissary-ingress/CHANGELOG.md + + - version: 3.5.2 + date: "2023-04-05" + notes: + - title: Upgrade to Envoy 1.24.5 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.24.5. This update includes various security patches including CVE-2023-27487, CVE-2023-27491, CVE-2023-27492, CVE-2023-27493, CVE-2023-27488, and CVE-2023-27496. It also contains the dependency update for c-ares which was patched on top. + - title: Upgrade to Golang 1.20.3 + type: security + body: >- + Upgrading to the latest release of Golang as part of our general dependency upgrade process. This includes security fixes for CVE-2023-24537, CVE-2023-24538, CVE-2023-24534, CVE-2023-24536. + + - version: 3.5.1 + date: '2023-02-24' + notes: + - title: Shipped Helm chart v8.5.1 + type: bugfix + body: >- + Fix regression where the Module resource fails validation when setting the ambassador_id after upgrading to getambassador.io/v3alpha1.
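For illustration, a minimal sketch of the kind of Module this fix concerns, with ambassador_id set under getambassador.io/v3alpha1; the ID value here is hypothetical, and note that v3alpha1 requires a list of strings:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  ambassador_id: ["my-gateway"]  # hypothetical ID; must be a list of strings in v3alpha1
  config: {}
```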

+ + Thanks to Pier. + + docs: https://github.com/emissary-ingress/emissary/blob/master/charts/emissary-ingress/CHANGELOG.md + + - version: 3.5.0 + date: '2023-02-15' + notes: + - title: Upgraded to golang 1.20.1 + type: security + body: >- + Upgraded to the latest release of Golang as part of our general dependency upgrade process. This includes + security fixes for CVE-2022-41725, CVE-2022-41723. + - title: TracingService support for native OpenTelemetry driver + type: feature + body: >- + In Envoy 1.24, experimental support for a native OpenTelemetry tracing driver was introduced that allows exporting spans in the otlp format. Many observability platforms accept that format, and it is the recommended replacement for the LightStep driver. $productName$ now supports setting TracingService.spec.driver=opentelemetry to export traces in the otlp format.
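As a sketch only, selecting the new driver might look like the following; the collector Service address is an assumption, using the standard OTLP gRPC port 4317:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: TracingService
metadata:
  name: tracing
  namespace: ambassador
spec:
  service: otel-collector.monitoring:4317  # hypothetical collector Service; 4317 is the standard OTLP gRPC port
  driver: opentelemetry
```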

+ + Thanks to Paul for helping us get this tested and over the finish line! + + - title: Switch to a non-blocking readiness check + type: feature + body: >- + The /ready endpoint used by $productName$ was using the Envoy admin port (8001 by default). This created a problem during config reloads with large configs: the admin thread is blocking, so the /ready endpoint could be very slow to answer (on the order of several seconds, or even more).

+ + $productName$ now uses a dedicated Envoy listener that can answer /ready calls from an Envoy worker thread, so the endpoint is always fast and does not suffer from the single-threaded admin thread's slowness during config reloads and other slow endpoints handled by the admin thread.
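A sketch of how the dedicated readiness listener might be wired up on the $productName$ container using the environment variables described next; the port value is hypothetical:

```yaml
# Excerpt from the emissary-ingress Deployment's container spec (sketch)
env:
- name: AMBASSADOR_READY_PORT
  value: "8006"   # hypothetical port for the dedicated /ready listener
- name: AMBASSADOR_READY_LOG
  value: "true"   # enable access logging for /ready requests
```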

+ + Configure the listener port using the AMBASSADOR_READY_PORT environment variable and enable access logging using the AMBASSADOR_READY_LOG environment variable. + + docs: https://www.getambassador.io/docs/emissary/latest/topics/running/environment/ + + - title: Fix envoy config generated when including port in Host.hostname + type: bugfix + body: >- + To expose traffic to clients on ports other than 80/443, users set a port in the Host.hostname (e.g. Host.hostname=example.com:8500). The generated config allowed matching on the :authority header. This worked in the v1.Y series due to the way $productName$ generated Envoy configuration under a single wild-card virtual_host and matched on :authority.
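For reference, a sketch of the kind of Host this note concerns; the hostname and secret name are hypothetical:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: example.com:8500  # port included so clients connect on 8500 instead of 80/443
  tlsSecret:
    name: example-cert        # hypothetical TLS secret
```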

+ + In v2.Y/v3.Y+, the way $productName$ generates Envoy configuration changed to address memory pressure and improve route lookup speed in Envoy. However, when including a port in the hostname, an incorrect configuration was generated with an SNI match that included the port. This caused incoming requests to never match, resulting in a 404 Not Found. This has been fixed, and the correct Envoy configuration is + now generated, restoring the existing behavior. + + - title: Add support for resolving port names in Ingress resource + type: change + body: >- + Previously, specifying backend ports by name in Ingress was not supported and would result in defaulting to port 80. $productName$ can now resolve port names for backend services. If the port number cannot be resolved from the name (e.g. the named port in the Service doesn't exist) then it will continue to default back + to port 80.
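A minimal sketch of an Ingress that references a backend port by name; all names here are hypothetical:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: ambassador
spec:
  rules:
  - http:
      paths:
      - path: /backend/
        pathType: Prefix
        backend:
          service:
            name: quote     # hypothetical backend Service
            port:
              name: http    # resolved against the Service's named ports; falls back to 80 if unresolvable
```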

+ + Thanks to Anton Ustyuzhanin! + github: + - title: "#4809" + link: https://github.com/emissary-ingress/emissary/pull/4809 + + - title: Add startupProbe to emissary-apiext server + type: change + body: >- + The emissary-apiext server is a Kubernetes Conversion Webhook that converts between the $productName$ CRD versions. On startup, it ensures that a self-signed cert is available so that the K8s API Server can talk to the conversion webhook (*TLS is required by K8s*). We have introduced a startupProbe to ensure that the emissary-apiext server has enough time to configure the webhooks before running liveness and readiness probes. This ensures + a slow first-time startup doesn't cause K8s to needlessly restart the pod. + + - title: Upgraded to Python 3.10 + type: change + body: >- + Upgraded to Python 3.10 as part of continued investment in keeping dependencies updated. + + - title: Upgraded base image to alpine-3.17 + type: change + body: >- + Upgraded base image to alpine-3.17 as part of continued investment in keeping dependencies updated. + + - title: Shipped Helm chart v8.5.0 + type: change + body: >- + Here is a list of changes to the Helm chart:

+ + - Update default image to $productName$ v3.5.0.
+ - Add support for configuring startupProbes on the emissary-ingress deployment.
+ - Allow setting pod and container security settings on the Ambassador Agent.
+ - Added a deprecation notice in the values.yaml file for the podSecurityPolicy value, because support has been removed in Kubernetes 1.25. + + docs: https://github.com/emissary-ingress/emissary/blob/master/charts/emissary-ingress/CHANGELOG.md + + - version: 3.4.1 + date: '2023-02-07' + notes: + - title: Upgrade to Envoy 1.24.2 + type: security + body: >- + This upgrades $productName$ to be built on Envoy v1.24.2. This update addresses the following notable items:

+ + - Updates boringssl to address High CVE-2023-0286
+ - Updates the c-ares dependency to address an issue with CNAME wildcard DNS resolution for upstream clusters

+ + Users that are using $productName$ with Certificate Revocation Lists and allow external users to provide input should upgrade to ensure they are not vulnerable to CVE-2023-0286. + + - version: 3.4.0 + date: '2023-01-03' + notes: + - title: Upgrade to Envoy 1.24.1 + type: feature + body: >- + This upgrades $productName$ to be built on Envoy v1.24.1. Two notable changes were introduced:

+ + First, the team at LightStep and the Envoy Maintainers have decided to no longer support the native LightStep tracing driver in favor of using the Open Telemetry driver. The code for the native Envoy LightStep driver has been removed from the Envoy code base. This means $productName$ will no longer support LightStep in the TracingService. The recommended upgrade path is to leverage a supported tracing driver such as Zipkin and use the Open Telemetry Collector to collect and forward observability data to LightStep. A guide for this can be found here: Distributed Tracing with Open Telemetry and LightStep.

+ + Second, a bug was fixed in Envoy 1.24 that changes how the upstream cluster's distributed tracing span is named. Prior to Envoy 1.24, the span name was always set to the cluster.name. The expected behavior is that Envoy uses the alt_stat_name if provided, and falls back to cluster.name otherwise. + + docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.24/v1.24 + + - title: Re-add support for getambassador.io/v1 + type: feature + body: >- + Support for the getambassador.io/v1 apiVersion has been re-introduced, in order to facilitate smoother migrations from $productName$ 1.y. Previously, in order to make migrations possible, an "unserved" v1 version was declared to Kubernetes, but was unsupported by $productName$. That unserved v1 could + cause an excess of errors to be logged by the Kubernetes Nodes (regardless of whether the installation was migrated from 1.y or was a fresh 2.y install); fully supporting v1 again should resolve these errors. + docs: https://github.com/emissary-ingress/emissary/pull/4055 + + - title: Add support for active health checking configuration. + type: feature + body: >- + It is now possible to configure active health checking for upstreams within a Mapping. If the upstream fails its configured health check then Envoy will mark the upstream as unhealthy and no longer send traffic to that upstream. Single pods within a group can be marked as unhealthy. The healthy pods will continue to receive + traffic normally while the unhealthy pods will not receive any traffic until they recover by passing the health check. + docs: howtos/active-health-checking/ + + - title: Add environment variables to the healthcheck server. + type: feature + body: >- + The healthcheck server's bind address, bind port and IP family can now be configured using environment variables:

+ + AMBASSADOR_HEALTHCHECK_BIND_ADDRESS: The address to bind the healthcheck server to.

+ + AMBASSADOR_HEALTHCHECK_BIND_PORT: The port to bind the healthcheck server to.

+ + AMBASSADOR_HEALTHCHECK_IP_FAMILY: The IP family to use for the healthcheck server.
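As a sketch, these might be set on the $productName$ container for an IPv6-only cluster; the port and the exact accepted IP-family value are assumptions here:

```yaml
# Excerpt from the emissary-ingress Deployment's container spec (sketch)
env:
- name: AMBASSADOR_HEALTHCHECK_BIND_ADDRESS
  value: "::"          # bind on all IPv6 addresses
- name: AMBASSADOR_HEALTHCHECK_BIND_PORT
  value: "8877"        # hypothetical port
- name: AMBASSADOR_HEALTHCHECK_IP_FAMILY
  value: "IPV6_ONLY"   # hypothetical value; restricts the server to IPv6
```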

+ + This allows the healthcheck server to be configured for IPv6-only k8s environments. (Thanks to Dmitry Golushko!) + docs: https://www.getambassador.io/docs/emissary/pre-release/topics/running/environment/ + + - title: Adopt standalone Ambassador Agent + type: change + body: >- + Previously, the Agent used for communicating with Ambassador Cloud was bundled into $productName$. This tied it to the same release schedule as $productName$ and made it difficult to iterate on its feature set. It has now been extracted into its own repository and has its own release process and schedule. + docs: https://github.com/datawire/ambassador-agent + + - version: 3.3.1 + date: '2022-12-08' + notes: + - title: Update Golang to 1.19.4 + type: security + body: >- + Updated Golang to the latest 1.19.4 patch release, which contains fixes for two CVEs: CVE-2022-41720, CVE-2022-41717. + + CVE-2022-41720 only affects Windows and $productName$ only ships on Linux. CVE-2022-41717 affects HTTP/2 servers that are exposed to external clients. $productName$ does not expose any of these golang servers to external clients directly. + + - version: 3.3.0 + date: '2022-11-02' + notes: + - title: Update Golang to 1.19.2 + type: security + body: >- + Updated Golang to 1.19.2 to address the CVEs: CVE-2022-2879, CVE-2022-2880, CVE-2022-41715. + + - title: Fix regression in http to https redirects with AuthService + type: bugfix + body: >- + By default $productName$ adds routes for http to https redirection. When + an AuthService is applied in v2.Y of $productName$, Envoy would skip the + ext_authz call for non-TLS http requests and would perform the https + redirect. In Envoy 1.20+ the behavior has changed: Envoy will + always call the ext_authz filter, and it must be disabled on a per-route + basis. + This behavior change introduced a regression in v3.0 of + $productName$ when it was upgraded to Envoy 1.22. The http to https + redirection no longer worked when an AuthService was applied. This fix + restores the previous behavior by disabling the ext_authz call on the + https redirect routes. + github: + - title: "#4620" + link: https://github.com/emissary-ingress/emissary/issues/4620 + + - title: Fix regression in host_redirects with AuthService + type: bugfix + body: >- + When an AuthService is applied in v2.Y of $productName$, + Envoy would skip the ext_authz call for all redirect routes and + would perform the redirect. In Envoy 1.20+ the behavior has changed: + Envoy will always call the ext_authz filter, so it must be + disabled on a per-route basis. + This behavior change introduced a regression in v3.0 of + $productName$ when it was upgraded to Envoy 1.22. The host_redirect + would call an AuthService prior to redirecting, if one was applied. This fix + restores the previous behavior by disabling the ext_authz call on the + host_redirect routes. + github: + - title: "#4640" + link: https://github.com/emissary-ingress/emissary/issues/4640 + + - version: 3.2.0 + date: '2022-09-27' + notes: + - title: Update Golang to 1.19.1 + type: security + body: >- + Updated Golang to 1.19.1 to address the CVEs: CVE-2022-27664, CVE-2022-32190. + + - title: Add support for Host resources using secrets from different namespaces + type: feature + body: >- + Previously the Host resource could only use secrets that are in the same namespace as the + Host. The tlsSecret field in the Host has a new subfield namespace that will allow + the use of secrets from different namespaces. 
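A sketch of the new subfield; the hostname, secret, and namespace names are hypothetical:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
  namespace: ambassador
spec:
  hostname: host.example.com
  tlsSecret:
    name: example-cert
    namespace: shared-certs  # new subfield: reference a secret in another namespace
```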
+ docs: topics/running/tls/#bring-your-own-certificate + + - title: Add failure_mode_deny option to the RateLimitService + type: feature + body: >- + By default, when Envoy is unable to communicate with the configured + RateLimitService, it will allow traffic through. The + RateLimitService resource now exposes the + failure_mode_deny + option. With failure_mode_deny: true set, Envoy will + deny traffic with a 500 when it is unable to communicate with the RateLimitService. + docs: topics/running/services/rate-limit-service/ + + - title: Allow bypassing of EDS for manual endpoint insertion + type: feature + body: >- + Set AMBASSADOR_EDS_BYPASS to true to bypass EDS handling of endpoints and have endpoints be + inserted to clusters manually. This can help resolve 503 UH errors caused by certificate rotation relating to + a delay between EDS + CDS. The default is false. + docs: topics/running/environment/#ambassador_eds_bypass + + - title: Add support for config change batch window before reconfiguring Envoy + type: feature + body: >- + The AMBASSADOR_RECONFIG_MAX_DELAY env var can be optionally set to batch changes for the specified + non-negative window period in seconds before doing an Envoy reconfiguration. Default is "1" if not set. + + - title: Allow setting custom_tags for traces + type: feature + body: >- + It is now possible to set custom_tags in the + TracingService. Trace tags can be set based on + literal values, environment variables, or request headers. The existing tag_headers field is now deprecated. If both tag_headers and custom_tags are set then tag_headers will be ignored. + (Thanks to Paul!) + docs: topics/running/services/tracing-service/ + + - title: Change to behavior for associating Hosts with Mappings and Listeners with Hosts + type: change + body: >- + Changes to label matching will change how Hosts are associated with Mappings and how Listeners are associated with Hosts. There was a bug with label + selectors that was causing resources that configure a selector to be incorrectly associated with more resources than intended. + If any single label from the selector was matched then the resources would be associated. + This has been updated to correctly associate these resources only if all labels required by + their selector are present. This brings the mappingSelector/selector fields in line with how label selectors are used + in Kubernetes. To avoid unexpected behavior after the upgrade, add all labels that Hosts/Listeners have in their + mappingSelector/selector to Mappings/Hosts you want to associate with them. You can opt out of the new behavior + by setting the environment variable DISABLE_STRICT_LABEL_SELECTORS to "true" (default: "false"). + (Thanks to Filip Herceg and Joe Andaverde!) + docs: topics/running/environment/#disable_strict_label_selectors + + - title: Envoy upgraded to 1.23.0 + type: change + body: >- + The Envoy version included in $productName$ has been upgraded from 1.22 to the latest 1.23.0 release. This provides $productName$ with the latest security patches, performance enhancements, and features offered by the Envoy proxy. + docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.23/v1.23.0 + + - title: Correctly manage cluster names when service names are very long + type: bugfix + body: >- + Distinct services with names that are the same in the first forty characters will no longer be incorrectly mapped to the same cluster. 
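To make the failure_mode_deny option described in this release's notes concrete, a sketch; the backend address is hypothetical:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: ratelimit.default:8080  # hypothetical rate limit backend
  protocol_version: v3
  failure_mode_deny: true          # deny traffic with a 500 when the service is unreachable
```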
+ github: + - title: "#4354" + link: https://github.com/emissary-ingress/emissary/issues/4354 + + - title: Properly populate alt_stats_name for Tracing, Auth and RateLimit Services + type: bugfix + body: >- + Previously, setting the stats_name for the TracingService, RateLimitService + or the AuthService would have no effect because it was not being properly passed to the Envoy cluster + config. This has been fixed and the alt_stats_name field in the cluster config is now set correctly. + (Thanks to Paul!) + + - title: Diagnostics stats properly handles parsing envoy metrics with colons + type: bugfix + body: >- + If a Host or TLSContext contained a hostname with a :, then using the diagnostics endpoint ambassador/v0/diagd would throw an error due to the parsing logic not being able to handle the extra colon. This has been fixed and $productName$ will not throw an error when parsing + envoy metrics for the diagnostics user interface. + + - title: TCPMappings use correct SNI configuration + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that uses SNI, + instead of using the hostname glob in the TCPMapping, uses the hostname glob + in the Host that the TLS termination configuration comes from. + + - title: TCPMappings configure TLS termination without a Host resource + type: bugfix + body: >- + $productName$ 2.0.0 introduced a bug where a TCPMapping that terminates TLS + must have a corresponding Host that it can take the TLS configuration from. + This was semi-intentional, but didn't make much sense. You can now use a + TLSContext without a Host as in $productName$ 1.y releases, or a + Host with or without a TLSContext as in prior 2.y releases. + + - title: TCPMappings and HTTP Hosts can coexist on Listeners that terminate TLS + type: bugfix + body: >- + Prior releases of $productName$ had the arbitrary limitation that a + TCPMapping cannot be used on the same port that HTTP is served on, even if + TLS+SNI would make this possible. $productName$ now allows TCPMappings to be + used on the same Listener port as HTTP Hosts, as long as that + Listener terminates TLS. + + - version: 3.1.0 + date: '2022-08-01' + notes: + - title: Add support for OpenAPI 2 contracts + type: feature + body: >- + The agent is now able to parse API contracts using Swagger 2, and to convert them to OpenAPI 3, making them available for use in the dev portal. + - title: Add new secrets sync directive to the Agent + type: feature + body: >- + Adds a new command to the agent directive service to manage secrets. This allows a third-party product to manage CRDs that depend upon a secret. + - title: Add additional pprof endpoints + type: feature + body: >- + Add additional pprof endpoints to allow for profiling $productName$: + - CPU profiles (/debug/pprof/profile) + - tracing (/debug/pprof/trace) + - the running program's command line (/debug/pprof/cmdline) + - program counters (/debug/pprof/symbol) + - title: Default YAML enables the diagnostics interface from non-local clients on the admin service port + type: change + body: >- + In the standard published .yaml files, the Module resource enables serving remote client requests to the :8877/ambassador/v0/diag/ endpoint. The associated Helm chart release also now enables it by default. + - title: Fix regression in the agent for the metrics transfer + type: bugfix + body: >- + A regression was introduced in 2.3.0 causing the agent to miss some of the metrics coming from emissary ingress before sending them to Ambassador cloud. 
This issue has been resolved to ensure
+ that all the nodes composing the Emissary-ingress cluster are reporting properly.
+ - title: Update Golang to 1.17.12
+ type: security
+ body: >-
+ Updated Golang to 1.17.12 to address the CVEs: CVE-2022-23806, CVE-2022-28327, CVE-2022-24675, CVE-2022-24921, CVE-2022-23772.
+ - title: Update Curl to 7.80.0-r2
+ type: security
+ body: >-
+ Updated Curl to 7.80.0-r2 to address the CVEs: CVE-2022-32207, CVE-2022-27782, CVE-2022-27781, CVE-2022-27780.
+ - title: Update openSSL-dev to 1.1.1q-r0
+ type: security
+ body: >-
+ Updated openSSL-dev to 1.1.1q-r0 to address CVE-2022-2097.
+ - title: Update ncurses to 1.1.1q-r0
+ type: security
+ body: >-
+ Updated ncurses to 1.1.1q-r0 to address CVE-2022-29458.
+ - version: 3.0.0
+ date: '2022-06-28'
+ notes:
+ - title: Upgrade to Envoy 1.22
+ type: change
+ body: >-
+ $productName$ has been upgraded to the latest Envoy 1.22 patch release, which provides security, performance, and feature enhancements. You can read more about them here: Envoy Proxy 1.22.0 Release Notes
+
+ This is a major jump in Envoy versions from the 1.17 series used in Edge Stack 2.X. Most of the changes are under the hood and allow $productName$ to adopt new features in the future. However, one major change that will affect users is the removal of V2 xDS Transport Protocol support. See below for additional information.
+ - title: Envoy V2 xDS Transport Protocol Support Removed
+ type: change
+ body: >-
+ Envoy has removed support for the V2 xDS Transport Protocol, which means $productName$ now supports only the Envoy V3 xDS Transport Protocol.
+
+ Users should first upgrade to $productName$ 2.3 and verify that their AuthServices, LogServices, and RateLimitServices work properly with protocol_version: "v3" before upgrading further.
+
+ If protocol_version is not specified in 3.Y, the default value of v2 will cause an error to be posted and a static response will be returned. Therefore, you must set it to protocol_version: v3 (sketched below). If upgrading from a previous version, you will want to set it to v3 and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for protocol_version remains v2 in the getambassador.io/v3alpha1 CRD specifications to avoid making breaking changes outside of a CRD version change. Future CRD versions will deprecate it.
+
+ docs: topics/running/services/auth-service/
+ - title: HTTP/3 Downstream Support
+ type: feature
+ body: >-
+ With the upgrade to Envoy, $productName$ is now able to provide downstream support for HTTP/3.
+
+ HTTP/3 is built on the QUIC protocol, which communicates using the UDP network protocol. QUIC requires that TLS v1.3 be used when communicating between client and server.
+ docs: topics/running/http3
+ - title: Zipkin driver for TracingService removed support for HTTP_JSON_V1
+ type: change
+ body: >-
+ When using the zipkin driver for the TracingService, collector_endpoint_version no longer accepts `HTTP_JSON_V1` due to the Envoy upgrade. The new default value is `HTTP_JSON`.
+ docs: topics/running/services/tracing-service/
+
+ - version: 2.5.1
+ date: '2022-12-08'
+ notes:
+ - title: Update Golang to 1.19.4
+ type: security
+ body: >-
+ Updated Golang to the latest 1.19.4 patch release, which addresses two CVEs: CVE-2022-41720, CVE-2022-41717.
+
+ CVE-2022-41720 only affects Windows, and $productName$ only ships on Linux. CVE-2022-41717 affects HTTP/2 servers that are exposed to external clients, and $productName$ does not expose any of these Golang servers to external clients directly.
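+ # A minimal, hedged sketch of the protocol_version setting discussed in the
+ # 3.0.0 notes above. The resource name and upstream address are illustrative
+ # assumptions, not values from this changelog:
+ #
+ #   apiVersion: getambassador.io/v3alpha1
+ #   kind: AuthService
+ #   metadata:
+ #     name: example-auth
+ #   spec:
+ #     auth_service: "example-auth:50051"
+ #     proto: grpc
+ #     protocol_version: "v3"  # set and verify this before upgrading to 3.Y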
+
+ - version: 2.5.0
+ date: '2022-11-03'
+ notes:
+ - title: Diagnostics stats properly handles parsing envoy metrics with colons
+ type: bugfix
+ body: >-
+ If a Host or TLSContext contained a hostname with a :, then using the
+ diagnostics endpoint ambassador/v0/diagd would throw an error because the parsing logic could not
+ handle the extra colon. This has been fixed, and $productName$ no longer throws an error when parsing
+ Envoy metrics for the diagnostics user interface.
+
+ - title: Bump Golang to 1.19.2
+ type: security
+ body: >-
+ Bump Go from 1.17.12 to 1.19.2. This is to keep the Go version current.
+
+ - version: 2.4.1
+ date: '2022-10-10'
+ notes:
+ - title: Diagnostics stats properly handles parsing envoy metrics with colons
+ type: bugfix
+ body: >-
+ If a Host or TLSContext contained a hostname with a :, then using the diagnostics endpoint ambassador/v0/diagd would throw an error because the parsing logic could not handle the extra colon. This has been fixed, and $productName$ no longer throws an error when parsing Envoy metrics for the diagnostics user interface.
+
+ - title: Backport fixes for handling synthetic auth services
+ type: bugfix
+ body: >-
+ The synthetic AuthService didn't correctly handle AmbassadorID, which was fixed in version 3.1 of $productName$. The fix has been backported to make sure the AuthService is handled correctly during upgrades.
+
+ - version: 2.4.0
+ date: '2022-09-19'
+ notes:
+ - title: Add support for Host resources using secrets from different namespaces
+ type: feature
+ body: >-
+ Previously the Host resource could only use secrets that are in the same namespace as the
+ Host. The tlsSecret field in the Host has a new subfield namespace that allows
+ the use of secrets from different namespaces.
+ docs: topics/running/tls/#bring-your-own-certificate
+
+ - title: Allow bypassing of EDS for manual endpoint insertion
+ type: change
+ body: >-
+ Set `AMBASSADOR_EDS_BYPASS` to `true` to bypass EDS handling of endpoints and have endpoints be
+ inserted into clusters manually. This can help resolve `503 UH` errors caused by certificate rotation related to
+ a delay between EDS and CDS. The default is `false`.
+
+ - title: Properly populate alt_stats_name for Tracing, Auth and RateLimit Services
+ type: bugfix
+ body: >-
+ Previously, setting the stats_name for the TracingService, RateLimitService,
+ or the AuthService would have no effect because it was not being properly passed to the Envoy cluster
+ config. This has been fixed, and the alt_stats_name field in the cluster config is now set correctly.
+ (Thanks to Paul!)
+
+ - title: Add support for config change batch window before reconfiguring Envoy
+ type: feature
+ body: >-
+ The AMBASSADOR_RECONFIG_MAX_DELAY env var can be optionally set to batch changes for the specified
+ non-negative window period in seconds before doing an Envoy reconfiguration. Default is "1" if not set.
+
+ - title: TCPMappings use correct SNI configuration
+ type: bugfix
+ body: >-
+ $productName$ 2.0.0 introduced a bug where a TCPMapping that uses SNI
+ would use the hostname glob in the Host that the TLS termination configuration
+ comes from, instead of the hostname glob in the TCPMapping itself.
+
+ - title: TCPMappings configure TLS termination without a Host resource
+ type: bugfix
+ body: >-
+ $productName$ 2.0.0 introduced a bug where a TCPMapping that terminates TLS
+ had to have a corresponding Host from which it could take its TLS configuration.
+ This was semi-intentional, but didn't make much sense. You can now use a
+ TLSContext without a Host as in $productName$ 1.y releases, or a
+ Host with or without a TLSContext as in prior 2.y releases.
+
+ - title: TCPMappings and HTTP Hosts can coexist on Listeners that terminate TLS
+ type: bugfix
+ body: >-
+ Prior releases of $productName$ had the arbitrary limitation that a
+ TCPMapping could not be used on the same port that HTTP is served on, even if
+ TLS+SNI would make this possible. $productName$ now allows TCPMappings to be
+ used on the same Listener port as HTTP Hosts, as long as that
+ Listener terminates TLS.
+
+ - version: 2.3.2
+ date: '2022-08-01'
+ notes:
+ - title: Fix regression in the agent for the metrics transfer
+ type: bugfix
+ body: >-
+ A regression was introduced in 2.3.0 causing the agent to miss some of the metrics coming from
+ Emissary-ingress before sending them to Ambassador Cloud. This issue has been resolved to ensure
+ that all the nodes composing the Emissary-ingress cluster are reporting properly.
+
+ - title: Update Golang to 1.17.12
+ type: security
+ body: >-
+ Updated Golang to 1.17.12 to address the CVEs: CVE-2022-23806, CVE-2022-28327, CVE-2022-24675,
+ CVE-2022-24921, CVE-2022-23772.
+
+ - title: Update Curl to 7.80.0-r2
+ type: security
+ body: >-
+ Updated Curl to 7.80.0-r2 to address the CVEs: CVE-2022-32207, CVE-2022-27782, CVE-2022-27781,
+ CVE-2022-27780.
+
+ - title: Update openSSL-dev to 1.1.1q-r0
+ type: security
+ body: >-
+ Updated openSSL-dev to 1.1.1q-r0 to address CVE-2022-2097.
+
+ - title: Update ncurses to 1.1.1q-r0
+ type: security
+ body: >-
+ Updated ncurses to 1.1.1q-r0 to address CVE-2022-29458.
+
+
+ - version: 2.3.1
+ date: '2022-06-10'
+ notes:
+ - title: Fix regression in tracing service config
+ type: bugfix
+ body: >-
+ A regression was introduced in 2.3.0 that leaked zipkin default config fields into the configuration
+ for the other drivers (lightstep, etc.). This caused $productName$ to crash on startup. This issue has been resolved
+ to ensure that the defaults are only applied when the driver is zipkin.
+ docs: https://github.com/emissary-ingress/emissary/issues/4267
+
+ - title: Envoy security updates
+ type: security
+ body: >-
+ We have backported patches from the Envoy 1.19.5 security update to $productName$'s
+ 1.17-based Envoy, addressing CVE-2022-29224 and CVE-2022-29225. $productName$ is not
+ affected by CVE-2022-29226, CVE-2022-29227, or CVE-2022-29228, as it does not support internal
+ redirects and does not use Envoy's built-in OAuth2 filter.
+ docs: https://groups.google.com/g/envoy-announce/c/8nP3Kn4jV7k
+
+ - version: 2.3.0
+ date: '2022-06-06'
+ notes:
+ - title: Remove unused packages
+ type: security
+ body: >-
+ Completely remove gdbm, pip, smtplib, and sqlite packages, as they are unused.
+
+ - title: CORS now happens before auth
+ type: bugfix
+ body: >-
+ When CORS is specified (either in a Mapping or in the Ambassador
+ Module), CORS processing will happen before authentication. This corrects a
+ problem where XHR to authenticated endpoints would fail.
+
+ - title: Correctly handle caching of Mappings with the same name in different namespaces
+ type: bugfix
+ body: >-
+ In 2.x releases of $productName$, when multiple Mappings had the same
+ metadata.name across multiple namespaces, their old config was not properly removed
+ from the cache when their config was updated.
This resulted in an inability to update configuration
+ for groups of Mappings that share the same name until the $productName$ pods restarted.
+
+ - title: Fix support for Zipkin API-v1 with Envoy xDS-v3
+ type: bugfix
+ body: >-
+ It is now possible for a TracingService to specify
+ collector_endpoint_version: HTTP_JSON_V1 when using xDS v3 to configure Envoy
+ (which has been the default since $productName$ 1.14.0). The HTTP_JSON_V1
+ value configures Envoy to speak to Zipkin using Zipkin's old API-v1, while the
+ HTTP_JSON value configures Envoy to speak to Zipkin using Zipkin's new
+ API-v2. In previous versions of $productName$ it was only possible to use
+ HTTP_JSON_V1 when explicitly setting the
+ AMBASSADOR_ENVOY_API_VERSION=V2 environment variable to force use of xDS v2
+ to configure Envoy.
+ docs: topics/running/services/tracing-service/
+
+ - title: Allow setting propagation modes for Lightstep tracing
+ type: feature
+ body: >-
+ It is now possible to set propagation_modes in the
+ TracingService config when using lightstep as the driver.
+ (Thanks to Paul!)
+ docs: topics/running/services/tracing-service/
+ github:
+ - title: '#4179'
+ link: https://github.com/emissary-ingress/emissary/pull/4179
+
+ - title: Added Support for Certificate Revocation Lists
+ type: feature
+ body: >-
+ $productName$ now supports Envoy's Certificate Revocation Lists.
+ This allows users to specify a list of certificates that $productName$ should reject even if the certificate itself is otherwise valid.
+ docs: topics/running/tls
+
+ - title: Added support for the LogService v3 transport protocol
+ type: feature
+ body: >-
+ Previously, a LogService would always have $productName$ communicate with the
+ external log service using the envoy.service.accesslog.v2.AccessLogService
+ API. It is now possible for the LogService to specify
+ protocol_version: v3 to use the newer
+ envoy.service.accesslog.v3.AccessLogService API instead. This functionality
+ is not available if you set the AMBASSADOR_ENVOY_API_VERSION=V2 environment
+ variable.
+ docs: topics/running/services/log-service/
+
+ - title: Deprecated v2 transport protocol for AuthServices
+ type: change
+ body: >-
+ A future release of $productName$ will remove support for the now-deprecated v2 transport protocol in AuthServices. Instructions for migrating existing AuthServices from v2 to v3 can be found on the AuthService page. This change only impacts gRPC AuthServices. HTTP AuthServices are unaffected by this change.
+ docs: topics/running/services/auth-service/
+
+ edgeStackNotes:
+ - title: Improved performance processing OAuth2 Filters
+ type: change
+ body: >-
+ Previously, when each OAuth2 Filter that references a Kubernetes secret was loaded, $AESproductName$ needed to communicate with the API server to request and validate that secret before loading the next Filter. To improve performance, $AESproductName$ now loads and validates all secrets required by OAuth2 Filters at once, prior to loading the filters.
+
+ - title: Added Support for transport protocol v3 in External Filters
+ type: feature
+ body: >-
+ External Filters can now make use of the v3 transport protocol. In addition to the support for the v3 transport protocol, the default AuthService installed with $AESproductName$ will now only operate with transport protocol v3. In order to support existing External Filters using v2, $AESproductName$ will automatically translate
+ v2 to the new default of v3.
All External Filters are assumed to be using transport protocol v2 and will use the automatic conversion to v3 unless the new protocol_version field on the External Filter is explicitly set to v3.
+
+ - title: Deprecated v2 transport protocol for External Filters
+ type: change
+ body: >-
+ A future release of $AESproductName$ will remove support for the now-deprecated v2 transport protocol in External Filters. Migrating existing External Filters from v2 to v3 is simple, and an example can be found on the External Filter page. This change only impacts gRPC External Filters. HTTP External Filters are unaffected by this change.
+
+ - version: 2.2.2
+ date: '2022-02-25'
+ notes:
+ - title: TLS Secret validation is now opt-in
+ type: change
+ body: >-
+ You may now choose to enable TLS Secret validation by setting the
+ AMBASSADOR_FORCE_SECRET_VALIDATION=true environment variable. The default configuration does not
+ enforce secret validation.
+ docs: topics/running/tls#certificates-and-secrets
+
+ - title: Correctly validate EC (Elliptic Curve) Private Keys
+ type: bugfix
+ body: >-
+ Kubernetes Secrets that should contain an EC (Elliptic Curve) TLS Private Key are now properly validated.
+ github:
+ - title: '#4134'
+ link: https://github.com/emissary-ingress/emissary/issues/4134
+ docs: topics/running/tls#certificates-and-secrets
+
+ - version: 2.2.1
+ date: '2022-02-22'
+ notes:
+ - title: Envoy V2 API deprecation
+ type: change
+ body: >-
+ Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$
+ v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same
+ time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0).
+
+ - title: Envoy security updates
+ type: security
+ body: >-
+ Upgraded Envoy to address security vulnerabilities CVE-2021-43824, CVE-2021-43825, CVE-2021-43826,
+ CVE-2022-21654, and CVE-2022-21655.
+ docs: https://groups.google.com/g/envoy-announce/c/bIUgEDKHl4g
+ - title: Correctly support canceling rollouts
+ type: bugfix
+ body: >-
+ The Ambassador Agent now correctly supports requests to cancel a rollout.
+ docs: ../../argo/latest/howtos/manage-rollouts-using-cloud
+
+ - version: 2.2.0
+ date: '2022-02-10'
+ notes:
+ - title: Envoy V2 API deprecation
+ type: change
+ body: >-
+ Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$
+ v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same
+ time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0).
+
+ - title: Emissary-ingress will watch for Cloud Connect Tokens
+ type: change
+ body: >-
+ $productName$ will now watch for ConfigMap or Secret resources specified by the
+ AGENT_CONFIG_RESOURCE_NAME environment variable in order to allow all
+ components (and not only the Ambassador Agent) to authenticate requests to
+ Ambassador Cloud.
+ image: ./v2.2.0-cloud.png
+
+ - title: Update Alpine and libraries
+ type: security
+ body: >-
+ $productName$ has updated Alpine to 3.15, and Python and Go dependencies
+ to their latest compatible versions, to incorporate numerous security patches.
+
+ - title: Support a log-level metric
+ type: feature
+ body: >-
+ $productName$ now supports the metric ambassador_log_level{label="debug"}
+ which will be set to 1 if debug logging is enabled for the running Emissary
+ instance, or to 0 if not.
This can help confirm that a running production
+ instance was not accidentally left with debug logging enabled, for example.
+ (Thanks to Fabrice!)
+ github:
+ - title: '#3906'
+ link: https://github.com/emissary-ingress/emissary/issues/3906
+ docs: topics/running/statistics/8877-metrics/
+
+ - title: Envoy configuration % escaping
+ type: feature
+ body: >-
+ $productName$ is now leveraging a new Envoy Proxy patch that allows Envoy to accept escaped
+ '%' characters in its configuration. This means that error_response_overrides and other
+ custom user content can now contain '%' symbols escaped as '%%'.
+ docs: topics/running/custom-error-responses
+ github:
+ - title: 'DW Envoy: 74'
+ link: https://github.com/datawire/envoy/pull/74
+ - title: 'Upstream Envoy: 19383'
+ link: https://github.com/envoyproxy/envoy/pull/19383
+ image: ./v2.2.0-percent-escape.png
+
+ - title: Stream metrics from Envoy to Ambassador Cloud
+ type: feature
+ body: >-
+ Support for streaming Envoy metrics about the clusters to Ambassador Cloud.
+ github:
+ - title: '#4053'
+ link: https://github.com/emissary-ingress/emissary/pull/4053
+ docs: https://github.com/emissary-ingress/emissary/pull/4053
+
+ - title: Support commands to pause, continue, and abort a Rollout via Agent directives
+ type: feature
+ body: >-
+ The Ambassador agent now receives commands to manipulate Rollouts (pause, continue, and
+ abort are currently supported) via directives and executes them in the cluster. A report
+ is sent to Ambassador Cloud including the command ID, whether it ran successfully, and
+ an error message if one occurred.
+ github:
+ - title: '#4040'
+ link: https://github.com/emissary-ingress/emissary/pull/4040
+ docs: https://github.com/emissary-ingress/emissary/pull/4040
+
+ - title: Validate certificates in TLS Secrets
+ type: bugfix
+ body: >-
+ Kubernetes Secrets that should contain TLS certificates are now validated before being
+ accepted for configuration. A Secret that contains an invalid TLS certificate will be logged
+ as an invalid resource.
+ github:
+ - title: '#3821'
+ link: https://github.com/emissary-ingress/emissary/issues/3821
+ docs: ../topics/running/tls
+ image: ./v2.2.0-tls-cert-validation.png
+
+ edgeStackNotes:
+ - title: Devportal support for using API server definitions from OpenAPI docs
+ type: feature
+ body: >-
+ You can now set preserve_servers in Ambassador Edge Stack's
+ DevPortal resource to configure the DevPortal to use server definitions from
+ the OpenAPI document when displaying connection information for services in the DevPortal.
+
+ - version: 2.1.2
+ prevVersion: 2.1.0
+ date: '2022-01-25'
+ notes:
+ - title: Envoy V2 API deprecation
+ type: change
+ body: >-
+ Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$
+ v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same
+ time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0).
+
+ - title: Docker BuildKit always used for builds
+ type: change
+ body: >-
+ Docker BuildKit is enabled for all Emissary builds. Additionally, the Go
+ build cache is fully enabled when building images, speeding up repeated builds.
+ docs: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md
+
+ - title: Fix support for v2 Mappings with CORS
+ type: bugfix
+ body: >-
+ Emissary-ingress 2.1.0 generated invalid Envoy configuration for
+ getambassador.io/v2 Mappings that set
+ spec.cors.origins to a string rather than a list of strings; this has been
+ fixed, and these Mappings should once again function correctly.
+ docs: topics/using/cors/#the-cors-attribute
+ image: ./v2.1.2-mapping-cors.png
+
+ - title: Correctly handle canary Mapping weights when reconfiguring
+ type: bugfix
+ body: >-
+ Changes to the weight of a Mapping in a canary group
+ will now always be correctly managed during reconfiguration; such changes could
+ have been missed in earlier releases.
+ docs: topics/using/canary/#the-weight-attribute
+
+ - title: Correctly handle solitary Mappings with explicit weights
+ type: bugfix
+ body: >-
+ A Mapping that is not part of a canary group, but that has a
+ weight less than 100, will be correctly configured to receive all
+ traffic as if the weight were 100.
+ docs: topics/using/canary/#the-weight-attribute
+ image: ./v2.1.2-mapping-less-weighted.png
+
+ - title: Correctly handle empty rewrite in a Mapping
+ type: bugfix
+ body: >-
+ Using rewrite: "" in a Mapping is correctly handled
+ to mean "do not rewrite the path at all".
+ docs: topics/using/rewrites
+ image: ./v2.1.2-mapping-no-rewrite.png
+
+ - title: Correctly use Mappings with host redirects
+ type: bugfix
+ body: >-
+ Any Mapping that uses the host_redirect field is now properly discovered and used. Thanks
+ to Gabriel Féron for contributing this bugfix!
+ github:
+ - title: '#3709'
+ link: https://github.com/emissary-ingress/emissary/issues/3709
+ docs: https://github.com/emissary-ingress/emissary/issues/3709
+
+ - title: Correctly handle DNS wildcards when associating Hosts and Mappings
+ type: bugfix
+ body: >-
+ Mappings with a DNS wildcard hostname will now be correctly
+ matched with Hosts. Previously, when both the Host
+ and the Mapping used DNS wildcards for their hostnames, they could sometimes
+ fail to match when they should have.
+ docs: howtos/configure-communications/
+ image: ./v2.1.2-host-mapping-matching.png
+
+ - title: Fix overriding global settings for adding or removing headers
+ type: bugfix
+ body: >-
+ If the ambassador Module sets a global default for
+ add_request_headers, add_response_headers,
+ remove_request_headers, or remove_response_headers, it is often
+ desirable to be able to turn off that setting locally for a specific Mapping.
+ For several releases this has not been possible for Mappings that are native
+ Kubernetes resources (as opposed to annotations), as an empty value ("mask the global
+ default") was erroneously considered to be equivalent to unset ("inherit the global
+ default"). This is now fixed.
+ docs: topics/using/defaults/
+
+ - title: Fix empty error_response_override bodies
+ type: bugfix
+ body: >-
+ It is now possible to set a Mapping
+ spec.error_response_overrides body.text_format to an empty
+ string or body.json_format to an empty dict. Previously, this was possible
+ for annotations but not for native Kubernetes resources.
+ docs: topics/running/custom-error-responses/
+
+ - title: Annotation conversion and validation
+ type: bugfix
+ body: >-
+ Resources that exist as getambassador.io/config annotations rather than as
+ native Kubernetes resources are now validated and internally converted to v3alpha1,
+ the same as native Kubernetes resources.
+ image: ./v2.1.2-annotations.png
+
+ - title: Validation error reporting
+ type: bugfix
+ body: >-
+ Resource validation errors are now reported more consistently; previously,
+ in some situations a validation error would not be reported.
+
+ - version: 2.1.1
+ date: 'N/A'
+ notes:
+ - title: Never issued
+ type: change
+ isHeadline: true
+ body: >-
+ Emissary-ingress 2.1.1 was not issued; Ambassador Edge Stack 2.1.1 uses
+ Emissary-ingress 2.1.0.
+
+ - version: 2.1.0
+ date: '2021-12-16'
+ notes:
+ - title: Not recommended; upgrade to 2.1.2 instead
+ type: change
+ isHeadline: true
+ body: >-
+ Emissary-ingress 2.1.0 is not recommended; upgrade to 2.1.2 instead.
+
+ - title: Envoy V2 API deprecation
+ type: change
+ body: >-
+ Support for the Envoy V2 API is deprecated as of $productName$ v2.1, and will be removed in $productName$
+ v3.0. The AMBASSADOR_ENVOY_API_VERSION environment variable will be removed at the same
+ time. Only the Envoy V3 API will be supported (this has been the default since $productName$ v1.14.0).
+
+ - title: Smoother migrations with support for getambassador.io/v2 CRDs
+ type: feature
+ body: >-
+ $productName$ supports getambassador.io/v2 CRDs, to simplify migration from $productName$
+ 1.X. Note: it is important to read the migration
+ documentation before starting migration.
+ docs: topics/install/migration-matrix
+ image: ./v2.1.0-smoother-migration.png
+
+ - title: Correctly handle all changing canary configurations
+ type: bugfix
+ body: >-
+ The incremental reconfiguration cache could miss some updates when multiple
+ Mappings had the same prefix ("canarying" multiple
+ Mappings together; sketched below). This has been corrected, so that all such
+ updates correctly take effect.
+ github:
+ - title: '#3945'
+ link: https://github.com/emissary-ingress/emissary/issues/3945
+ docs: https://github.com/emissary-ingress/emissary/issues/3945
+ image: ./v2.1.0-canary.png
+
+ - title: Secrets used for ACME private keys will not log errors
+ type: bugfix
+ body: >-
+ When using Kubernetes Secrets to store ACME private keys (as the Edge Stack
+ ACME client does), an error would always be logged about the Secret not being
+ present, even though it was present and everything was working correctly.
+ This error is no longer logged.
+
+ - title: When using gzip, upstreams will no longer receive encoded data
+ type: bugfix
+ body: >-
+ When using gzip compression, upstream services will no longer receive compressed
+ data. This bug was introduced in 1.14.0. The fix restores the default behavior of
+ not sending compressed data to upstream services.
+ github:
+ - title: '#3818'
+ link: https://github.com/emissary-ingress/emissary/issues/3818
+ docs: https://github.com/emissary-ingress/emissary/issues/3818
+ image: ./v2.1.0-gzip-enabled.png
+
+ - title: Update to busybox 1.34.1
+ type: security
+ body: >-
+ Update to busybox 1.34.1 to resolve CVE-2021-28831, CVE-2021-42378,
+ CVE-2021-42379, CVE-2021-42380, CVE-2021-42381, CVE-2021-42382, CVE-2021-42383,
+ CVE-2021-42384, CVE-2021-42385, and CVE-2021-42386.
+
+ - title: Update Python dependencies
+ type: security
+ body: >-
+ Update Python dependencies to resolve CVE-2020-28493 (jinja2), CVE-2021-28363
+ (urllib3), and CVE-2021-33503 (urllib3).
+
+ - title: Remove test-only code from the built image
+ type: security
+ body: >-
+ Previous built images included some Python packages used only for testing. These
+ have now been removed, resolving CVE-2020-29651.
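+ # A hedged, minimal sketch of the canary pattern referenced in the 2.1.0 and
+ # 2.1.2 notes above: two Mappings sharing a prefix, with a fraction of traffic
+ # weighted to the canary. All names, the prefix, and the services are
+ # illustrative assumptions, not values from this changelog:
+ #
+ #   apiVersion: getambassador.io/v3alpha1
+ #   kind: Mapping
+ #   metadata:
+ #     name: example-stable
+ #   spec:
+ #     hostname: "*"
+ #     prefix: /example/
+ #     service: example-stable
+ #   ---
+ #   apiVersion: getambassador.io/v3alpha1
+ #   kind: Mapping
+ #   metadata:
+ #     name: example-canary
+ #   spec:
+ #     hostname: "*"
+ #     prefix: /example/
+ #     service: example-canary
+ #     weight: 10  # roughly 10% of /example/ traffic goes to the canary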
+
+ - version: 2.0.5
+ date: '2021-11-08'
+ notes:
+ - title: AuthService circuit breakers
+ type: feature
+ body: >-
+ It is now possible to set the circuit_breakers for AuthServices,
+ exactly the same as for Mappings and TCPMappings. This makes it
+ possible to configure your AuthService to be able to handle more than 1024
+ concurrent requests.
+ docs: topics/running/services/auth-service/
+ image: ./v2.0.5-auth-circuit-breaker.png
+
+ - title: Improved validity checking for error response overrides
+ type: bugfix
+ body: >-
+ Any token delimited by '%' is now validated against a whitelist of valid
+ Envoy command operators. Any mapping containing an error_response_overrides
+ section with invalid command operators will be discarded.
+ docs: topics/running/custom-error-responses
+
+ - title: mappingSelector is now correctly supported in the Host CRD
+ type: bugfix
+ body: >-
+ The Host CRD now correctly supports the mappingSelector
+ element, as documented. As a transition aid, selector is a synonym for
+ mappingSelector; a future version of $productName$ will remove the
+ selector element.
+ github:
+ - title: '#3902'
+ link: https://github.com/emissary-ingress/emissary/issues/3902
+ docs: https://github.com/emissary-ingress/emissary/issues/3902
+ image: ./v2.0.5-mappingselector.png
+
+ - version: 2.0.4
+ date: '2021-10-19'
+ notes:
+ - title: General availability!
+ type: feature
+ body: >-
+ We're pleased to introduce $productName$ 2.0.4 for general availability! The
+ 2.X family introduces a number of changes to allow $productName$ to more
+ gracefully handle larger installations, reduce global configuration to better
+ handle multitenant or multiorganizational installations, reduce memory footprint, and
+ improve performance. We welcome feedback!! Join us on
+ Slack and let us know what you think.
+ isHeadline: true
+ docs: about/changes-2.x
+ image: ./emissary-ga.png
+
+ - title: API version getambassador.io/v3alpha1
+ type: change
+ body: >-
+ The x.getambassador.io/v3alpha1 API version has become the
+ getambassador.io/v3alpha1 API version. The Ambassador-
+ prefixes from x.getambassador.io/v3alpha1 resources have been
+ removed for ease of migration. Note that getambassador.io/v3alpha1
+ is the only supported API version for 2.0.4 — full support for
+ getambassador.io/v2 will arrive soon in a later 2.X version.
+ docs: about/changes-2.x
+ image: ./v2.0.4-v3alpha1.png
+
+ - title: Support for Kubernetes 1.22
+ type: feature
+ body: >-
+ The getambassador.io/v3alpha1 API version and the published chart
+ and manifests have been updated to support Kubernetes 1.22. Thanks to
+ Mohit Sharma for contributions to
+ this feature!
+ docs: about/changes-2.x
+ image: ./v2.0.4-k8s-1.22.png
+
+ - title: Mappings support configuring strict or logical DNS
+ type: feature
+ body: >-
+ You can now set dns_type between strict_dns and
+ logical_dns in a Mapping to configure the Service
+ Discovery Type.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+ image: ./v2.0.4-mapping-dns-type.png
+
+ - title: Mappings support controlling DNS refresh with DNS TTL
+ type: feature
+ body: >-
+ You can now set respect_dns_ttl to true to force the
+ DNS refresh rate for a Mapping to be set to the record's TTL
+ obtained from DNS resolution.
+
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Support configuring upstream buffer sizes
+ type: feature
+ body: >-
+ You can now set buffer_limit_bytes in the ambassador
+ Module to change the size of the upstream read and write buffers.
+ The default is 1MiB.
+ docs: topics/running/ambassador/#modify-default-buffer-size
+
+ - title: Version number reported correctly
+ type: bugfix
+ body: >-
+ The release now shows its actual released version number, rather than
+ the internal development version number.
+ github:
+ - title: '#3854'
+ link: https://github.com/emissary-ingress/emissary/issues/3854
+ docs: https://github.com/emissary-ingress/emissary/issues/3854
+ image: ./v2.0.4-version.png
+
+ - title: Large configurations work correctly with Ambassador Cloud
+ type: bugfix
+ body: >-
+ Large configurations no longer cause $productName$ to be unable
+ to communicate with Ambassador Cloud.
+ github:
+ - title: '#3593'
+ link: https://github.com/emissary-ingress/emissary/issues/3593
+ docs: https://github.com/emissary-ingress/emissary/issues/3593
+
+ - title: Listeners correctly support l7Depth
+ type: bugfix
+ body: >-
+ The l7Depth element of the Listener CRD is
+ properly supported.
+ docs: topics/running/listener#l7depth
+ image: ./v2.0.4-l7depth.png
+
+ - version: 2.0.3-ea
+ date: '2021-09-16'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.3 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ isHeadline: true
+ docs: about/changes-2.x
+
+ - title: AES_LOG_LEVEL more widely effective
+ body: The environment variable AES_LOG_LEVEL now also sets the log level for the diagd logger.
+ type: feature
+ docs: topics/running/running/
+ github:
+ - title: '#3686'
+ link: https://github.com/emissary-ingress/emissary/issues/3686
+ - title: '#3666'
+ link: https://github.com/emissary-ingress/emissary/issues/3666
+
+ - title: AmbassadorMapping supports setting the DNS type
+ body: You can now set dns_type in the AmbassadorMapping to configure how Envoy will use the DNS for the service.
+ type: feature
+ docs: topics/using/mappings/#using-dns_type
+
+ - title: Building Emissary no longer requires setting DOCKER_BUILDKIT
+ body: It is no longer necessary to set DOCKER_BUILDKIT=0 when building Emissary. A future change will fully support BuildKit.
+ type: bugfix
+ docs: https://github.com/emissary-ingress/emissary/issues/3707
+ github:
+ - title: '#3707'
+ link: https://github.com/emissary-ingress/emissary/issues/3707
+
+ - version: 2.0.2-ea
+ date: '2021-08-24'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.2 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ isHeadline: true
+ docs: about/changes-2.x
+
+ - title: Envoy security updates
+ type: bugfix
+ body: 'Upgraded Envoy to 1.17.4 to address security vulnerabilities CVE-2021-32777, CVE-2021-32778, CVE-2021-32779, and CVE-2021-32781.'
+ docs: https://groups.google.com/g/envoy-announce/c/5xBpsEZZDfE?pli=1
+
+ - title: Expose Envoy's allow_chunked_length HTTPProtocolOption
+ type: feature
+ body: 'You can now set allow_chunked_length in the Ambassador Module to configure the same value in Envoy.'
+ docs: topics/running/ambassador/#content-length-headers
+
+ - title: Envoy-configuration snapshots saved
+ type: change
+ body: Envoy-configuration snapshots get saved (as ambex-#.json) in /ambassador/snapshots. The number of snapshots is controlled by the AMBASSADOR_AMBEX_SNAPSHOT_COUNT environment variable; set it to 0 to disable. The default is 30.
+ docs: topics/running/running/
+
+ - version: 2.0.1-ea
+ date: '2021-08-12'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.1 as a developer preview. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ isHeadline: true
+ docs: about/changes-2.x
+
+ - title: Improved Ambassador Cloud visibility
+ type: feature
+ body: Ambassador Agent reports sidecar process information and AmbassadorMapping OpenAPI documentation to Ambassador Cloud to provide more visibility into services and clusters.
+ docs: /docs/cloud/latest/service-catalog/quick-start/
+
+ - title: Configurable per-AmbassadorListener statistics prefix
+ body: The optional stats_prefix element of the AmbassadorListener CRD now determines the prefix of HTTP statistics emitted for a specific AmbassadorListener.
+ type: feature
+ docs: topics/running/listener
+
+ - title: Configurable statistics names
+ body: The optional stats_name element of AmbassadorMapping, AmbassadorTCPMapping, AuthService, LogService, RateLimitService, and TracingService now sets the name under which cluster statistics will be logged. The default is the service, with non-alphanumeric characters replaced by underscores.
+ type: feature
+ docs: topics/running/statistics
+
+ - title: Updated klog to reduce log noise
+ type: bugfix
+ body: We have updated to k8s.io/klog/v2 to track upstream and to quiet unnecessary log output.
+ docs: https://github.com/emissary-ingress/emissary/issues/3603
+
+ - title: Subsecond time resolution in logs
+ type: change
+ body: Logs now include subsecond time resolutions, rather than just seconds.
+ docs: https://github.com/emissary-ingress/emissary/pull/3650
+
+ - title: Configurable Envoy-configuration rate limiting
+ type: change
+ body: Set AMBASSADOR_AMBEX_NO_RATELIMIT to true to completely disable ratelimiting Envoy reconfiguration under memory pressure. This can help performance with the endpoint or Consul resolvers, but could make OOMkills more likely with large configurations. The default is false, meaning that the rate limiter is active.
+ docs: topics/concepts/rate-limiting-at-the-edge/
+
+ - version: 2.0.0-ea
+ date: '2021-06-24'
+ notes:
+ - title: Developer Preview!
+ body: We're pleased to introduce $productName$ 2.0.0 as a developer preview.
The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on Slack and let us know what you think.
+ type: change
+ docs: about/changes-2.x
+ isHeadline: true
+
+ - title: Configuration API v3alpha1
+ body: >-
+ $productName$ 2.0.0 introduces API version x.getambassador.io/v3alpha1 for
+ configuration changes that are not backwards compatible with the 1.X family. API versions
+ getambassador.io/v0, getambassador.io/v1, and
+ getambassador.io/v2 are deprecated. Further details are available in the Major Changes
+ in 2.X document.
+ type: feature
+ docs: about/changes-2.x/#1-configuration-api-version-getambassadoriov3alpha1
+ image: ./edge-stack-2.0.0-v3alpha1.png
+
+ - title: The AmbassadorListener Resource
+ body: The new AmbassadorListener CRD defines where and how to listen for requests from the network, and which AmbassadorHost definitions should be used to process those requests. Note that the AmbassadorListener CRD is mandatory and consolidates all port configuration; see the AmbassadorListener documentation for more details.
+ type: feature
+ docs: topics/running/listener
+ image: ./edge-stack-2.0.0-listener.png
+
+ - title: AmbassadorMapping hostname DNS glob support
+ body: >-
+ Where AmbassadorMapping's host field is either an exact match or (with host_regex set) a regex,
+ the new hostname element is always a DNS glob. Use hostname instead of host for best results.
+ docs: about/changes-2.x/#ambassadorhost-and-ambassadormapping-association
+ type: feature
+
+ - title: Memory usage improvements for installations with many AmbassadorHosts
+ body: The behavior of the Ambassador module prune_unreachable_routes field is now automatic, which should reduce Envoy memory requirements for installations with many AmbassadorHosts.
+ docs: topics/running/ambassador/#prune-unreachable-routes
+ image: ./edge-stack-2.0.0-prune_routes.png
+ type: feature
+
+ - title: Independent Host actions supported
+ body: Each AmbassadorHost can specify its requestPolicy.insecure.action independently of any other AmbassadorHost, allowing for HTTP routing as flexible as HTTPS routing.
+ docs: topics/running/host-crd/#secure-and-insecure-requests
+ github:
+ - title: '#2888'
+ link: https://github.com/datawire/ambassador/issues/2888
+ image: ./edge-stack-2.0.0-insecure_action_hosts.png
+ type: bugfix
+
+ - title: Correctly set Ingress resource status in all cases
+ body: $productName$ 2.0.0 fixes a regression in detecting the Ambassador Kubernetes service that could cause the wrong IP or hostname to be used in Ingress statuses -- thanks, Noah Fontes!
+ docs: topics/running/ingress-controller
+ type: bugfix
+ image: ./edge-stack-2.0.0-ingressstatus.png
+
+ - title: Stricter mTLS enforcement
+ body: $productName$ 2.0.0 fixes a bug where mTLS could use the wrong configuration when SNI and the :authority header didn't match.
+ type: bugfix
+
+ - title: Port configuration outside AmbassadorListener has been moved to AmbassadorListener
+ body: The TLSContext redirect_cleartext_from and AmbassadorHost requestPolicy.insecure.additionalPort elements are no longer supported. Use an AmbassadorListener for this functionality instead.
+ type: change
+ docs: about/changes-2.x/#tlscontext-redirect_cleartext_from-and-host-insecureadditionalport
+
+ - title: PROXY protocol configuration has been moved to AmbassadorListener
+ body: The use_proxy_protocol element of the Ambassador Module is no longer supported, as it is now part of the AmbassadorListener resource (and can be set per-AmbassadorListener rather than globally).
+ type: change
+ docs: about/changes-2.x/#proxy-protocol-configuration
+
+ - title: Stricter rules for AmbassadorHost/AmbassadorMapping association
+ body: An AmbassadorMapping will only be matched with an AmbassadorHost if the AmbassadorMapping's host or the AmbassadorHost's selector (or both) are explicitly set, and match. This change can significantly improve $productName$'s memory footprint when many AmbassadorHosts are involved. Further details are available in the Major Changes in 2.X document.
+ docs: about/changes-2.x/#host-and-mapping-association
+ type: change
+
+ - title: AmbassadorHost or Ingress now required for TLS termination
+ body: An AmbassadorHost or Ingress resource is now required when terminating TLS -- simply creating a TLSContext is not sufficient. Further details are available in the AmbassadorHost CRD documentation.
+ docs: about/changes-2.x/#host-tlscontext-and-tls-termination
+ type: change
+ image: ./edge-stack-2.0.0-host_crd.png
+
+ - title: Envoy V3 APIs
+ body: By default, $productName$ will configure Envoy using the V3 Envoy API. This change is mostly transparent to users, but note that Envoy V3 does not support unsafe regular expressions or, e.g., Zipkin's V1 collector protocol. Further details are available in the Major Changes in 2.X document.
+ type: change
+ docs: about/changes-2.x/#envoy-v3-api-by-default
+
+ - title: Module-based TLS no longer supported
+ body: The tls module and the tls field in the Ambassador module are no longer supported. Please use TLSContext resources instead.
+ docs: about/changes-2.x/#tls-the-ambassador-module-and-the-tls-module
+ image: ./edge-stack-2.0.0-tlscontext.png
+ type: change
+
+ - title: Higher performance while generating Envoy configuration now enabled by default
+ body: The environment variable AMBASSADOR_FAST_RECONFIGURE is now set by default, enabling the higher-performance implementation of the code that $productName$ uses to generate and validate Envoy configurations.
+ docs: topics/running/scaling/#ambassador_fast_reconfigure-and-ambassador_legacy_mode-flags
+ type: change
+
+ - title: Service Preview no longer supported
+ body: >-
+ Service Preview and the AGENT_SERVICE environment variable are no longer supported.
+ The Telepresence product replaces this functionality.
+ docs: https://www.getambassador.io/docs/telepresence/
+ type: change
+
+ - title: edgectl no longer supported
+ body: The edgectl CLI tool has been deprecated; please use the emissary-ingress helm chart instead.
+ docs: topics/install/helm/
+ type: change
+
+ - version: 1.14.2
+ date: '2021-09-29'
+ notes:
+ - title: Mappings support controlling DNS refresh with DNS TTL
+ type: feature
+ body: >-
+ You can now set respect_dns_ttl in Ambassador Mappings. When true, it
+ sets the upstream's DNS refresh rate to the resource record's TTL.
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Mappings support configuring strict or logical DNS
+ type: feature
+ body: >-
+ You can now set dns_type in Ambassador Mappings to use Envoy's
+ logical_dns resolution instead of the default strict_dns.
+
+ docs: topics/using/mappings/#dns-configuration-for-mappings
+
+ - title: Support configuring upstream buffer size
+ type: feature
+ body: >-
+ You can now set buffer_limit_bytes in the ambassador
+ Module to change the size of the upstream read and write buffers.
+ The default is 1MiB.
+ docs: topics/running/ambassador/#modify-default-buffer-size
+
+ - version: 1.14.1
+ date: '2021-08-24'
+ notes:
+ - title: Envoy security updates
+ type: change
+ body: >-
+ Upgraded Envoy to 1.17.4 to address security vulnerabilities CVE-2021-32777,
+ CVE-2021-32778, CVE-2021-32779, and CVE-2021-32781.
+ docs: https://groups.google.com/g/envoy-announce/c/5xBpsEZZDfE
+
+ - version: 1.14.0
+ date: '2021-08-19'
+ notes:
+ - title: Envoy upgraded to 1.17.3!
+ type: change
+ body: >-
+ Update from Envoy 1.15 to 1.17.3.
+ docs: https://www.envoyproxy.io/docs/envoy/latest/version_history/version_history
+
+ - title: Expose Envoy's allow_chunked_length HTTPProtocolOption
+ type: feature
+ body: >-
+ You can now set allow_chunked_length in the Ambassador Module to configure
+ the same value in Envoy.
+ docs: topics/running/ambassador/#content-length-headers
+
+ - title: Default Envoy API version is now V3
+ type: change
+ body: >-
+ AMBASSADOR_ENVOY_API_VERSION now defaults to V3.
+ docs: topics/running/running/#ambassador_envoy_api_version
+
+ - title: Subsecond time resolution in logs
+ type: change
+ body: Logs now include subsecond time resolutions, rather than just seconds.
+ docs: https://github.com/emissary-ingress/emissary/pull/3650
+
+ - version: 1.13.10
+ date: '2021-07-28'
+ notes:
+ - title: Fix for CORS origins configuration on the Mapping resource
+ type: bugfix
+ body: >-
+ Fixed a regression when specifying a comma-separated string for cors.origins
+ on the Mapping resource.
+ ([#3609](https://github.com/emissary-ingress/emissary/issues/3609))
+ docs: topics/using/cors
+ image: ../images/emissary-1.13.10-cors-origin.png
+
+ - title: New Envoy-configuration snapshots for debugging
+ body: 'Envoy-configuration snapshots get saved (as ambex-#.json) in /ambassador/snapshots. The number of snapshots is controlled by the AMBASSADOR_AMBEX_SNAPSHOT_COUNT environment variable; set it to 0 to disable. The default is 30.'
+ type: change
+ docs: topics/running/environment/
+
+ - title: Optionally remove ratelimiting for Envoy reconfiguration
+ body: >-
+ Set AMBASSADOR_AMBEX_NO_RATELIMIT to true to completely disable
+ ratelimiting Envoy reconfiguration under memory pressure. This can help performance with
+ the endpoint or Consul resolvers, but could make OOMkills more likely with large
+ configurations. The default is false, meaning that the rate limiter is
+ active.
+ type: change
+ docs: topics/running/environment/
+
+ edgeStackNotes:
+ - title: Mappings support configuring the DevPortal fetch timeout
+ type: bugfix
+ body: >-
+ The Mapping resource can now specify docs.timeout_ms to set the
+ timeout when the Dev Portal is fetching API specifications.
+ docs: topics/using/dev-portal
+ image: ../images/edge-stack-1.13.10-docs-timeout.png
+
+ - title: Dev Portal will strip HTML tags when displaying results
+ type: bugfix
+ body: >-
+ The Dev Portal will now strip HTML tags when displaying search results, showing just the
+ actual content of the search result.
+ docs: topics/using/dev-portal
+
+ - title: Consul certificate rotation logs more information
+ type: change
+ body: >-
+ Consul certificate-rotation logging now includes the fingerprints and validity timestamps
+ of certificates being rotated.
+ docs: howtos/consul/
+ image: ../images/edge-stack-1.13.10-consul-cert-log.png
+
+ - version: 1.13.9
+ date: '2021-06-30'
+ notes:
+ - title: Fix for TCPMappings
+ body: >-
+ Configuring multiple TCPMappings with the same ports (but different hosts) no longer
+ generates invalid Envoy configuration.
+ type: bugfix
+ docs: topics/using/tcpmappings/
+
+ - version: 1.13.8
+ date: '2021-06-08'
+ notes:
+ - title: Fix Ambassador Cloud Service Details
+ body: >-
+ Ambassador Agent now accurately reports up-to-date Endpoint information to Ambassador
+ Cloud.
+ type: bugfix
+ docs: tutorials/getting-started/#3-connect-your-cluster-to-ambassador-cloud
+ image: ../images/edge-stack-1.13.8-cloud-bugfix.png
+
+ - title: Improved Argo Rollouts Experience with Ambassador Cloud
+ body: >-
+ Ambassador Agent reports ConfigMaps and Deployments to Ambassador Cloud to provide a
+ better Argo Rollouts experience. See [Argo+Ambassador
+ documentation](https://www.getambassador.io/docs/argo) for more info.
+ type: feature
+ docs: https://www.getambassador.io/docs/argo
+
+ - version: 1.13.7
+ date: '2021-06-03'
+ notes:
+ - title: JSON logging support
+ body: >-
+ Add AMBASSADOR_JSON_LOGGING to enable JSON for most of the Ambassador control plane. Some
+ (but few) logs from gunicorn and the Kubernetes client-go package still log text.
+ image: ../images/edge-stack-1.13.7-json-logging.png
+ docs: topics/running/running/#log-format
+ type: feature
+
+ - title: Consul resolver bugfix with TCPMappings
+ body: >-
+ Fixed a bug where the Consul resolver would not actually use Consul endpoints with
+ TCPMappings.
+ image: ../images/edge-stack-1.13.7-tcpmapping-consul.png
+ docs: topics/running/resolvers/#the-consul-resolver
+ type: bugfix
+
+ - title: Memory usage calculation improvements
+ body: >-
+ Ambassador now calculates its own memory usage in a way that is more similar to how the
+ kernel OOMKiller tracks memory.
+ image: ../images/edge-stack-1.13.7-memory.png
+ docs: topics/running/scaling/#inspecting-ambassador-performance
+ type: change
+
+ - version: 1.13.6
+ date: '2021-05-24'
+ notes:
+ - title: Quieter logs in legacy mode
+ type: bugfix
+ body: >-
+ Fixed a regression where Ambassador snapshot data was logged at the INFO level
+ when using AMBASSADOR_LEGACY_MODE=true.
+
+ - version: 1.13.5
+ date: '2021-05-13'
+ notes:
+ - title: Correctly support proper_case and preserve_external_request_id
+ type: bugfix
+ body: >-
+ Fix a regression from 1.8.0 that prevented ambassador Module
+ config keys proper_case and preserve_external_request_id
+ from working correctly.
+ docs: topics/running/ambassador/#header-case
+
+ - title: Correctly support Ingress statuses in all cases
+ type: bugfix
+ body: >-
+ Fixed a regression in detecting the Ambassador Kubernetes service that could cause the
+ wrong IP or hostname to be used in Ingress statuses (thanks, [Noah
+ Fontes](https://github.com/impl)!)
+ docs: topics/running/ingress-controller
+
+ - version: 1.13.4
+ date: '2021-05-11'
+ notes:
+ - title: Envoy 1.15.5
+ body: >-
+ Incorporate the Envoy 1.15.5 security update by adding the
+ reject_requests_with_escaped_slashes option to the Ambassador module
+ (sketched below).
+ image: ../images/edge-stack-1.13.4.png
+ docs: topics/running/ambassador/#rejecting-client-requests-with-escaped-slashes
+ type: security
+# Don't go any further back than 1.13.4.
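+# A hedged, minimal sketch of the reject_requests_with_escaped_slashes option
+# from the 1.13.4 note above. The Module named "ambassador" is the conventional
+# global-config resource; the namespace is an illustrative assumption:
+#
+#   apiVersion: getambassador.io/v2
+#   kind: Module
+#   metadata:
+#     name: ambassador
+#     namespace: ambassador
+#   spec:
+#     config:
+#       reject_requests_with_escaped_slashes: true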
diff --git a/docs/emissary/latest/topics/concepts/architecture.md b/docs/emissary/latest/topics/concepts/architecture.md
new file mode 100644
index 000000000..fe9e0bd31
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/architecture.md
@@ -0,0 +1,27 @@
+# The $productName$ architecture
+
+## $productName$ is a control plane
+
+$productName$ is a specialized [control plane for Envoy Proxy](https://blog.getambassador.io/the-importance-of-control-planes-with-service-meshes-and-front-proxies-665f90c80b3d). In this architecture, $productName$ translates configuration (in the form of Kubernetes Custom Resources) to Envoy configuration. All actual traffic is directly handled by the high-performance [Envoy Proxy](https://www.envoyproxy.io).
+
+![Architecture](../../images/ambassador-arch.png)
+
+## Details
+
+1. The service owner defines configuration in Kubernetes manifests.
+2. When the manifest is applied to the cluster, the Kubernetes API notifies $productName$ of the change.
+3. $productName$ parses the change and transforms the configuration into a semantic intermediate representation. Envoy configuration is generated from this IR.
+4. The new configuration is passed to Envoy via the gRPC-based Aggregated Discovery Service (ADS) API.
+5. Traffic flows through the reconfigured Envoy, without dropping any connections.
+
+## Scaling and availability
+
+$productName$ relies on Kubernetes for scaling, high availability, and persistence. All $productName$ configuration is stored directly in Kubernetes; there is no database. $productName$ is packaged as a single container that contains both the control plane and an Envoy Proxy instance. By default, $productName$ is deployed as a Kubernetes `deployment` and can be scaled and managed like any other Kubernetes deployment.
+
+### Stateless architecture
+
+By design, $productName$ has an entirely stateless architecture. Each individual $productName$ instance operates independently of other instances. These $productName$ instances rely on Kubernetes to coordinate the configuration between the different $productName$ instances. This enables $productName$ to sidestep the need to engineer a safe, highly available centralized control plane (and if you don't think that this is hard, check out [Jepsen](https://jepsen.io)). By contrast, other control plane architectures rely on a single centralized control plane to manage multiple instances of the data plane. This means that these control plane architectures must engineer resilience and availability into their central control plane.
+
+## Envoy Proxy
+
+$productName$ closely tracks Envoy Proxy releases. A stable branch of Envoy Proxy is maintained that enables the team to cherry-pick specific fixes into $productName$.
diff --git a/docs/emissary/latest/topics/concepts/gitops-continuous-delivery.md b/docs/emissary/latest/topics/concepts/gitops-continuous-delivery.md
new file mode 100644
index 000000000..336a1c66b
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/gitops-continuous-delivery.md
@@ -0,0 +1,66 @@
+# The Ambassador operating model: GitOps and continuous delivery
+
+Containerized applications deployed in Kubernetes generally follow the microservices design pattern, where an application is composed of dozens or even hundreds of services that communicate with each other. Independent application development teams are responsible for the full lifecycle of a service, including coding, testing, deployment, release, and operations.
By giving these teams independence, microservices enable organizations to scale their development without sacrificing agility.
+
+## Policies, declarative configuration, and Custom Resource Definitions
+
+$productName$ configuration is built on the concept of _policies_. A policy is a statement of intent, codified in a declarative configuration file. $productName$ takes advantage of Kubernetes Custom Resource Definitions (CRDs) to provide a declarative configuration workflow that is idiomatic to Kubernetes.
+
+Both operators and application developers can write policies. Typically, operators are responsible for global policies that affect all microservices. Common examples of these types of policies include TLS configuration and metrics. Application development teams will want to own the policies that affect their specific service, as these settings will vary from service to service. Examples of these types of service-specific settings include protocols (e.g., HTTP, gRPC, TCP, WebSockets), timeouts, and cross-origin resource sharing settings.
+
+Because many different teams may need to write policies, $productName$ supports a decentralized configuration model. Individual policies are written in different files. $productName$ aggregates all policies into one master policy configuration for the edge.
+
+## Continuous delivery and GitOps
+
+Code cannot provide value to end-users until it is running in production. [Continuous Delivery](https://continuousdelivery.com/) is the ability to get changes of all types -- including new features, configuration changes, bug fixes, and experiments -- into production, and in front of customers safely and quickly in a sustainable way.
+
+[GitOps](https://www.weave.works/technologies/gitops/) is an approach to continuous delivery that relies on using a source control system as a single source of truth for all infrastructure and configuration. **In the GitOps model, configuration changes go through a specific workflow:**
+
+1. All configuration is stored in source control.
+2. A configuration change is made via pull request.
+3. The pull request is approved and merged into the production branch.
+4. Automated systems (e.g., a continuous integration pipeline) ensure the configuration of the production branch is in full sync with actual production systems.
+
+Critically, no human should ever directly apply configuration changes to a live
+cluster. Instead, any changes happen via the source control system. This entire
+workflow is also self-service; an operations team does not need to be
+directly involved in managing the change process (except in the review/approval
+process, if desirable).
+
+Contrast this with a **traditional, manual workflow:**
+
+1. App developer defines configuration.
+2. App developer opens a ticket for operations.
+3. Operations team reviews ticket.
+4. Operations team initiates infrastructure change management process.
+5. Operations team executes change using UI or REST API.
+6. Operations team notifies app developer of the change.
+7. App developer tests change, and opens a ticket to give feedback to operations if necessary.
+
+The self-service, continuous delivery model is critical for ensuring that edge operations can scale.
+
+## Continuous delivery, GitOps, and $productName$
+
+Adopting a continuous delivery workflow with $productName$ via GitOps provides several advantages:
+
+1. **Reduced deployment risk**: By immediately deploying approved configuration into production, configuration issues can be rapidly identified.
Resolving any issue is as simple as rolling back the change in source control.
+2. **Auditability**: Understanding the specific configuration of $productName$ is as simple as reviewing the configuration in the source control repository. Moreover, any changes made to the configuration will also be recorded, providing context on previous configurations.
+3. **Simpler infrastructure upgrades**: Upgrading any infrastructure component,
+   whether the component is Kubernetes, $productName$, or some other piece of
+   infrastructure, is straightforward. A replica environment can be easily
+   created and tested directly from your source control system. Once the
+   upgrade has been validated, the replica environment can be swapped into
+   production, or production can be live upgraded.
+4. **Security**: Access to production cluster(s) can be restricted to senior operators and an automated system, reducing the number of individuals who can directly modify the cluster.
+
+In a typical $productName$ GitOps workflow:
+
+* Each service has its own $productName$ policy. This policy consists of one or more $productName$ custom resource definitions, specified in YAML.
+* This policy is stored in the same repository as the service, and managed by the service team.
+* Changes to the policy follow the GitOps workflow discussed above (e.g., pull request, approval, and continuous delivery).
+* Global configuration that is managed by operations is stored in a central repository alongside other cluster configuration. This repository is also set up for continuous delivery with a GitOps workflow.
+
+## Further reading
+
+* The [AppDirect engineering team](https://blog.getambassador.io/fireside-chat-with-alex-gervais-accelerating-appdirect-developer-workflow-with-ambassador-7586597b1c34) writes $productName$ configurations within each team's Kubernetes service YAML manifests. These are stored in git and follow the same review/approval process as any other code unit, and a continuous delivery pipeline listens for changes to the repository and applies them to Kubernetes.
+* Netflix introduces [full cycle development](https://netflixtechblog.com/full-cycle-developers-at-netflix-a08c31f83249), a model for developing microservices.
diff --git a/docs/emissary/latest/topics/concepts/index.md b/docs/emissary/latest/topics/concepts/index.md
new file mode 100644
index 000000000..2d02a0277
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/index.md
@@ -0,0 +1,10 @@
+# Core concepts
+
+This section of the documentation introduces core concepts of Kubernetes and Ambassador. Kubernetes and microservices introduce a number of new, powerful paradigms for software development, and Ambassador takes full advantage of these paradigms.
+ +This section discusses: + +* [The Kubernetes Network Architecture and Ambassador](kubernetes-network-architecture) +* [The Ambassador Operating Model: Continuous Delivery, GitOps, and Declarative Configuration](gitops-continuous-delivery) +* [Progressive Delivery](progressive-delivery) +* [Microservices API Gateways](microservices-api-gateways) diff --git a/docs/emissary/latest/topics/concepts/kubernetes-network-architecture.md b/docs/emissary/latest/topics/concepts/kubernetes-network-architecture.md new file mode 100644 index 000000000..2239a24fc --- /dev/null +++ b/docs/emissary/latest/topics/concepts/kubernetes-network-architecture.md @@ -0,0 +1,52 @@ +# Kubernetes Network architecture + +## Kubernetes has its own isolated network + +Each Kubernetes cluster provides its own isolated network namespace. This approach has a number of benefits. For example, each pod can be easily accessed with its own IP address. One of the consequences of this approach, however, is that a network bridge is required in order to route traffic from outside the Kubernetes cluster to services inside the cluster. + +## Routing traffic to your Kubernetes cluster + +While there are a number of techniques for routing traffic to a Kubernetes cluster, by far the most common and popular method involves deploying an in-cluster edge proxy / ingress controller along with an external load balancer. In this architecture, the network topology looks like this: + +
+ +![Kubernetes Network Architecture](/../../images/documentation/kubernetes-network.inline.svg) + +
+
+Each of the components in this topology is discussed in further detail below.
+
+### Load balancer
+
+The load balancer is deployed outside of the Kubernetes cluster. Typically, the load balancer also has one or more static IP addresses assigned to it. A DNS entry is then created to map a domain name (e.g., example.com) to the static IP address.
+
+Cloud infrastructure providers such as Amazon Web Services, Azure, DigitalOcean, and Google make it easy to create these load balancers directly from Kubernetes. This is done by creating a Kubernetes service of `type: LoadBalancer`. When this service is created, the cloud provider will use the metadata contained in the Kubernetes service definition to provision a load balancer.
+
+If the Kubernetes cluster is deployed in a private data center, an external load balancer is still generally used. Provisioning of this load balancer usually requires the involvement of the data center operations team.
+
+In both the private data center and cloud infrastructure case, the external load balancer should be configured to point to the edge proxy.
+
+### Edge Proxy / ingress controller
+
+The edge proxy is typically a Layer 7 proxy that is deployed directly in the cluster. The core function of the edge proxy is to accept incoming traffic from the external load balancer and route the traffic to Kubernetes services. The edge proxy should be configured using Kubernetes manifests. This enables a common management workflow for both the edge proxy and Kubernetes services.
+
+The most popular approach to configuring edge proxies is with the Kubernetes ingress resource. When an edge proxy can process ingress resources, it is called an ingress controller. Not all edge proxies are ingress controllers, because not every edge proxy can process ingress resources, but all ingress controllers are edge proxies.
+
+The ingress resource is a Kubernetes standard. As such, it is a lowest common denominator resource. In practice, users find that the ingress resource is insufficient in scope to address the requirements for edge routing. Semantics such as TLS termination, redirecting to TLS, timeouts, rate limiting, and authentication are all beyond the scope of the ingress resource.
+
+$productName$ can function as an ingress controller (i.e., it reads ingress resources), although it also includes many other capabilities that are beyond the scope of the ingress specification. Most $productName$ users find that the various additional capabilities of $productName$ are essential, and end up using $productName$'s extensions to the ingress resource instead of using ingress resources themselves.
+
+### Kubernetes services and pods
+
+Each instance of your application is deployed in a Kubernetes pod. As the workload on your application increases or decreases, Kubernetes can automatically add or remove pods. A Kubernetes _service_ represents a group of pods that comprise the same version of a given application. Traffic can be routed to the pods via a Kubernetes service, or it can be routed directly to the pods.
+
+When traffic is routed to the pods via a Kubernetes service, Kubernetes uses a built-in mechanism called `kube-proxy` to load balance traffic between the pods. Due to its implementation, `kube-proxy` is a Layer 4 proxy, i.e., it load balances at the connection level. For certain types of traffic, such as HTTP/2 and gRPC, this form of load balancing is particularly problematic, as it can easily result in very uneven load distribution.
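+
+As a concrete illustration, the sketch below shows a minimal Service of this kind; the `quote` name, label, and ports are illustrative assumptions, not part of any real deployment. `kube-proxy` balances connections across whichever pods match the `selector`.
+
+```yaml
+# Hypothetical Service: kube-proxy load balances TCP connections
+# (Layer 4) across all pods labeled app: quote.
+apiVersion: v1
+kind: Service
+metadata:
+  name: quote
+spec:
+  selector:
+    app: quote
+  ports:
+  - port: 80         # port exposed inside the cluster
+    targetPort: 8080 # port the pods actually listen on
+```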
+
+Traffic can also be routed directly to pods, bypassing the Kubernetes service. Since pods are much more ephemeral than Kubernetes services, this approach requires an edge proxy that is optimized for this use case. In particular, the edge proxy needs to support real-time discovery of pods, and be able to dynamically update pod locations without downtime.
+
+$productName$ supports routing both to Kubernetes services and directly to pods.
+
+## Further reading
+
+* [Kubernetes Ingress 101](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d)
+* [Envoy Proxy Performance on Kubernetes](/resources/envoyproxy-performance-on-k8s/)
diff --git a/docs/emissary/latest/topics/concepts/microservices-api-gateways.md b/docs/emissary/latest/topics/concepts/microservices-api-gateways.md
new file mode 100644
index 000000000..ba95b8fc0
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/microservices-api-gateways.md
@@ -0,0 +1,60 @@
+# Microservices API gateways
+
+A microservices API gateway is an API gateway designed to accelerate the development workflow of independent service teams. A microservices API gateway provides all the functionality a team needs to independently publish, monitor, and update a microservice.
+
+This focus on accelerating the development workflow is distinct from the purpose of traditional API gateways, which focus on the challenges of managing APIs. Over the past decade, organizations have worked to expose internal systems through well-defined APIs. The challenge of safely exposing hundreds or thousands of APIs to end-users (both internal and external) led to the emergence of API gateways. Over time, API gateways have become centralized, mission-critical pieces of infrastructure that control access to these APIs.
+
+In this article, we'll discuss how the difference in business objective (productivity vs. management) results in a very different API gateway.
+
+## Microservices organization
+
+In a microservices organization, small teams of developers work independently from each other to rapidly deliver functionality to the customer. In order to work independently, with a productive workflow, each service team needs to be able to:
+
+1. Publish their service, so that others can use the service
+2. Monitor their service, to see how well it's working
+3. Test and update their service, so they can keep on improving the service
+
+The team needs to do all of this *without* requiring assistance from an operations or platform team -- as soon as a service team depends on another team, they're no longer working independently, and this can lead to bottlenecks.
+
+For service publication, a microservices API gateway provides a static address for consumers, and dynamically routes requests to the appropriate service address. In addition, providing authentication and TLS termination for security are typical considerations in exposing a service to other consumers.
+
+Understanding the end-user experience of a service is crucial to improving the service. For example, a software update could inadvertently impact the latency of certain requests. A microservices API gateway is well situated to collect key observability metrics on end-user traffic as it routes traffic to the end service.
+
+A microservices API gateway also supports dynamically routing user requests to different service versions for canary testing.
By routing a small fraction of end-user requests to a new version of a service, service teams can safely test the impact of new updates on a small subset of users.
+
+## Microservices API gateways vs. enterprise API gateways
+
+At first glance, the use case described above may be fulfilled with an enterprise-focused API gateway. While this may be true, the actual emphases of enterprise API gateways and microservices API gateways are somewhat different:
+
+| Use case | Traditional Enterprise API gateway | Microservices API gateway |
+|---------------|-------------------|------------------------------|
+| Primary Purpose | Expose, compose, and manage internal business APIs | Expose and observe internal business services |
+| Publishing Functionality | API management team or service team registers / updates gateway via admin API | Service team registers / updates gateway via declarative code as part of the deployment process |
+| Monitoring | Admin and operations focused, e.g. meter API calls per consumer, report errors (e.g. internal 5XX) | Developer focused, e.g. latency, traffic, errors, saturation |
+| Handling and Debugging Issues | L7 error-handling (e.g. custom error page or payload). Run gateway/API with additional logging. Troubleshoot issue in staging environment | Configure more detailed monitoring. Enable traffic shadowing and / or canarying |
+| Testing | Operate multiple environments for QA, Staging, and Production. Automated integration testing, and gated API deployment. Use client-driven API versioning for compatibility and stability (e.g. semver) | Facilitate canary routing for dynamic testing (taking care with data mutation side effects). Use developer-driven service versioning for upgrade management |
+| Local Development | Deploy gateway locally (via installation script, Vagrant or Docker), and attempt to mitigate infrastructure differences with production. Use language-specific gateway mocking and stubbing frameworks | Deploy gateway locally via service orchestration platform (e.g. Kubernetes) |
+
+## Self-service publishing
+
+A team needs to be able to publish a new service to customers without requiring an operations or API management team. This self-service ability for deployment and publication enables the team to keep feature release velocity high. While a traditional enterprise API gateway may provide a simple mechanism (e.g., a REST API) for publishing a new service, in practice its usage is often limited to a dedicated team that is responsible for the gateway. The primary reason for limiting publication to a single team is to provide an additional (human) safety mechanism: an errant API call could have potentially disastrous effects on production.
+
+Microservices API gateways utilize mechanisms that enable service teams to easily *and* safely publish new services, with the inherent understanding that the producing team is responsible for their service and will fix an issue if one occurs. A microservices gateway provides configurable monitoring for issue detection, and provides hooks for debugging, such as inspecting traffic or traffic shifting/duplication.
+
+## Monitoring and rate limiting
+
+A common business model for APIs is metering, where a consumer is charged different fees depending on API usage. Traditional enterprise API gateways excel in this use case: they provide functionality for monitoring per-client usage of an API, and the ability to limit usage when the client exceeds their quota.
+
+A microservices gateway also requires monitoring and rate limiting, but for different reasons. Monitoring user-visible metrics such as throughput, latency, and availability is important to ensure that new updates don't impact the end-user. Robust end-user metrics are critical to allowing rapid, incremental updates. Rate limiting is used to improve the overall resilience of a service. When a service is not responding as expected, an API gateway can throttle incoming requests to allow the service to recover and prevent a cascading failure.
+
+## Testing and updates
+
+A microservices application has multiple services, each of which is being independently updated. Automated pre-production testing of a moving target is necessary but not sufficient for microservices. Canary testing, where a small percentage of production traffic is routed to a new service version, is an important tool to help test an update. By limiting a new service version to a small percentage of users, the impact of a service failure is limited.
+
+In a traditional enterprise API gateway, routing is used to isolate or compose/aggregate changing API versions. Automated pre-production testing and manual post-production verification and exploration are required.
+
+## Summary
+
+Traditional enterprise API gateways are designed to solve the challenges of API management. While they may appear to solve some of the challenges of adopting microservices, the reality is that a microservices workflow creates a different set of requirements. Integrating a microservices API gateway into your development workflow empowers service teams to self-publish, monitor, and update their service, quickly and safely. This will enable your organization to ship software more rapidly, and with more reliability than ever before.
+
+For further reading on how an API gateway can accelerate continuous delivery, read [this blog post](https://blog.getambassador.io/continuous-delivery-how-can-an-api-gateway-help-or-hinder-1ff15224ec4d).
diff --git a/docs/emissary/latest/topics/concepts/progressive-delivery.md b/docs/emissary/latest/topics/concepts/progressive-delivery.md
new file mode 100644
index 000000000..f2ade27f2
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/progressive-delivery.md
@@ -0,0 +1,47 @@
+# Progressive delivery
+
+Today's cloud-native applications may consist of hundreds of services, each of which may be updated at any time. Thus, many cloud-native organizations augment regression test strategies with testing in production using progressive delivery techniques.
+
+Progressive delivery is an approach for releasing software to production users. In the progressive delivery model, software is released to ever-growing subsets of production users. This approach reduces the blast radius in the event of a failure.
+
+## Why test in production?
+
+Modern cloud applications are continuously deployed, as different teams rapidly update their respective services. Deploying and testing updates in a pre-production staging environment introduces a bottleneck to the speed of iteration. More importantly, given the velocity of service updates and changes in production, staging environments are not representative of what will actually be running in production when the deployment occurs. Testing in production addresses both of these challenges: developers evaluate their changes in the real-world environment, enabling rapid iteration.
+
+## Progressive delivery strategies
+
+There are a number of different strategies for progressive delivery.
These include:
+
+* Feature flags, where specific features are made available to specific user groups
+* Canary releases, where a (small) percentage of traffic is routed to a new version of a service before the service is in full production
+* Traffic shadowing, where real user traffic is copied, or shadowed, from production to the service under test
+
+Observability is a critical requirement for testing in production. Regardless of progressive delivery strategy, collecting key metrics around latency, traffic, errors, and saturation (the [“Four Golden Signals of Monitoring”](https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals)) provides valuable insight into the stability and performance of a new version of the service. Moreover, application developers can compare the metrics (e.g., latency) between the production version and an updated version. If the metrics are similar, then updates can proceed with much greater confidence.
+
+$productName$ supports a variety of strategies for progressive delivery. These strategies are discussed in further detail below.
+
+### Canary releases
+
+A canary release shifts a small amount of real user traffic from production to the service under test.
+
+Users whose traffic is shifted to the canary release see the direct response from the canary version of the service; they do not trigger or see the response from the current production version of the service. The canary results can also be verified (both the downstream response and associated upstream side effects), but it is key to understand that any side effects will be persisted.
+
+In addition to allowing verification that the service is not crashing or otherwise behaving badly from an operational perspective when dealing with real user traffic and behavior, canary releasing allows user validation. For example, if a business KPI performs worse for all canaried requests, then this most likely indicates that the canaried service should not be fully released in its current form.
+
+Canary tests can be automated, and are typically run after testing in a pre-production environment has been completed. The canary release is only visible to a fraction of actual users, and any bugs or negative changes can be reversed quickly by either routing traffic away from the canary or by rolling back the canary deployment.
+
+![Canary release process overview](../../images/canary-release-overview.png)
+
+Canary releases are not a panacea. In particular, many services may not receive sufficient traffic for canary releases to provide useful information in an actionable timeframe.
+
+### Traffic shadowing
+
+This approach “shadows” or mirrors a small amount of real user traffic from production to the service under test.
+
+Although the shadowed results can be verified (both the downstream response and associated upstream side effects), they are not returned to the user -- the user only sees the results from the currently released service. Typically any side effects are not persisted, or are executed as a no-op and verified (much like setting up a mock and verifying that a method/function was called with the correct parameters).
+
+This allows verification that the service is not crashing or otherwise behaving badly from an operational perspective when dealing with real user traffic and behavior (and the larger the percentage of traffic shadowed, the higher the confidence).
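+
+To make these strategies concrete, the sketch below shows how canary releases and traffic shadowing can be expressed with $productName$ `Mapping` resources, using the `weight` and `shadow` fields. The service names and prefix are illustrative assumptions, not part of any real deployment.
+
+```yaml
+---
+# Canary: route roughly 10% of /backend/ requests to quote-v2;
+# the rest continue to whatever Mapping serves /backend/ today.
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-canary
+spec:
+  hostname: "*"
+  prefix: /backend/
+  service: quote-v2
+  weight: 10
+---
+# Shadowing: copy /backend/ requests to quote-shadow; the user
+# only ever sees responses from the production service.
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-shadow
+spec:
+  hostname: "*"
+  prefix: /backend/
+  service: quote-shadow
+  shadow: true
+```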
+
+## Further reading
+
+* [Canary release pattern](https://blog.getambassador.io/cloud-native-patterns-canary-release-1cb8f82d371a)
diff --git a/docs/emissary/latest/topics/concepts/rate-limiting-at-the-edge.md b/docs/emissary/latest/topics/concepts/rate-limiting-at-the-edge.md
new file mode 100644
index 000000000..f471b6d57
--- /dev/null
+++ b/docs/emissary/latest/topics/concepts/rate-limiting-at-the-edge.md
@@ -0,0 +1,33 @@
+# Rate limiting concepts at the edge
+
+Rate limiting at the edge is a technique that is used to prevent a sudden or sustained increase in user traffic from breaking an API or underlying service. On the Internet, users can do whatever they want to your APIs, as you have no direct control over these end-users. Whether it’s intentional or not, these users can impact the availability, responsiveness, and scalability of your service.
+
+## Two approaches: Rate limiting and load shedding
+
+Rate limiting use cases range from implementing functional requirements related to a business scenario -- for example, where requests from paying customers are prioritized over requests from non-paying trial users -- to implementing cross-functional requirements, such as resilience against a malicious actor attempting to issue a denial-of-service (DoS) attack.
+
+A closely related technique is load shedding, which can be used to selectively prioritize traffic (by dropping requests) based on the state of the entire system. For example, if a backend data store has become overloaded and slow to respond, it may be appropriate to drop (or “shed”) low-priority requests or requests that are not time sensitive.
+
+## Use cases and scenarios
+
+The table below outlines several scenarios where rate limiting and load shedding can provide an effective solution to a range of functional and cross-functional requirements. The “Type of Rate Limiter” column summarizes the category of rate limiting that would be most appropriate for the scenario, and the “Specifics” column outlines what business or system properties would be involved in computing rate limiting decisions.
+
+| Scenario | Type of Rate Limiter | Specifics |
+| --- | --- | --- |
+| **Fairness.** One or more users are sending large volumes of requests, and thus impacting other users of the API | **User request rate limiting -** restricts each user to a predetermined number of requests per time unit.<br><br>**Concurrent user request limiting -** limits the number of concurrent user requests that can be in flight at any given point in time. | • User ID rate limiter<br>• User property rate limiter (IP address, organisation, device etc)<br>• Geographic rate limiter<br>• Time-based rate limiter |
+| **Prioritisation.** The business model depends on handling high priority requests over others | **User request rate limiting** | • User ID rate limiter<br>• User property rate limiter (IP address, organisation, device, free vs non-free user etc) |
+| **Resilience.** The API backend cannot scale rapidly enough to meet request demand due to a technical issue | **Backend utilisation load shedder -** rate limit based upon utilisation of aggregate backend instances.<br><br>**Node/server utilisation load shedder -** rate limit based upon utilisation of individual or isolated groups of compute nodes/servers | • User ID rate limiter<br>• User property rate limiter (IP address, organisation, device etc) |
+| **Security.** Prevent bad actors from using DoS, fuzzing, or brute force attacks to overwhelm services | **User request rate limiting**<br><br>**Node/server utilisation load shedder** | • User ID rate limiter<br>• User property rate limiter (IP address, organisation, device etc)<br>• Service identifier load shedder e.g. login service, audit service |
+| **Responsiveness.** As per the Reactive Manifesto, responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service | **Concurrent user request limiting**<br><br>**Backend utilisation load shedder**<br><br>**Node/server utilisation load shedder** | • User ID rate limiter<br>• User property rate limiter (IP address, organisation, device etc)<br>• Service identifier load shedder e.g. login service, audit service |
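+
+As a hedged sketch of how a user-property (IP address) rate limiter from the table above might be wired up: $productName$ can attach labels to requests and hand them to an external rate limit service registered via a `RateLimitService` resource. The names below are illustrative assumptions, the `ratelimit-backend` service is assumed to implement Envoy's rate limit gRPC protocol, and the limits themselves live in that service's own configuration.
+
+```yaml
+---
+# Register an external rate limit service with $productName$.
+apiVersion: getambassador.io/v3alpha1
+kind: RateLimitService
+metadata:
+  name: ratelimit
+spec:
+  service: ratelimit-backend:5000
+---
+# Label requests for /backend/ with the client IP so the
+# rate limit service can enforce a per-IP policy.
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: "*"
+  prefix: /backend/
+  service: quote
+  labels:
+    ambassador:
+    - request_label_group:
+      - remote_address:
+          key: remote_address
+```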
+
+## Avoiding contention with rate limiting configuration: Decoupling Dev and Ops
+
+One of the core features of $productName$ is the decentralization of configuration: operations and development teams can control $productName$ independently, and individual application development teams can configure their independently deployable services with minimal coordination. This same approach applies to rate limiting configuration.
+
+The $productName$ rate limiting configuration allows centralized operations teams to define and implement global rate limiting and load shedding policies to protect the system, while still allowing individual application teams to define rate limiting policies that enforce business rules, for example, around paying and non-paying customers (perhaps implementing the so-called “freemium” model). See the [Advanced Rate Limiting](../../../../2.0/howtos/advanced-rate-limiting) documentation for examples.
+
+## Benefits of applying a rate limiter to the edge
+
+Modern applications and APIs can experience floods of traffic over a short time period (e.g. from reaching the Hacker News front page), and increasingly bad actors and cyber criminals are targeting public-facing services.
+
+By implementing rate limiting and load shedding capabilities at the edge, a large number of scenarios that are bad for business can be mitigated. These capabilities also make life easier for the operations and development teams, as the need to constantly firefight ingress traffic is reduced.
diff --git a/docs/emissary/latest/topics/install/ambassador-oss-community.md b/docs/emissary/latest/topics/install/ambassador-oss-community.md
new file mode 100644
index 000000000..b53d1407e
--- /dev/null
+++ b/docs/emissary/latest/topics/install/ambassador-oss-community.md
@@ -0,0 +1,14 @@
+# Integration in community projects
+
+import Table from "../../../../../src/components/CommunityTable";
+
+**$AESproductName$ is now available and includes additional functionality beyond the current $OSSproductName$.**
+These features include automatic HTTPS, OAuth/OpenID Connect authentication support, integrated rate
+limiting, a developer portal, and [more](/edge-stack-faq/).
+
+## $OSSproductName$ integrations
+
+If you still want to use just $OSSproductName$, don't worry! $OSSproductName$
+is currently available out-of-the-box in some Kubernetes installers and local environments.
+
+<Table />
\ No newline at end of file
diff --git a/docs/emissary/latest/topics/install/bare-metal.md b/docs/emissary/latest/topics/install/bare-metal.md
new file mode 100644
index 000000000..84ac1c8d7
--- /dev/null
+++ b/docs/emissary/latest/topics/install/bare-metal.md
@@ -0,0 +1,93 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Install with bare metal
+
+In cloud environments, provisioning a readily available network load balancer with $productName$ is the best option for handling ingress into your Kubernetes cluster. When running Kubernetes on a bare metal setup, where network load balancers are not available by default, we need to consider different options for exposing $productName$.
+
+## Exposing $productName$ via NodePort
+
+The simplest way to expose an application in Kubernetes is via a `NodePort` service. In this configuration, we create the $productName$ service with `type: NodePort` instead of `LoadBalancer`. Kubernetes will then assign the service a port to be exposed externally and direct traffic to $productName$ via the defined `port`.
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: ambassador
+spec:
+  type: NodePort
+  ports:
+  - name: http
+    port: 8088
+    targetPort: 8080
+    nodePort: 30036 # Optional: Define the port you would like exposed
+    protocol: TCP
+  selector:
+    service: ambassador
+```
+
+Using a `NodePort` leaves $productName$ isolated from the host network, allowing the Kubernetes service to handle routing to $productName$ pods. You can drop in this YAML to replace the `LoadBalancer` service in the [YAML installation guide](../yaml-install) and use `http://<node-ip>:<nodeport>/` as the host for requests.
+
+## Exposing $productName$ via host network
+
+When running $productName$ on a bare metal install of Kubernetes, you have the option to configure $productName$ pods to use the network of the host they are running on. This method allows you to bind $productName$ directly to port 80 or 443 so you won't need to identify the port in requests.
+
+i.e. `http://<node-ip>:<nodeport>/` becomes `http://<node-ip>/`
+
+This can be configured by setting `hostNetwork: true` in the $productName$ deployment. `dnsPolicy: ClusterFirstWithHostNet` will also need to be set to tell $productName$ to use *KubeDNS* when attempting to resolve mappings.
+
+```diff
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: ambassador
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: ambassador
+  template:
+    metadata:
+      annotations:
+        sidecar.istio.io/inject: "false"
+      labels:
+        service: ambassador
+        app.kubernetes.io/managed-by: getambassador.io
+    spec:
++      hostNetwork: true
++      dnsPolicy: ClusterFirstWithHostNet
+      serviceAccountName: ambassador
+      containers:
+      - name: ambassador
+        image: docker.io/datawire/ambassador:$version$
+        resources:
+          limits:
+            cpu: 1
+            memory: 400Mi
+          requests:
+            cpu: 200m
+            memory: 100Mi
+        env:
+        - name: AMBASSADOR_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        livenessProbe:
+          httpGet:
+            path: /ambassador/v0/check_alive
+            port: 8877
+          initialDelaySeconds: 30
+          periodSeconds: 3
+        readinessProbe:
+          httpGet:
+            path: /ambassador/v0/check_ready
+            port: 8877
+          initialDelaySeconds: 30
+          periodSeconds: 3
+      restartPolicy: Always
+```
+
+This configuration does not require a defined $productName$ service, so you can remove that service if you have defined one.
+
+**Note:** Before configuring $productName$ with this method, consider some of the functionality that is lost by bypassing the Kubernetes service: only one $productName$ pod can bind to port 8080 or 8443 per node, and you lose any load balancing that is typically performed by Kubernetes services.
diff --git a/docs/emissary/latest/topics/install/convert-to-v3alpha1.md b/docs/emissary/latest/topics/install/convert-to-v3alpha1.md
new file mode 100644
index 000000000..2d8dfb790
--- /dev/null
+++ b/docs/emissary/latest/topics/install/convert-to-v3alpha1.md
@@ -0,0 +1,275 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Convert Configuration to `getambassador.io/v3alpha1`
+
+Once your $productName$ $version$ installation is running, it is **strongly recommended** that
+you convert your existing configuration resources from `getambassador.io/v2` to
+`getambassador.io/v3alpha1`.
+ + + While it is not necessary to convert all your resources to getambassador.io/v3alpha1 + immediately, you should ultimately update them all for full functionality with $productName$ + + +In general, the best way to convert any resource is to start with `kubectl get`: using +`kubectl get -o yaml` on any `getambassador.io/v2` resource will cause $productName$ to +translate it to a `getambassador.io/v3alpha1` resource. You can then verify that the +`getambassador.io/v3alpha1` resource looks correct and re-apply it, which will convert the +stored copy to `getambassador.io/v3alpha1`. + +As you do the conversion, here are the things to bear in mind: + +## 1. `ambassador_id` must be an array, not a simple string. + +`getambassador.io/v2` allowed `ambassador_id` to be either an array of strings, or a simple +string. In `getambassador.io/v3alpha1`, only the array form is supported: instead of +`ambassador_id: "foo"`, use `ambassador_id: [ "foo" ]`. This applies to all $productName$ +resources, and is supported by all versions of Ambassador 1.X. + +## 2. You must have a `Listener` for each port on which $productName$ should listen. + + + Learn more about Listener + + +`Listener` is **mandatory**. Defining your own `Listener`(s) allows you to carefully +tailor the set of ports you actually need to use, and exactly which `Host` resources +are matched with them (see below). + +## 3. `Listener`, `Host`, and `Mapping` must be explicit about how they associate. + +You need to have `Listener`s, `Host`s, and `Mapping`s correctly associated with each other for $productName$ 2.X configuration. + +### 3.1. `Listener` and `Host` are associated through `Listener.hostBinding` + + + Learn more about Listener
+ Learn more about Host +
+
+In a `Listener`, the `hostBinding` controls whether a given `Host` is associated with that `Listener`, as discussed in the [`Listener`](../../running/listener) documentation.
+**The recommended setting is using `hostBinding.selector`** to choose only `Host`s that have a defined
+Kubernetes label:
+
+```yaml
+hostBinding:
+  selector:
+    matchLabels:
+      my-listener: listener-8080
+```
+
+The above example shows a `Listener` configured to associate only with `Host`s that have a `my-listener: listener-8080` label.
+
+For migration purposes, it is possible to have a `Listener` associate with all of the `Host`s:
+
+```yaml
+hostBinding:
+  namespace:
+    from: ALL
+```
+
+but **this is not recommended in production**. Allowing every `Host` to associate
+with every `Listener` can result in confusing behavior with large numbers of `Host`s, and it
+can also result in larger Envoy configurations that slow reconfiguration.
+
+### 3.2. `Host` and `Mapping` are associated through `Host.mappingSelector`
+
+In $productName$ 1.X, `Mapping`s were nearly always associated with every `Host`. Since this
+tends to result in larger Envoy configurations that slow down reconfiguration, $productName$ 2.X
+inverts this behavior: **`Host` and `Mapping` will not associate without explicit selection**.
+
+To have a `Mapping` associate with a `Host`, at least one of the following must hold:
+
+- Recommended: The `Host` must define a `mappingSelector` that matches a `label` on the `Mapping`.
+- Alternately, the `Mapping` must define `hostname` that matches the `hostname` of the `Host`.
+  (Note that the `hostname` of both `Host` and `Mapping` is a DNS glob.)
+
+If the `Host` defines a `mappingSelector` and the `Mapping` also defines a `hostname`, both must match.
+
+As a migration aid:
+
+- A `Mapping` with a `hostname` of `"*"` will associate with any `Host` that
+has no `mappingSelector`, and
+- A `v3alpha1` `Mapping` will honor `host` if `hostname` is not present.
+
+  Learn more about Host
+ Learn more about Mapping +
+ + + A Mapping that specifies host_regex: true is associated with  + all Hosts. This is generally far less desirable than using hostname + with a DNS glob. + + + + Support for host and host_regex will be removed before + v3alpha1 is promoted to v3. + + +## 4. Use `Host` to terminate TLS + + + Learn more about Host
+ Learn more about TLSContext +
+
+In $productName$ 1.X, simply creating a `TLSContext` is sufficient to terminate TLS, but in
+2.X you _must_ use a `Host`. The minimal setup to terminate TLS is now something like this:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-secret
+type: kubernetes.io/tls
+data:
+  tls.crt: base64-PEM
+  tls.key: base64-PEM
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: my-host
+spec:
+  hostname: host.example.com
+  tlsSecret: my-secret
+```
+
+In the example above, TLS is terminated for `host.example.com`. A `TLSContext` is still the right way to share data
+about TLS configuration across `Host`s: set both `tlsSecret` and `tlsContext` in the `Host`.
+
+## 5. `Mapping` should use `hostname` if possible
+
+  Learn more about Mapping
+
+The `getambassador.io/v3alpha1` `Mapping` introduces the new `hostname` element, which is always
+a DNS glob. Using `hostname` instead of `host` is **strongly recommended** unless you absolutely
+require regular expression matching:
+
+- if `host` is being used for an exact match, simply rename `host` to `hostname`.
+- if `host` is being used for a regex that effects a prefix or suffix match, rename it
+  to `hostname` and rewrite the regex into a DNS glob, e.g. `host: .*\.example\.com` would become
+  `hostname: *.example.com`.
+
+Additionally, when `hostname` is used, the `Mapping` will be associated with a `Host` only
+if `hostname` matches the hostname of the `Host`. If the `Host`'s `selector` is also set,
+both the `selector` and the hostname must line up.
+
+  A Mapping that specifies host_regex: true will be associated with
+  all Hosts. This is generally far less desirable than using
+  hostname with a DNS glob.
+
+## 6. `Mapping` added headers must not be simple strings
+
+  Learn more about Mapping
+
+The `getambassador.io/v2` `Mapping` supported strings and dictionaries for `add_request_headers` and
+`add_response_headers`, for example:
+
+```yaml
+add_request_headers:
+  X-Add-String: bar
+  X-Add-Dict:
+    value: bar
+```
+
+In `getambassador.io/v2`, both `X-Add-String` and `X-Add-Dict` will be added with the value `bar`.
+
+The string form - shown with `X-Add-String` - is not supported in `getambassador.io/v3alpha1`. Use the
+dictionary form instead (which works in both `getambassador.io/v2` and `getambassador.io/v3alpha1`).
+
+## 7. `Mapping` `headers` and `query_parameters` must not be `true`
+
+  Learn more about Mapping
+
+`headers` and `query_parameters` in a `Mapping` control header matches and query-parameter matches. In
+`getambassador.io/v2`, they support both strings and dictionaries, and each has a `_regex` variant.
+For example:
+
+```yaml
+headers:
+  x-exact-match: foo
+  x-existence-match: true
+headers_regex:
+  x-regex-match: "fo.*o"
+```
+
+In this example, the `Mapping` requires the `x-exact-match` header to have the value `foo`, and the
+`x-regex-match` header to have a value that starts with `fo` and ends with `o`. The `x-existence-match`
+entry, however, requires simply that the `x-existence-match` header exists.
+
+In `getambassador.io/v3alpha1`, the `true` value for an existence match is not supported. Instead,
+use `headers_regex` for the same header with a value of `.*`. This is fully supported in 1.X.
+
+`query_parameters` and `query_parameters_regex` work exactly like `headers` and `headers_regex`.
+
+## 8. `Mapping` `labels` must be converted to new syntax
+
+  Learn more about Mapping
+
+In `getambassador.io/v2`, the `labels` element in a `Mapping` supported several different types of
+data.
In `getambassador.io/v3alpha1`, all labels must have the same type, so labels must be converted
+to the new syntax:
+
+| `getambassador.io/v2`            | `getambassador.io/v3alpha1`                                  |
+| -------------------------------- | ------------------------------------------------------------ |
+| `source_cluster`                 | `{ source_cluster: { key: source_cluster } }`                |
+| `destination_cluster`            | `{ destination_cluster: { key: destination_cluster } }`      |
+| `remote_address`                 | `{ remote_address: { key: remote_address } }`                |
+| `{ my_val }`                     | `{ generic_key: { value: my_val } }`                         |
+| `{ my_key: { header: my_hdr } }` | `{ request_headers: { key: my_key, header_name: my_hdr } }`  |
+
+You can check the [Rate Limiting Labels documentation](../../using/rate-limits#attaching-labels-to-requests)
+for more examples.
+
+## 9. `tls` cannot be `true` in `AuthService`, `Mapping`, `RateLimitService`, and `TCPMapping`
+
+  Learn more about AuthService
+ Learn more about Mapping
+ Learn more about RateLimitService
+ Learn more about TCPMapping +
+
+The `tls` element in `AuthService`, `Mapping`, `RateLimitService`, and `TCPMapping` controls TLS
+origination. In `getambassador.io/v2`, it may be a string naming a `TLSContext` to use to determine
+which client certificate is sent, or the boolean value `true` to request TLS origination with no
+client certificate being sent.
+
+In `getambassador.io/v3alpha1`, only the string form is supported. To originate TLS with no client
+certificate (the semantic of `tls: true`), omit the `tls` element and prefix the `service` with
+`https://`. Note that `TCPMapping` in `getambassador.io/v2` does not support the `https://` prefix.
+
+## 10. Some `Module` settings have moved or changed
+
+  Learn more about Listener
+
+A few settings have moved from the `Module` in 2.0. Make sure you review the following settings
+and move them to their new locations if you are using them in a `Module`:
+
+- Configuration for the `PROXY` protocol is part of the `Listener` resource in $productName$ 2.0,
+so the `use_proxy_protocol` element of the `ambassador` `Module` is no longer supported.
+
+- `xff_num_trusted_hops` has been removed from the `Module`, and its functionality has been moved
+to the `l7Depth` setting in the `Listener` resource.
+
+- It is no longer possible to configure TLS using the `tls` element of the `Module`. Its
+functionality is fully covered by the `TLSContext` resource.
diff --git a/docs/emissary/latest/topics/install/docker.md b/docs/emissary/latest/topics/install/docker.md
new file mode 100644
index 000000000..e430a55c5
--- /dev/null
+++ b/docs/emissary/latest/topics/install/docker.md
@@ -0,0 +1,73 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Run the demo with Docker
+
+In this Docker quickstart guide, we'll get $productName$ running locally
+with a demo configuration. In the next section, we'll then walk through how to
+deploy $productName$ in Kubernetes with a custom configuration.
+
+## 1. Running the demo configuration
+
+By default, $productName$ uses a demo configuration to show some of its basic features. Get it running with Docker, and expose $productName$ on port 8080:
+
+```
+docker run -it -p 8080:8080 --name=$productDeploymentName$ --rm docker.io/emissaryingress/emissary:$version$ --demo
+```
+
+## 2. $productName$'s diagnostics
+
+$productName$ provides live diagnostics viewable with a web browser. While this would normally not be exposed to the public network, the Docker demo publishes the diagnostics service at the following URL:
+
+`http://localhost:8080/ambassador/v0/diag/`
+
+You'll have to authenticate to view this page: use the username `admin`,
+password `admin` (obviously this would be a poor choice in the real world!).
+We'll talk more about authentication shortly.
+
+To access the Diagnostics page with authentication, use `curl http://localhost:8080/ambassador/v0/diag/ -u admin:admin`
+
+Some of the most important information - your $productName$ version, how recently $productName$'s configuration was updated, and how recently Envoy last reported status to $productName$ - is right at the top. The diagnostics overview can show you what it sees in your configuration map, and which Envoy objects were created based on your configuration.
+
+## 3. The Quote service
+
+Since $productName$ is a comprehensive, self-service edge stack, its primary purpose is to provide access and control to microservices for the teams that manage them.
The demo is preconfigured with a mapping that connects the `/qotm/` resource to the "Quote" service -- a demo service that supplies quotations. You can try it out by opening
+
+`http://localhost:8080/qotm/`
+
+in your browser, or from the command line as
+
+```
+curl -L 'http://localhost:8080/qotm/?json=true'
+```
+
+This request will route to the `qotm` service at `demo.getambassador.io`, and return a random quote.
+
+You can see details of the mapping by clicking the blue `http://localhost:8080/qotm/` link at the very bottom of the `Ambassador Route Table` in the diagnostics overview.
+
+## 4. Authentication
+
+On the diagnostic overview, you can also see that $productName$ is configured to do authentication -- in the middle of the overview page, you'll see the `Ambassador Services In Use` section, and you can click the `tcp://127.0.0.1:5050` link for details on the `AuthService` configuration. This demo auth service is running inside the Docker container with $productName$ and the Quote service, and $productName$ uses it to mediate access to everything behind $productName$.
+
+You saw above that access to the diagnostic overview required you to authenticate as an administrator. Getting a random quote does not require authentication, but to get a specific quote, you'll have to authenticate as a demo user. To see this in action, open
+
+`http://localhost:8080/qotm/quote/5`
+
+in your browser. From the command line, you can see that:
+
+```
+curl -Lv 'http://localhost:8080/qotm/quote/5?json=true'
+```
+
+will return a 401, but
+
+```
+curl -Lv -u username:password 'http://localhost:8080/qotm/quote/5?json=true'
+```
+
+will succeed. (Note that that's literally "username" and "password" -- the demo auth service is deliberately not very secure!)
+
+Note that it's up to the auth service to decide what needs authentication -- teaming $productName$ with an authentication service can be as flexible or strict as you need it to be.
+
+## Next steps
+
+We've just walked through some of the core features of $productName$ in a local configuration. To see $productName$ in action on Kubernetes, check out the [Installation Guide](../).
diff --git a/docs/emissary/latest/topics/install/helm.md b/docs/emissary/latest/topics/install/helm.md
new file mode 100644
index 000000000..a807d3355
--- /dev/null
+++ b/docs/emissary/latest/topics/install/helm.md
@@ -0,0 +1,104 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Install with Helm
+
+  To migrate from $productName$ 1.X to $productName$ 2.X, see the
+  [$productName$ migration matrix](../migration-matrix/). This guide
+  **will not work** for that, due to changes to the configuration
+  resources used for $productName$ 2.X.
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. $productName$ can be installed via a Helm chart with a few simple steps, depending on whether you are deploying for the first time, upgrading an existing installation, or migrating from $productName$ 1.X.
+
+## Before you begin
+
+The $productName$ Helm chart is hosted by Datawire and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your helm client with the following command:
+
+```
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs $productName$.
+
+1. Install the $productName$ CRDs.
+
+   Before installing $productName$ $version$ itself, you must configure your
+   Kubernetes cluster to support the `getambassador.io/v3alpha1` and `getambassador.io/v2`
+   configuration resources. This is required.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+2. Install the $productName$ Chart with the following command:
+
+   ```
+   helm install -n $productNamespace$ --create-namespace \
+       $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+3. Next Steps
+
+   $productName$ should now be successfully installed and running, but in order to get started deploying Services and test routing to them you need to configure a few more resources.
+
+   - [The `Listener` Resource](../../running/listener/) is required to configure which ports the $productName$ pods listen on so that they can begin responding to requests.
+   - [The `Mapping` Resource](../../using/intro-mappings/) is used to configure routing requests to services in your cluster.
+   - [The `Host` Resource](../../running/host-crd/) configures TLS termination for enabling HTTPS communication.
+   - Explore how $productName$ [configures communication with clients](../../../howtos/configure-communications)
+
+   We strongly recommend following along with our Quickstart Guide to get started by creating a Listener, deploying a simple service to test with, and setting up a Mapping to route requests from $productName$ to the demo service.
+
+   $productName$ $version$ includes a Deployment in the $productNamespace$ namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+ + +For more advanced configuration and details about helm values, +[please see the helm chart.](https://artifacthub.io/packages/helm/datawire/emissary-ingress/) + +## Upgrading an existing installation + +See the [migration matrix](../migration-matrix) for instructions about upgrading +$productName$. + + diff --git a/docs/emissary/latest/topics/install/index.less b/docs/emissary/latest/topics/install/index.less new file mode 100644 index 000000000..bc649e7ca --- /dev/null +++ b/docs/emissary/latest/topics/install/index.less @@ -0,0 +1,57 @@ +@media (max-width: 769px) { + #index-installContainer { + flex-direction: column; + } + .index-dropdown { + width: auto; + } + .index-dropBtn { + width: 100%; + } +} + +.index-dropBtn { + background-color: #8e77ff; + color: white; + padding: 10px; + font-size: 16px; + border: none; + margin-top: -20px; +} + +.index-dropdown { + position: relative; + display: inline-block; +} + +.index-dropdownContent { + display: none; + position: absolute; + background-color: #f1f1f1; + width: 100%; + box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2); + z-index: 1; +} + +.index-dropdownContent a { + color: black; + padding: 12px 16px; + text-decoration: none; + display: block; +} + +.index-dropdownContent a:hover { + background-color: #ddd; +} + +.index-dropdown:hover .index-dropdownContent { + display: block; +} + +.index-dropdown:hover .index-dropBtn { + background-color: #5f3eff; +} + +#index-installContainer { + display: flex; +} diff --git a/docs/emissary/latest/topics/install/index.md b/docs/emissary/latest/topics/install/index.md new file mode 100644 index 000000000..40fa95fd0 --- /dev/null +++ b/docs/emissary/latest/topics/install/index.md @@ -0,0 +1,47 @@ +import Alert from '@material-ui/lab/Alert'; +import './index.less' + +# Installing $productName$ + +## Install with Helm + +Helm, the package manager for Kubernetes, is the recommended way to install +$productName$. Full details are in the [Helm instructions.](helm/) + +## Install with Kubernetes YAML + +Another way to install $productName$ if you are unable to use Helm is to +directly apply Kubernetes YAML. See details in the +[manual YAML installation instructions.](yaml-install). + +## Try the demo with Docker + +The Docker install will let you try the $productName$ locally in seconds, +but is not supported for production workloads. [Try $productName$ on Docker.](docker/) + +## Upgrade or migrate to a newer version + +If you already have an existing installation of $AESproductName$ or +$OSSproductName$, you can upgrade your instance. The [migration matrix](migration-matrix/) +shows you how. + +## Container Images + +Although our installation guides will favor using the `docker.io` container registry, +we publish $AESproductName$ and $OSSproductName$ releases to multiple registries. + +Starting with version 1.0.0, you can pull the emissary image from any of the following registries: + +- `docker.io/emissaryingress/` +- `gcr.io/datawire/` + +We want to give you flexibility and independence from a hosting platform's uptime to support +your production needs for $AESproductName$ or $OSSproductName$. Read more about +[Running $productName$ in Production](../running). + +# What’s Next? + +$productName$ has a comprehensive range of [features](/features/) to +support the requirements of any edge microservice. 
To learn more about how $productName$ works, along with use cases, best practices, and more,
+check out the [Welcome page](../../tutorials/getting-started) or read the [$productName$
+Story](../../about/why-ambassador).
diff --git a/docs/emissary/latest/topics/install/migrate-to-2-alternate.md b/docs/emissary/latest/topics/install/migrate-to-2-alternate.md
new file mode 100644
index 000000000..edc42916a
--- /dev/null
+++ b/docs/emissary/latest/topics/install/migrate-to-2-alternate.md
@@ -0,0 +1,40 @@
+---
+title: Migrate to $productName$ $versionTwoX$
+description: "Instructions for how to upgrade $productName$ to $versionTwoX$. Transfer your current configuration of $AESproductName$ or $OSSproductName$ to $versionTwoX$."
+---
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$ $versionTwoX$ with a separate cluster
+
+You can upgrade from any version of $AESproductName$ or $OSSproductName$ to
+any version of either by installing the new version in a new Kubernetes cluster,
+then copying over configuration as needed. This is the way to be absolutely
+certain that each installation cannot affect the other: it is extremely safe,
+but is also significantly more effort.
+
+For example, to upgrade from some other version of $AESproductName$ or
+$OSSproductName$ to $productName$ $versionTwoX$:
+
+1. Install $productName$ $versionTwoX$ in a completely new cluster.
+
+2. **Create `Listener`s for $productName$ $versionTwoX$.**
+
+   When $productName$ $versionTwoX$ starts, it will not have any `Listener`s, and it will not
+   create any. You must create `Listener` resources by hand, or $productName$ $versionTwoX$
+   will not listen on any ports.
+
+3. Copy the entire configuration from the $productName$ 1.X cluster to the $productName$
+   $versionTwoX$ cluster. This is most simply done with `kubectl get -o yaml | kubectl apply -f -`.
+
+   This will create `getambassador.io/v2` resources in the $productName$ $versionTwoX$ cluster.
+   $productName$ $versionTwoX$ will translate them internally to `getambassador.io/v3alpha1`
+   resources.
+
+4. Each $productName$ instance has its own cluster, so you can test the new
+   instance without disrupting traffic to the existing instance.
+
+5. If you need to make changes, you can change the `getambassador.io/v2` resource, or convert the
+   resource you're changing to `getambassador.io/v3alpha1` by using `kubectl edit`.
+
+6. Once everything is working with both versions, transfer incoming traffic to the $productName$
+   $versionTwoX$ cluster.
diff --git a/docs/emissary/latest/topics/install/migrate-to-3-alternate.md b/docs/emissary/latest/topics/install/migrate-to-3-alternate.md
new file mode 100644
index 000000000..3b9df0c11
--- /dev/null
+++ b/docs/emissary/latest/topics/install/migrate-to-3-alternate.md
@@ -0,0 +1,36 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$ $version$ with a separate cluster
+
+You can upgrade from any version of $AESproductName$ or $OSSproductName$ to
+any version of either by installing the new version in a new Kubernetes cluster,
+then copying over configuration as needed. This is the way to be absolutely
+certain that each installation cannot affect the other: it is extremely safe,
+but is also significantly more effort.
+
+For example, to upgrade from some other version of $AESproductName$ or
+$OSSproductName$ to $productName$ $version$:
+
+1. Install $productName$ $version$ in a completely new cluster.
+
+2. **Create `Listener`s for $productName$ $version$.**
diff --git a/docs/emissary/latest/topics/install/migrate-to-3-alternate.md b/docs/emissary/latest/topics/install/migrate-to-3-alternate.md
new file mode 100644
index 000000000..3b9df0c11
--- /dev/null
+++ b/docs/emissary/latest/topics/install/migrate-to-3-alternate.md
@@ -0,0 +1,36 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$ $version$ with a separate cluster
+
+You can upgrade from any version of $AESproductName$ or $OSSproductName$ to
+any version of either by installing the new version in a new Kubernetes cluster,
+then copying over configuration as needed. This is the way to be absolutely
+certain that each installation cannot affect the other: it is extremely safe,
+but is also significantly more effort.
+
+For example, to upgrade from some other version of $AESproductName$ or
+$OSSproductName$ to $productName$ $version$:
+
+1. Install $productName$ $version$ in a completely new cluster.
+
+2. **Create `Listener`s for $productName$ $version$.**
+
+   When $productName$ $version$ starts, it will not have any `Listener`s, and it will not
+   create any. You must create `Listener` resources by hand, or $productName$ $version$
+   will not listen on any ports.
+
+3. Copy the entire configuration from the $productName$ 1.X cluster to the $productName$
+   $version$ cluster. This is most simply done with `kubectl get -o yaml | kubectl apply -f -`.
+
+   This will create `getambassador.io/v2` resources in the $productName$ $version$ cluster.
+   $productName$ $version$ will translate them internally to `getambassador.io/v3alpha1`
+   resources.
+
+4. Each $productName$ instance has its own cluster, so you can test the new
+   instance without disrupting traffic to the existing instance.
+
+5. If you need to make changes, you can change the `getambassador.io/v2` resource, or convert the
+   resource you're changing to `getambassador.io/v3alpha1` by using `kubectl edit`.
+
+6. Once everything is working with both versions, transfer incoming traffic to the $productName$
+   $version$ cluster.
diff --git a/docs/emissary/latest/topics/install/migration-matrix.md b/docs/emissary/latest/topics/install/migration-matrix.md
new file mode 100644
index 000000000..a95382071
--- /dev/null
+++ b/docs/emissary/latest/topics/install/migration-matrix.md
@@ -0,0 +1,46 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrading $productName$
+
+
+  Read the instructions below before making any changes to your cluster!
+
+
+There are multiple paths for upgrading $productName$, depending on the version you're currently
+running, the version you want to run, and whether you installed $productName$ using [Helm](../helm) or
+YAML.
+
+(To check whether you installed $productName$ using Helm, run `helm list --all`; if
+$productName$ is listed, you installed using Helm.)
+
+## If you are currently running $AESproductName$
+
+See the [instructions on updating $AESproductName$](/docs/edge-stack/$aesDocsVersion$/topics/install/migration-matrix/).
+
+## If you installed $OSSproductName$ using Helm
+
+| If you're running                        | You can upgrade to                                                                                                            |
+|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
+| $OSSproductName$ $version$               | [$AESproductName$ $aesVersion$](/docs/edge-stack/$aesDocsVersion$/topics/install/upgrade/helm/emissary-3.8/edge-stack-3.X/)   |
+| $OSSproductName$ 3.7.X                   | [$OSSproductName$ $version$](../upgrade/helm/emissary-3.7/emissary-3.X)                                                       |
+| $OSSproductName$ $versionTwoX$           | [$OSSproductName$ $version$](../upgrade/helm/emissary-2.5/emissary-3.X)                                                       |
+| $OSSproductName$ 2.4.X                   | [$OSSproductName$ $versionTwoX$](../upgrade/helm/emissary-2.4/emissary-2.X)                                                   |
+| $OSSproductName$ 2.0.5                   | [$OSSproductName$ $versionTwoX$](../upgrade/helm/emissary-2.0/emissary-2.X)                                                   |
+| $OSSproductName$ $versionOneX$           | [$OSSproductName$ $versionTwoX$](../upgrade/helm/emissary-1.14/emissary-2.X)                                                  |
+| $OSSproductName$ prior to $versionOneX$  | [$OSSproductName$ $versionOneX$](../../../../1.14/topics/install/upgrading)                                                   |
+
+## If you installed $OSSproductName$ manually by applying YAML
+
+| If you're running 
| You can upgrade to | +|-----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------| +| $OSSproductName$ $version$ | [$AESproductName$ $aesVersion$](/docs/edge-stack/$aesDocsVersion$/topics/install/upgrade/yaml/emissary-3.8/edge-stack-3.X/) | +| $OSSproductName$ 3.7.X | [$OSSproductName$ $version$](../upgrade/yaml/emissary-3.7/emissary-3.X) | +| $OSSproductName$ $versionTwoX$ | [$OSSproductName$ $version$](../upgrade/yaml/emissary-2.5/emissary-3.X) | +| $OSSproductName$ 2.4.X | [$OSSproductName$ $versionTwoX$](../upgrade/yaml/emissary-2.4/emissary-2.X) | +| $OSSproductName$ 2.0.5 | [$OSSproductName$ $versionTwoX$](../upgrade/yaml/emissary-2.0/emissary-2.X) | +| $OSSproductName$ $versionOneX$ | [$OSSproductName$ $versionTwoX$](../upgrade/yaml/emissary-1.14/emissary-2.X) | +| $OSSproductName$ prior to $versionOneX$ | [$OSSproductName$ $versionOneX$](../../../../1.14/topics/install/upgrading) | diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-1.14/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-1.14/emissary-2.X.md new file mode 100644 index 000000000..bd61fafc9 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-1.14/emissary-2.X.md @@ -0,0 +1,312 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 1.14.X (Helm) + + + This guide covers migrating from $productName$ 1.14.X to $productName$ $versionTwoX$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation originally made using Helm. + If you did not install with Helm, see the YAML-based + upgrade instructions. + + +We're pleased to introduce $productName$ $versionTwoX$! The 2.X family introduces a number of +changes to allow $productName$ to more gracefully handle larger installations (including +multitenant or multiorganizational installations), reduce memory footprint, and improve +performance. In keeping with [SemVer](https://semver.org), $productName$ 2.X introduces +some changes that aren't backward-compatible with 1.X. These changes are detailed in +[Major Changes in $productName$ 2.X](../../../../../../about/changes-2.x/). + +## Migration Overview + + + Read the migration instructions below before making any changes to your + cluster! + + +The recommended strategy for migration is to run $productName$ 1.14 and $productName$ +$versionTwoX$ side-by-side in the same cluster. This gives $productName$ $versionTwoX$ +and $productName$ 1.14 access to all the same configuration resources, with some +important caveats: + +1. **$productName$ 1.14 will not see any `getambassador.io/v3alpha1` resources.** + + This is intentional; it provides a way to apply configuration only to + $productName$ $versionTwoX$, while not interfering with the operation of your + $productName$ 1.14 installation. + +2. **If needed, you can use labels to further isolate configurations.** + + If you need to prevent your $productName$ $versionTwoX$ installation from + seeing a particular bit of $productName$ 1.14 configuration, you can apply + a Kubernetes label to the configuration resources that should be seen by + your $productName$ $versionTwoX$ installation, then set its + `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration + to only the labelled resources. 
+
+   For example, you could apply a `version-two: true` label to all resources
+   that should be visible to $productName$ $versionTwoX$, then set
+   `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment (a concrete
+   sketch follows this list).
+
+3. **Be careful about label selectors on Kubernetes Services!**
+
+   If you have services in $productName$ 1.14 that use selectors that will match
+   Pods from $productName$ $versionTwoX$, traffic will be erroneously split between
+   $productName$ 1.14 and $productName$ $versionTwoX$. The labels used by $productName$
+   $versionTwoX$ include:
+
+   ```yaml
+   app.kubernetes.io/name: emissary-ingress
+   app.kubernetes.io/instance: emissary-ingress
+   app.kubernetes.io/part-of: emissary-ingress
+   app.kubernetes.io/managed-by: getambassador.io
+   product: aes
+   profile: main
+   ```
+
+4. **Be careful to only have one $productName$ Agent running at a time.**
+
+   The $productName$ Agent is responsible for communications between
+   $productName$ and Ambassador Cloud. If multiple versions of the Agent are
+   running simultaneously, Ambassador Cloud could see conflicting information
+   about your cluster.
+
+   The best way to avoid multiple agents when installing with Helm is to use
+   `--set agent.enabled=false` to tell Helm not to install a new Agent with
+   $productName$ $versionTwoX$. Once testing is done, you can switch Agents safely.
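+
+As a concrete sketch of the label-based isolation described in item 2: the resource
+kind, name, and label value here are illustrative.
+
+```bash
+# Label a Mapping so the $versionTwoX$ installation (running with
+# AMBASSADOR_LABEL_SELECTOR=version-two=true) will see it. With the selector set,
+# the $versionTwoX$ installation ignores unlabelled resources, while the 1.14
+# installation continues to see everything.
+kubectl label mapping example-mapping version-two=true
+```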
+
+You can also migrate by [installing $productName$ $versionTwoX$ in a separate cluster](../../../../migrate-to-2-alternate).
+This permits absolute certainty that your $productName$ 1.14 configuration will not be
+affected by changes meant for $productName$ $versionTwoX$, and it eliminates concerns about
+ACME, but it is more effort.
+
+## Side-by-Side Migration Steps
+
+Migration is a seven-step process:
+
+1. **Make sure that older configuration resources are not present.**
+
+   $productName$ 2.X does not support `getambassador.io/v0` or `getambassador.io/v1`
+   resources, and Kubernetes will not permit removing support for CRD versions that are
+   still in use for stored resources. To verify that no resources older than
+   `getambassador.io/v2` are active, run
+
+   ```
+   kubectl get crds -o 'go-template={{range .items}}{{.metadata.name}}={{.status.storedVersions}}{{"\n"}}{{end}}' | fgrep getambassador.io
+   ```
+
+   If `v1` is present in the output, **do not begin migration.** The old resources must be
+   converted to `getambassador.io/v2` and the `storedVersion` information in the cluster
+   must be updated. If necessary, contact Ambassador Labs on [Slack](http://a8r.io/slack)
+   for more information.
+
+2. **Install new CRDs.**
+
+   Before installing $productName$ $versionTwoX$ itself, you must configure your
+   Kubernetes cluster to support its new `getambassador.io/v3alpha1` configuration
+   resources. Note that `getambassador.io/v2` resources are still supported, but **you
+   must install support for `getambassador.io/v3alpha1`** to run $productName$ $versionTwoX$,
+   even if you intend to continue using only `getambassador.io/v2` resources for some
+   time.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+   $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+3. **Install $productName$ $versionTwoX$.**
+
+   After installing the new CRDs, you need to install $productName$ $versionTwoX$ itself
+   **in the same namespace as your existing $productName$ 1.14 installation**. It's important
+   to use the same namespace so that the two installations can see the same secrets, etc.
+
+   Start by making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Typically, $productName$ 1.14 was installed in the `ambassador` namespace. If you installed
+   $productName$ 1.14 in a different namespace, change the namespace in the commands below.
+
+   - If you do not need to set `AMBASSADOR_LABEL_SELECTOR`:
+
+      ```bash
+      helm install -n ambassador \
+        --set agent.enabled=false \
+        $productHelmName$ datawire/$productHelmName$ && \
+      kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+      ```
+
+   - If you do need to set `AMBASSADOR_LABEL_SELECTOR`, use `--set`, for example:
+
+      ```bash
+      helm install -n ambassador \
+        --set agent.enabled=false \
+        --set env.AMBASSADOR_LABEL_SELECTOR="version-two=true" \
+        $productHelmName$ datawire/$productHelmName$ && \
+      kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+      ```
+
+
+   You must use the $productHelmName$ Helm chart for $productName$ 2.X.
+   Do not use the ambassador Helm chart.
+
+
+
+   $productName$ $versionTwoX$ includes a Deployment in the $productNamespace$ namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+4. **Install `Listener`s and `Host`s as needed.**
+
+   An important difference between $productName$ 1.14 and $productName$ $versionTwoX$ is the
+   new **mandatory** `Listener` CRD. Also, when running both installations side by side,
+   you will need to make sure that a `Host` is present for the new $productName$ $versionTwoX$
+   Service. For example (the names, port, and hostname here are illustrative; adjust them
+   for your environment):
+
+   ```bash
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Listener
+   metadata:
+     name: listener-8080
+   spec:
+     port: 8080
+     protocol: HTTP
+     securityModel: XFP
+     hostBinding:
+       namespace:
+         from: ALL
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Host
+   metadata:
+     name: wildcard-host
+   spec:
+     hostname: "*"
+     requestPolicy:
+       insecure:
+         action: Route
+   EOF
+   ```
+
+5. **Test!**
+
+   Your $productName$ $versionTwoX$ installation can run alongside your $productName$ 1.14
+   installation, so you can test the new installation without disrupting traffic to the
+   existing one.
+
+
+   Kubernetes will not allow you to have a getambassador.io/v3alpha1 resource
+   with the same name as a getambassador.io/v2 resource or vice versa: only
+   one version can be stored at a time.
+
+   If you find that your $productName$ $versionTwoX$ installation and your $productName$ 1.14
+   installation absolutely must have resources that are only seen by one version or the
+   other, see overview section 2, "If needed, you can use labels to further isolate configurations".
+
+
+   **If you find that you need to roll back**, just reinstall your 1.14 CRDs and delete your
+   installation of $productName$ $versionTwoX$.
+
+6. **When ready, switch over to $productName$ $versionTwoX$.**
+
+   You can run $productName$ 1.14 and $productName$ $versionTwoX$ side-by-side as long as you care
+   to. However, taking full advantage of $productName$ 2.X's capabilities **requires**
+   [updating your configuration to use `getambassador.io/v3alpha1` configuration resources](../../../../convert-to-v3alpha1),
+   since some useful features in $productName$ $versionTwoX$ are only available using
+   `getambassador.io/v3alpha1` resources.
+
+   When you're ready to have $productName$ $versionTwoX$ handle traffic on its own, switch
+   your original $productName$ 1.14 Service to point to $productName$ $versionTwoX$. Use
+   `kubectl edit service ambassador` and change the `selectors` to:
+
+   ```
+   app.kubernetes.io/instance: emissary-ingress
+   app.kubernetes.io/name: emissary-ingress
+   profile: main
+   ```
+
+   Repeat using `kubectl edit service ambassador-admin` for the `ambassador-admin`
+   Service.
+
+7. **Finally, install the $productName$ $versionTwoX$ Ambassador Agent.**
+
+   First, scale the 1.14 agent to 0:
+
+   ```
+   kubectl scale -n ambassador deployment/ambassador-agent --replicas=0
+   ```
+
+   Once that's done, install the new Agent. **Note that if you needed to set
+   `AMBASSADOR_LABEL_SELECTOR`, you must add that to this `helm upgrade` command.**
+
+   ```bash
+   helm upgrade -n ambassador \
+     --set agent.enabled=true \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/$productDeploymentName$ -w
+   ```
+
+Congratulations! At this point, $productName$ $versionTwoX$ is fully running and it's safe to remove the `ambassador` and `ambassador-agent` Deployments:
+
+```
+kubectl delete deployment/ambassador deployment/ambassador-agent
+```
+
+Once $productName$ 1.14 is no longer running, you may [convert](../../../../convert-to-v3alpha1)
+any remaining `getambassador.io/v2` resources to `getambassador.io/v3alpha1`.
+You may also want to redirect DNS to the `emissary-ingress` Service and remove the
+`ambassador` Service.
diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.0/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.0/emissary-2.X.md
new file mode 100644
index 000000000..c0a392f17
--- /dev/null
+++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.0/emissary-2.X.md
@@ -0,0 +1,75 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.0.5 (Helm)
+
+
+  This guide covers migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation originally made using Helm.
+  If you did not install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor
+versions is straightforward.
+
+Migration is a two-step process:
+
+1. 
**Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in + your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, use Helm to install $productName$ $versionTwoX$. Start by + making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Then, update your $productName$ installation in the `$productNamespace$` namespace. + If necessary for your installation (e.g. if you were running with + `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace. + + ```bash + helm upgrade -n $productNamespace$ \ + $productHelmName$ datawire/$productHelmName$ && \ + kubectl rollout status -n $productNamespace$ deployment/emissary-ingress -w + ``` + + + You must use the $productHelmName$ Helm chart for $productName$ 2.X. + Do not use the ambassador Helm chart. + diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.4/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.4/emissary-2.X.md new file mode 100644 index 000000000..3e44b5119 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.4/emissary-2.X.md @@ -0,0 +1,75 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 2.4.Z (Helm) + + + This guide covers migrating from $productName$ 2.4.Z to $productName$ $versionTwoX$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation originally made using Helm. + If you did not install with Helm, see the YAML-based + upgrade instructions. + + +Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor +versions is straightforward. + +Migration is a two-step process: + +1. 
**Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in + your cluster; Helm will not do this for you. This is mandatory during any upgrade of $productName$. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, use Helm to install $productName$ $versionTwoX$. Start by + making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Then, update your $productName$ installation in the `$productNamespace$` namespace. + If necessary for your installation (e.g. if you were running with + `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace. + + ```bash + helm upgrade -n $productNamespace$ \ + $productHelmName$ datawire/$productHelmName$ && \ + kubectl rollout status -n $productNamespace$ deployment/emissary-ingress -w + ``` + + + You must use the $productHelmName$ Helm chart for $productName$ 2.X. + Do not use the ambassador Helm chart. + diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.5/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.5/emissary-3.X.md new file mode 100644 index 000000000..d8439c5a3 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-2.5/emissary-3.X.md @@ -0,0 +1,153 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 2.5.Z (Helm) + + + This guide covers migrating from $productName$ 2.5.Z to $productName$ $version$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation originally made using Helm. + If you did not install with Helm, see the YAML-based + upgrade instructions. + + + + Make sure that you have updated any AuthServices, LogServices and RateLimitServices to use + protocol_version: "v3" or else an error will be posted and a static response will be returned in $version$. 
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor
+versions is straightforward.
+
+$productName$ 3 is functionally compatible with $productName$ 2.x, but as with any major upgrade there are some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we will outline some of these changes and things to consider when upgrading.
+
+### Resources to check before migrating to $version$.
+
+$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.22, which removed support for the Envoy V2 Transport Protocol. This means all `AuthService`, `RateLimitService`, and `LogService` resources must be updated to use the V3 Protocol. Additionally, support for some of the runtime bootstrap flags has been removed.
+
+You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes.
+
+1. $productName$ 3.2 fixed a bug with `Host.spec.selector\mappingSelector` and `Listener.spec.selector` not being properly enforced.
+   In previous versions, the resources would be associated if even a single label from the selector was present on the resource. Additionally, when associating `Hosts` with `Mappings`, if the `Mapping` configured a `hostname` that matched the `hostname` of the `Host`, they would be associated regardless of the configuration of the `selector\mappingSelector` on the `Host`.
+
+   Before upgrading, review your Ambassador resources and, if you make use of the selectors, ensure that every resource you want them to be associated with carries all the required labels.
+
+   The environment variable `DISABLE_STRICT_LABEL_SELECTORS` can be set to `"true"` on the $productName$ deployment to revert to the
+   old, incorrect behavior; this can help prevent configuration issues after upgrading if not every manifest that uses the selectors has been corrected yet.
+
+   For more information on `DISABLE_STRICT_LABEL_SELECTORS` see the [Environment Variables page](../../../../../running/environment).
+
+2. Check Transport Protocol usage on all resources before migrating.
+
+   Any `AuthService`, `RateLimitService`, or `LogService` that uses the `grpc` protocol now needs to explicitly set `protocol_version: "v3"`. If it is not set, or is set to `v2`, an error will be posted and a static response will be returned.
+
+   `protocol_version` should be updated to `v3` for all of the above resources while still running $productName$ $versionTwoX$. As of version `2.3.z`, both `protocol_version: "v2"` and `"v3"` are supported, to allow migrating from `v2` to `v3` before upgrading to $productName$ $version$, where support for `v2` is removed. (A minimal example appears after this list.)
+
+   Upgrading any application code for your own implementations of these services is very straightforward.
+
+   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`, and then the configuration for these resources can be updated to add `protocol_version: "v3"` when the updated service is deployed.
+
+   `v2` Imports:
+   ```golang
+   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
+   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+   ```
+
+   `v3` Imports:
+   ```golang
+   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
+   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+   ```
+
+3. Check removed runtime changes
+
+   ```yaml
+   # No longer necessary because this was removed from Envoy
+   # $productName$ already was converted to use the compressor API
+   # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
+   "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,
+
+   # Upgraded to v3, all support for V2 Transport Protocol removed
+   "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
+   "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,
+
+   # Developers will need to upgrade TracingService to V3 protocol which no longer supports HTTP_JSON_V1
+   "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
+
+   # V2 protocol removed so flag no longer necessary
+   "envoy.reloadable_features.enable_deprecated_v2_api": true,
+   ```
+
+4. Support for LightStep tracing driver removed
+
+
+   As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read before upgrading.
+
+
+$productName$ 3.4 is based on Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
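+
+As a minimal sketch of the `protocol_version` change from item 2 (the resource name
+and address are illustrative):
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: AuthService
+metadata:
+  name: example-auth                           # illustrative name
+spec:
+  auth_service: "example-auth.default:50051"   # illustrative address
+  proto: grpc
+  protocol_version: "v3"                       # required; v2 support is removed in 3.Y
+```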
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+
+   Before installing $productName$ $version$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Install $productName$ $version$.**
+
+   After installing the new CRDs, use Helm to install $productName$ $version$. Start by
+   making sure that your `datawire` Helm repo is set correctly:
+
+   ```bash
+   helm repo remove datawire
+   helm repo add datawire https://app.getambassador.io
+   helm repo update
+   ```
+
+   Then, update your $productName$ installation in the `$productNamespace$` namespace.
+   If necessary for your installation (e.g. if you were running with
+   `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace.
+
+   ```bash
+   helm upgrade -n $productNamespace$ \
+     $productHelmName$ datawire/$productHelmName$ && \
+   kubectl rollout status -n $productNamespace$ deployment/emissary-ingress -w
+   ```
+
+
+   You must use the $productHelmName$ Helm chart for $productName$ 3.Y.
+
diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.4/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.4/emissary-3.X.md
new file mode 100644
index 000000000..f5b4e26da
--- /dev/null
+++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.4/emissary-3.X.md
@@ -0,0 +1,87 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 3.4.Z (Helm)
+
+
+  This guide covers migrating from $productName$ 3.4.Z to $productName$ $version$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation originally made using Helm.
+  If you did not install with Helm, see the YAML-based
+  upgrade instructions.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor
+versions is straightforward.
+
+### Resources to check before migrating to $version$.
+
+
+  As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read below before upgrading.
+
+
+$productName$ 3.4 has been upgraded from Envoy 1.23 to Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
+   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.
+ + Before installing $productName$ $version$ itself, you need to update the CRDs in + your cluster. This is mandatory during any upgrade of $productName$. + + ```bash + kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $version$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $version$.** + + After installing the new CRDs, use Helm to install $productName$ $version$. Start by + making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Then, update your $productName$ installation in the `$productNamespace$` namespace. + If necessary for your installation (e.g. if you were running with + `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace. + + ```bash + helm upgrade -n $productNamespace$ \ + $productHelmName$ datawire/$productHelmName$ && \ + kubectl rollout status -n $productNamespace$ deployment/emissary-ingress -w + ``` + + + You must use the $productHelmName$ Helm chart for $productName$ 3.Y. + diff --git a/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.7/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.7/emissary-3.X.md new file mode 100644 index 000000000..ffa7f19d1 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/helm/emissary-3.7/emissary-3.X.md @@ -0,0 +1,81 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 3.7.Z (Helm) + + + This guide covers migrating from $productName$ 3.7.Z to $productName$ $version$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation originally made using Helm. + If you did not install with Helm, see the YAML-based + upgrade instructions. + + +Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor +versions is straightforward. + +### Resources to check before migrating to $version$. + +## Migration Steps + +Migration is a two-step process: + +1. 
**Install new CRDs.** + + After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions + in the previous version of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required. + + Before installing $productName$ $version$ itself, you need to update the CRDs in + your cluster. This is mandatory during any upgrade of $productName$. + + ```bash + kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $version$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +2. **Install $productName$ $version$.** + + After installing the new CRDs, use Helm to install $productName$ $version$. Start by + making sure that your `datawire` Helm repo is set correctly: + + ```bash + helm repo remove datawire + helm repo add datawire https://app.getambassador.io + helm repo update + ``` + + Then, update your $productName$ installation in the `$productNamespace$` namespace. + If necessary for your installation (e.g. if you were running with + `AMBASSADOR_SINGLE_NAMESPACE` set), you can choose a different namespace. + + ```bash + helm upgrade -n $productNamespace$ \ + $productHelmName$ datawire/$productHelmName$ && \ + kubectl rollout status -n $productNamespace$ deployment/emissary-ingress -w + ``` + + + You must use the $productHelmName$ Helm chart for $productName$ 3.Y. + diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-1.14/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-1.14/emissary-2.X.md new file mode 100644 index 000000000..eb1dcf6c8 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-1.14/emissary-2.X.md @@ -0,0 +1,282 @@ +import Alert from '@material-ui/lab/Alert'; + +# Upgrade $productName$ 1.14.X (YAML) + + + This guide covers migrating from $productName$ 1.14.X to $productName$ $versionTwoX$. If + this is not your exact situation, see the migration + matrix. + + + + This guide is written for upgrading an installation made without using Helm. + If you originally installed with Helm, see the Helm-based + upgrade instructions. 
+ + +We're pleased to introduce $productName$ $versionTwoX$! The 2.X family introduces a number of +changes to allow $productName$ to more gracefully handle larger installations (including +multitenant or multiorganizational installations), reduce memory footprint, and improve +performance. In keeping with [SemVer](https://semver.org), $productName$ 2.X introduces +some changes that aren't backward-compatible with 1.X. These changes are detailed in +[Major Changes in $productName$ 2.X](../../../../../../about/changes-2.x/). + +## Migration Overview + + + Read the migration instructions below before making any changes to your + cluster! + + +The recommended strategy for migration is to run $productName$ 1.14 and $productName$ +$versionTwoX$ side-by-side in the same cluster. This gives $productName$ $versionTwoX$ +and $productName$ 1.14 access to all the same configuration resources, with some +important caveats: + +1. **$productName$ 1.14 will not see any `getambassador.io/v3alpha1` resources.** + + This is intentional; it provides a way to apply configuration only to + $productName$ $versionTwoX$, while not interfering with the operation of your + $productName$ 1.14 installation. + +2. **If needed, you can use labels to further isolate configurations.** + + If you need to prevent your $productName$ $versionTwoX$ installation from + seeing a particular bit of $productName$ 1.14 configuration, you can apply + a Kubernetes label to the configuration resources that should be seen by + your $productName$ $versionTwoX$ installation, then set its + `AMBASSADOR_LABEL_SELECTOR` environment variable to restrict its configuration + to only the labelled resources. + + For example, you could apply a `version-two: true` label to all resources + that should be visible to $productName$ $versionTwoX$, then set + `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment. + +3. **Be careful about label selectors on Kubernetes Services!** + + If you have services in $productName$ 1.14 that use selectors that will match + Pods from $productName$ $versionTwoX$, traffic will be erroneously split between + $productName$ 1.14 and $productName$ $versionTwoX$. The labels used by $productName$ + $versionTwoX$ include: + + ```yaml + app.kubernetes.io/name: emissary-ingress + app.kubernetes.io/instance: emissary-ingress + app.kubernetes.io/part-of: emissary-ingress + app.kubernetes.io/managed-by: getambassador.io + product: aes + profile: main + ``` + +4. **Be careful to only have one $productName$ Agent running at a time.** + + The $productName$ Agent is responsible for communications between + $productName$ and Ambassador Cloud. If multiple versions of the Agent are + running simultaneously, Ambassador Cloud could see conflicting information + about your cluster. + + The migration YAML used below to install $productName$ $versionTwoX$ will not + install a duplicate agent. If you are building your own YAML, make sure not + to include a duplicate agent. + +You can also migrate by [installing $productName$ $versionTwoX$ in a separate cluster](../../../../migrate-to-2-alternate). +This permits absolute certainty that your $productName$ 1.14 configuration will not be +affected by changes meant for $productName$ $versionTwoX$, and it eliminates concerns about +ACME, but it is more effort. + +## Side-by-Side Migration Steps + +Migration is a seven-step process: + +1. 
**Make sure that older configuration resources are not present.** + + $productName$ 2.X does not support `getambassador.io/v0` or `getambassador.io/v1` + resources, and Kubernetes will not permit removing support for CRD versions that are + still in use for stored resources. To verify that no resources older than + `getambassador.io/v2` are active, run + + ``` + kubectl get crds -o 'go-template={{range .items}}{{.metadata.name}}={{.status.storedVersions}}{{"\n"}}{{end}}' | fgrep getambassador.io + ``` + + If `v1` is present in the output, **do not begin migration.** The old resources must be + converted to `getambassador.io/v2` and the `storedVersion` information in the cluster + must be updated. If necessary, contact Ambassador Labs on [Slack](http://a8r.io/slack) + for more information. + +2. **Install new CRDs.** + + Before installing $productName$ $versionTwoX$ itself, you must configure your + Kubernetes cluster to support its new `getambassador.io/v3alpha1` configuration + resources. Note that `getambassador.io/v2` resources are still supported, but **you + must install support for `getambassador.io/v3alpha1`** to run $productName$ $versionTwoX$, + even if you intend to continue using only `getambassador.io/v2` resources for some + time. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml + kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system + ``` + + + $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace + called emissary-apiext. This is the APIserver extension + that supports converting $productName$ CRDs between getambassador.io/v2 + and getambassador.io/v3alpha1. This Deployment needs to be running at + all times. + + + + If the emissary-apiext Deployment's Pods all stop running, + you will not be able to use getambassador.io/v3alpha1 CRDs until restarting + the emissary-apiext Deployment. + + + + There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system. + This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime. + + +3. **Install $productName$ $versionTwoX$.** + + After installing the new CRDs, you need to install $productName$ $versionTwoX$ itself + **in the same namespace as your existing $productName$ 1.14 installation**. It's important + to use the same namespace so that the two installations can see the same secrets, etc. + + We publish two manifests for different namespaces. Use only the one that + matches the namespace into which you installed $productName$ 1.14: + + - [`emissary-emissaryns.yaml`] for the `emissary` namespace; or + - [`emissary-defaultns.yaml`] for the `default` namespace. + + If you installed $productName$ 1.14 into some other namespace, you'll need to + download one of the files and edit it to match your namespace. 
+
+   [`emissary-emissaryns.yaml`]: https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-emissaryns.yaml
+   [`emissary-defaultns.yaml`]: https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-defaultns.yaml
+
+   **If you need to set `AMBASSADOR_LABEL_SELECTOR`**, you'll need to download
+   your chosen file and edit it to do so.
+
+   Assuming that you're using the `default` namespace:
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-defaultns.yaml && \
+   kubectl rollout status -n default deployment/emissary-ingress -w
+   ```
+
+
+   $productName$ $versionTwoX$ includes a Deployment in the $productNamespace$ namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+4. **Install `Listener`s and `Host`s as needed.**
+
+   An important difference between $productName$ 1.14 and $productName$ $versionTwoX$ is the
+   new **mandatory** `Listener` CRD. Also, when running both installations side by side,
+   you will need to make sure that a `Host` is present for the new $productName$ $versionTwoX$
+   Service. For example (the names, port, and hostname here are illustrative; adjust them
+   for your environment):
+
+   ```bash
+   kubectl apply -f - <<EOF
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Listener
+   metadata:
+     name: listener-8080
+   spec:
+     port: 8080
+     protocol: HTTP
+     securityModel: XFP
+     hostBinding:
+       namespace:
+         from: ALL
+   ---
+   apiVersion: getambassador.io/v3alpha1
+   kind: Host
+   metadata:
+     name: wildcard-host
+   spec:
+     hostname: "*"
+     requestPolicy:
+       insecure:
+         action: Route
+   EOF
+   ```
+
+5. **Test!**
+
+   Your $productName$ $versionTwoX$ installation can run alongside your $productName$ 1.14
+   installation, so you can test the new installation without disrupting traffic to the
+   existing one.
+
+
+   Kubernetes will not allow you to have a getambassador.io/v3alpha1 resource
+   with the same name as a getambassador.io/v2 resource or vice versa: only
+   one version can be stored at a time.
+
+   If you find that your $productName$ $versionTwoX$ installation and your $productName$ 1.14
+   installation absolutely must have resources that are only seen by one version or the
+   other, see overview section 2, "If needed, you can use labels to further isolate configurations".
+
+
+   **If you find that you need to roll back**, just reinstall your 1.14 CRDs and delete your
+   installation of $productName$ $versionTwoX$.
+
+6. **When ready, switch over to $productName$ $versionTwoX$.**
+
+   You can run $productName$ 1.14 and $productName$ $versionTwoX$ side-by-side as long as you care
+   to. However, taking full advantage of $productName$ 2.X's capabilities **requires**
+   [updating your configuration to use `getambassador.io/v3alpha1` configuration resources](../../../../convert-to-v3alpha1),
+   since some useful features in $productName$ $versionTwoX$ are only available using
+   `getambassador.io/v3alpha1` resources.
+
+   When you're ready to have $productName$ $versionTwoX$ handle traffic on its own, switch
+   your original $productName$ 1.14 Service to point to $productName$ $versionTwoX$. Use
+   `kubectl edit service ambassador` and change the `selectors` to:
+
+   ```
+   app.kubernetes.io/instance: emissary-ingress
+   app.kubernetes.io/name: emissary-ingress
+   profile: main
+   ```
+
+   Repeat using `kubectl edit service ambassador-admin` for the `ambassador-admin`
+   Service.
+
+
+Congratulations! At this point, $productName$ $versionTwoX$ is fully running and it's safe to remove the `ambassador` and `ambassador-agent` Deployments:
+
+```
+kubectl delete deployment/ambassador deployment/ambassador-agent
+```
+
+Once $productName$ 1.14 is no longer running, you may [convert](../../../../convert-to-v3alpha1)
+any remaining `getambassador.io/v2` resources to `getambassador.io/v3alpha1`.
+You may also want to redirect DNS to the `emissary-ingress` Service and remove the
+`ambassador` Service.
diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.0/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.0/emissary-2.X.md
new file mode 100644
index 000000000..b16d046fc
--- /dev/null
+++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.0/emissary-2.X.md
@@ -0,0 +1,65 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.0.5 (YAML)
+
+
+  This guide covers migrating from $productName$ 2.0.5 to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor
+versions is straightforward.
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+   $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
+   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.
+
+
+2. **Install $productName$ $versionTwoX$.**
+
+   After installing the new CRDs, upgrade to $productName$ $versionTwoX$.
+
+
+   Our emissary-emissaryns.yaml file
+   uses the `emissary` namespace, since this is the default for $productName$.
+   We also publish emissary-defaultns.yaml for the
+   `default` namespace. For any other namespace, you should download one of these files and edit the namespaces manually.
+
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-emissaryns.yaml && \
+   kubectl rollout status -n emissary deployment/emissary-ingress -w
+   ```
diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.4/emissary-2.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.4/emissary-2.X.md
new file mode 100644
index 000000000..ec8b6a70a
--- /dev/null
+++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.4/emissary-2.X.md
@@ -0,0 +1,67 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Upgrade $productName$ 2.4.Z (YAML)
+
+
+  This guide covers migrating from $productName$ 2.4.Z to $productName$ $versionTwoX$. If
+  this is not your exact situation, see the migration
+  matrix.
+
+
+
+  This guide is written for upgrading an installation made without using Helm.
+  If you originally installed with Helm, see the Helm-based
+  upgrade instructions.
+
+
+Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading between minor
+versions is straightforward.
+
+## Migration Steps
+
+Migration is a two-step process:
+
+1. **Install new CRDs.**
+
+   Before installing $productName$ $versionTwoX$ itself, you need to update the CRDs in
+   your cluster. This is mandatory during any upgrade of $productName$.
+
+   ```bash
+   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-crds.yaml
+   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
+   ```
+
+
+   $productName$ $versionTwoX$ includes a Deployment in the `emissary-system` namespace
+   called emissary-apiext. This is the APIserver extension
+   that supports converting $productName$ CRDs between getambassador.io/v2
+   and getambassador.io/v3alpha1. This Deployment needs to be running at
+   all times.
+
+
+
+   If the emissary-apiext Deployment's Pods all stop running,
+   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
+   the emissary-apiext Deployment.
+
+
+
+   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. 
2. **Install $productName$ $versionTwoX$.**

   After installing the new CRDs, upgrade to $productName$ $versionTwoX$.

   Our emissary-emissaryns.yaml file
   uses the `emissary` namespace, since this is the default for $productName$.
   We also publish emissary-defaultns.yaml for the
   `default` namespace. For any other namespace, you should download one of these files and edit the namespaces manually.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$versionTwoX$/emissary-emissaryns.yaml && \
   kubectl rollout status -n emissary deployment/emissary-ingress -w
   ```

diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.5/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.5/emissary-3.X.md new file mode 100644 index 000000000..ea3b7bc98 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-2.5/emissary-3.X.md @@ -0,0 +1,144 @@

import Alert from '@material-ui/lab/Alert';

# Upgrade $productName$ 2.5.Z (YAML)

  This guide covers migrating from $productName$ 2.5.Z to $productName$ $version$. If
  this is not your exact situation, see the migration
  matrix.

  This guide is written for upgrading an installation made without using Helm.
  If you originally installed with Helm, see the Helm-based
  upgrade instructions.

  Make sure that you have updated any AuthServices, LogServices and RateLimitServices to use
  protocol_version: "v3", or else an error will be posted and a static response will be returned in $version$.

Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading
between versions is straightforward.

$productName$ 3 is functionally compatible with $productName$ 2.x, but with any major upgrade there are some changes to consider, such as Envoy removing support for V2 Transport Protocol features. Below we outline these changes and things to consider when upgrading.

### Resources to check before migrating to $version$.

$productName$ 3.X has been upgraded from Envoy 1.17.X to Envoy 1.22, which removed support for the Envoy V2 Transport Protocol. This means all `AuthService`, `RatelimitService`, and `LogService` resources must be updated to use the V3 Protocol. Additionally, support for some of the runtime bootstrap flags has been removed.

You can refer to the [Major changes in $productName$ 3.x](../../../../../../about/changes-3.y/) guide for an overview of the changes.

1. $productName$ 3.2 fixed a bug with `Host.spec.selector`/`mappingSelector` and `Listener.spec.selector` not being properly enforced.
   In previous versions, if only a single label from the selector was present on the resource then they would be associated.
   Additionally, when associating `Hosts` with `Mappings`, if the `Mapping` configured a `hostname` that matched the `hostname` of the `Host`, then they would be associated regardless of the configuration of the `selector`/`mappingSelector` on the `Host`.

   Before upgrading, review your Ambassador resources and, if you make use of the selectors, ensure that every resource you want to be associated carries all the required labels.

   The environment variable `DISABLE_STRICT_LABEL_SELECTORS` can be set to `"true"` on the $productName$ deployment to revert to the
   old, incorrect behavior. This can help prevent configuration issues after upgrading in the event that not all manifests making use of the selectors have been corrected yet.

   For more information on `DISABLE_STRICT_LABEL_SELECTORS` see the [Environment Variables page](../../../../../running/environment#disable_strict_label_selectors).

2. Check Transport Protocol usage on all resources before migrating.

   Any `AuthService`, `RatelimitService`, or `LogService` that uses the `grpc` protocol will now need to explicitly set `protocol_version: "v3"`. If it is not set, or is set to `v2`, then an error will be posted and a static response will be returned.

   `protocol_version` should be updated to `v3` for all of the above resources while still running $productName$ $versionTwoX$. As of version `2.3.z`, both `protocol_version` `v2` and `v3` are supported, in order to allow migrating from `v2` to `v3` before upgrading to $productName$ $version$, where support for `v2` is removed.

   Upgrading any application code for your own implementations of these services is very straightforward.

   The following imports simply need to be updated to switch from Envoy's Transport Protocol `v2` to `v3`, and then the configuration for these resources can be updated to add `protocol_version: "v3"` when the updated service is deployed (see the sketch after this list).

   `v2` Imports:
   ```golang
   envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
   envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
   envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
   ```

   `v3` Imports:
   ```golang
   envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
   envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
   envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
   ```

3. Check removed runtime flags

   ```yaml
   # No longer necessary because this was removed from Envoy
   # $productName$ already was converted to use the compressor API
   # https://www.envoyproxy.io/docs/envoy/v1.22.0/configuration/http/http_filters/compressor_filter#config-http-filters-compressor
   "envoy.deprecated_features.allow_deprecated_gzip_http_filter": true,

   # Upgraded to v3, all support for V2 Transport Protocol removed
   "envoy.deprecated_features:envoy.api.v2.route.HeaderMatcher.regex_match": true,
   "envoy.deprecated_features:envoy.api.v2.route.RouteMatch.regex": true,

   # Developers will need to upgrade TracingService to V3 protocol which no longer supports HTTP_JSON_V1
   "envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,

   # V2 protocol removed so flag no longer necessary
   "envoy.reloadable_features.enable_deprecated_v2_api": true,
   ```

4. Support for LightStep tracing driver removed

   As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported.
   To ensure you do not drop any tracing data, be sure to read below before upgrading.

$productName$ 3.4 is based on Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**
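To make the `protocol_version` change in item 2 concrete, here is a hedged sketch of an updated `AuthService`; the name, namespace, and upstream address are illustrative:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: example-auth   # illustrative name
  namespace: default   # illustrative namespace
spec:
  auth_service: "example-auth:50051"  # illustrative upstream address
  proto: grpc
  protocol_version: "v3"  # required on 3.y; unset or "v2" produces an error and a static response
```

The same `protocol_version: "v3"` line applies to `RatelimitService` and `LogService` resources.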
## Migration Steps

Migration is a two-step process:

1. **Install new CRDs.**

   After reviewing the changes in 3.x and confirming that you are ready to upgrade, the process is the same as upgrading minor versions
   in previous versions of $productName$ and does not require the complex migration steps that the migration from 1.x to 2.x required.

   Before installing $productName$ $version$ itself, you need to update the CRDs in
   your cluster. This is mandatory during any upgrade of $productName$.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml
   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
   ```

   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
   called emissary-apiext. This is the APIserver extension
   that supports converting $productName$ CRDs between getambassador.io/v2
   and getambassador.io/v3alpha1. This Deployment needs to be running at
   all times.

   If the emissary-apiext Deployment's Pods all stop running,
   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
   the emissary-apiext Deployment.

   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.

2. **Install $productName$ $version$.**

   After installing the new CRDs, upgrade to $productName$ $version$.

   Our emissary-emissaryns.yaml file
   uses the `emissary` namespace, since this is the default for $productName$.
   We also publish emissary-defaultns.yaml for the
   `default` namespace. For any other namespace, you should download one of these files and edit the namespaces manually.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-emissaryns.yaml && \
   kubectl rollout status -n emissary deployment/emissary-ingress -w
   ```

diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.4/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.4/emissary-3.X.md new file mode 100644 index 000000000..723ac6a18 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.4/emissary-3.X.md @@ -0,0 +1,75 @@

import Alert from '@material-ui/lab/Alert';

# Upgrade $productName$ 3.4.Z (YAML)

  This guide covers migrating from $productName$ 3.4.Z to $productName$ $version$. If
  this is not your exact situation, see the migration
  matrix.

  This guide is written for upgrading an installation made without using Helm.
  If you originally installed with Helm, see the Helm-based
  upgrade instructions.

Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading
between versions is straightforward.

### Resources to check before migrating to $version$.

  As of $productName$ 3.4.Z, the LightStep tracing driver is no longer supported. To ensure you do not drop any tracing data, be sure to read below before upgrading.

$productName$ 3.4 has been upgraded from Envoy 1.23 to Envoy 1.24.1, which removed support for the `LightStep` tracing driver. The team at LightStep and the maintainers of Envoy-Proxy recommend that users instead leverage the OpenTelemetry Collector to send tracing information to LightStep. We have written a guide, Distributed Tracing with OpenTelemetry and Lightstep, that outlines how to set this up. **It is important that you follow this upgrade path prior to upgrading or you will drop tracing data.**

## Migration Steps

Migration is a two-step process:

1. **Install new CRDs.**

   Before installing $productName$ $version$ itself, you need to update the CRDs in
   your cluster. This is mandatory during any upgrade of $productName$.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml
   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
   ```

   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
   called emissary-apiext. This is the APIserver extension
   that supports converting $productName$ CRDs between getambassador.io/v2
   and getambassador.io/v3alpha1. This Deployment needs to be running at
   all times.

   If the emissary-apiext Deployment's Pods all stop running,
   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
   the emissary-apiext Deployment.

   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration.
Note that certificate renewal will not cause any downtime.

2. **Install $productName$ $version$.**

   After installing the new CRDs, upgrade to $productName$ $version$.

   Our emissary-emissaryns.yaml file
   uses the `emissary` namespace, since this is the default for $productName$.
   We also publish emissary-defaultns.yaml for the
   `default` namespace. For any other namespace, you should download one of these files and edit the namespaces manually.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-emissaryns.yaml && \
   kubectl rollout status -n emissary deployment/emissary-ingress -w
   ```

diff --git a/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.7/emissary-3.X.md b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.7/emissary-3.X.md new file mode 100644 index 000000000..024dd30c4 --- /dev/null +++ b/docs/emissary/latest/topics/install/upgrade/yaml/emissary-3.7/emissary-3.X.md @@ -0,0 +1,69 @@

import Alert from '@material-ui/lab/Alert';

# Upgrade $productName$ 3.7.Z (YAML)

  This guide covers migrating from $productName$ 3.7.Z to $productName$ $version$. If
  this is not your exact situation, see the migration
  matrix.

  This guide is written for upgrading an installation made without using Helm.
  If you originally installed with Helm, see the Helm-based
  upgrade instructions.

Since $productName$'s configuration is entirely stored in Kubernetes resources, upgrading
between versions is straightforward.

### Resources to check before migrating to $version$.

## Migration Steps

Migration is a two-step process:

1. **Install new CRDs.**

   Before installing $productName$ $version$ itself, you need to update the CRDs in
   your cluster. This is mandatory during any upgrade of $productName$.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml
   kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
   ```

   $productName$ $version$ includes a Deployment in the `emissary-system` namespace
   called emissary-apiext. This is the APIserver extension
   that supports converting $productName$ CRDs between getambassador.io/v2
   and getambassador.io/v3alpha1. This Deployment needs to be running at
   all times.

   If the emissary-apiext Deployment's Pods all stop running,
   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
   the emissary-apiext Deployment.

   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
   This will create a new certificate with a one year expiration. We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.

2. **Install $productName$ $version$.**

   After installing the new CRDs, upgrade to $productName$ $version$.
   Our emissary-emissaryns.yaml file
   uses the `emissary` namespace, since this is the default for $productName$.
   We also publish emissary-defaultns.yaml for the
   `default` namespace. For any other namespace, you should download one of these files and edit the namespaces manually.

   ```bash
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-emissaryns.yaml && \
   kubectl rollout status -n emissary deployment/emissary-ingress -w
   ```

diff --git a/docs/emissary/latest/topics/install/yaml-install.md b/docs/emissary/latest/topics/install/yaml-install.md new file mode 100644 index 000000000..bb628f5b3 --- /dev/null +++ b/docs/emissary/latest/topics/install/yaml-install.md @@ -0,0 +1,89 @@

---
description: In this guide, we'll walk through the process of deploying $productName$ in Kubernetes for ingress routing.
---

import Alert from '@material-ui/lab/Alert';

# Install manually

  To migrate from $productName$ 1.X to $productName$ 2.X, see the
  [$productName$ migration matrix](../migration-matrix/). This guide
  **will not work** for that, due to changes to the configuration
  resources used for $productName$ 2.X.

In this guide, we'll walk you through installing $productName$ in your Kubernetes cluster.

The manual install process does not allow for as much control over configuration
as the [Helm install method](../helm), so if you need more control over your $productName$
installation, we recommend that you use Helm.

## Before you begin

$productName$ is designed to run in Kubernetes for production. The most essential requirements are:

* Kubernetes 1.11 or later
* The `kubectl` command-line tool

## Install with YAML

$productName$ is typically deployed to Kubernetes from the command line. If you don't have Kubernetes, you should use our [Docker](../docker) image to deploy $productName$ locally.

1. In your terminal, run the following command:

   ```
   kubectl create namespace $productNamespace$ || true
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-crds.yaml && \
   kubectl apply -f https://app.getambassador.io/yaml/emissary/$version$/emissary-emissaryns.yaml && \
   kubectl -n $productNamespace$ wait --for condition=available --timeout=90s deploy $productDeploymentName$
   ```

   $productName$ $version$ includes a Deployment in the $productNamespace$ namespace
   called emissary-apiext. This is the APIserver extension
   that supports converting $productName$ CRDs between getambassador.io/v2
   and getambassador.io/v3alpha1. This Deployment needs to be running at
   all times.

   If the emissary-apiext Deployment's Pods all stop running,
   you will not be able to use getambassador.io/v3alpha1 CRDs until restarting
   the emissary-apiext Deployment.

   There is a known issue with the emissary-apiext service that impacts all $productName$ 2.x and 3.x users. Specifically, the TLS certificate used by apiext expires one year after creation and does not auto-renew. All users who are running $productName$/$AESproductName$ 2.x or 3.x with the apiext service should proactively renew their certificate as soon as practical by running kubectl delete --all secrets --namespace=emissary-system to delete the existing certificate, and then restart the emissary-apiext deployment with kubectl rollout restart deploy/emissary-apiext -n emissary-system.
   This will create a new certificate with a one year expiration.
We will issue a software patch to address this issue well before the one year expiration. Note that certificate renewal will not cause any downtime.

2. Determine the IP address or hostname of your cluster by running the following command:

   ```
   kubectl get -n $productNamespace$ service $productDeploymentName$ -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}"
   ```

   Your load balancer may take several minutes to provision your IP address. Repeat the provided command until you get an IP address.

3. Next Steps

   $productName$ should now be successfully installed and running, but to get started deploying Services and testing routing to them, you need to configure a few more resources.

   - [The `Listener` Resource](../../running/listener/) is required to configure which ports the $productName$ pods listen on so that they can begin responding to requests.
   - [The `Mapping` Resource](../../using/intro-mappings/) is used to configure routing requests to services in your cluster.
   - [The `Host` Resource](../../running/host-crd/) configures TLS termination for enabling HTTPS communication.
   - Explore how $productName$ [configures communication with clients](../../../howtos/configure-communications)

   We strongly recommend following along with our Quickstart Guide to get started by creating a Listener, deploying a simple service to test with, and setting up a Mapping to route requests from $productName$ to the demo service.

## Upgrading an existing installation

See the [migration matrix](../migration-matrix) for instructions about upgrading
$productName$.

diff --git a/docs/emissary/latest/topics/running/ambassador-deployment.md b/docs/emissary/latest/topics/running/ambassador-deployment.md new file mode 100644 index 000000000..d870f32c3 --- /dev/null +++ b/docs/emissary/latest/topics/running/ambassador-deployment.md @@ -0,0 +1,21 @@

# Deployment architecture

$productName$ can be deployed in a variety of configurations. The specific configuration depends on your data center.

## Public cloud

If you're using a public cloud provider such as Amazon, Azure, or Google, $productName$ can be deployed directly to a Kubernetes cluster running in the data center. Traffic is routed to $productName$ via a cloud-managed load balancer such as an Amazon Elastic Load Balancer or Google Cloud Load Balancer. Typically, this load balancer is transparently managed by Kubernetes in the form of the `LoadBalancer` service type. $productName$ then routes traffic to your services running in Kubernetes.

## On-Premise data center

In an on-premise data center, $productName$ is deployed on the Kubernetes cluster. Instead of exposing it via the `LoadBalancer` service type, $productName$ is exposed as a `NodePort`. Traffic is sent to a specific port on any of the nodes in the cluster, which route the traffic to $productName$, which then routes the traffic to your services running in Kubernetes. You'll also need to deploy a separate load balancer to route traffic from your core routers to $productName$. [MetalLB](https://metallb.universe.tf/) is an open-source external load balancer for Kubernetes designed for this problem. Other options are traditional TCP load balancers such as F5 or Citrix Netscaler.

## Hybrid data center

Many data centers include services that are running outside of Kubernetes on virtual machines.
For $productName$ to route to services both inside and outside of Kubernetes, it needs the real-time network location of all services. This problem is known as "[service discovery](https://www.datawire.io/guide/traffic/service-discovery-microservices/)", and $productName$ supports using [Consul](https://www.consul.io). Services in your data center register themselves with Consul, and $productName$ uses Consul-supplied data to dynamically route requests to available services.

## Hybrid on-premise data center

The diagram below details a common network architecture for a hybrid on-premise data center. Traffic flows from core routers to MetalLB, which routes to $productName$ running in Kubernetes. $productName$ routes traffic to individual services running on both Kubernetes and VMs. Consul tracks the real-time network location of the services, which $productName$ uses to route to the given services.

![Architecture](../../images/consul-ambassador.png)

diff --git a/docs/emissary/latest/topics/running/ambassador-with-aws.md b/docs/emissary/latest/topics/running/ambassador-with-aws.md new file mode 100644 index 000000000..b321543ae --- /dev/null +++ b/docs/emissary/latest/topics/running/ambassador-with-aws.md @@ -0,0 +1,364 @@

# $productName$ with AWS

$productName$ is a platform-agnostic Kubernetes API gateway. It will run in any distribution of Kubernetes, whether it is managed by a cloud provider or runs on homegrown bare-metal servers.

This document serves as a reference for the different configuration options available when running Kubernetes in AWS. See [Installing $productName$](../../install) for the various installation methods available.

## Recommended configuration

There are a lot of configuration options available to you when running $productName$ in AWS. While you should read this entire document to understand what is best for you, the following is the recommended configuration when running $productName$ in AWS:

It is recommended to terminate TLS at $productName$ so you can take advantage of all the TLS configuration options available in $productName$, including setting the allowed TLS versions, setting `alpn_protocol` options, enforcing HTTP -> HTTPS redirection, and [automatic certificate management](../host-crd) in $productName$.

When terminating TLS at $productName$, you should deploy a L4 [Network Load Balancer (NLB)](#network-load-balancer-nlb) with the proxy protocol enabled to get the best performance out of your load balancer while still preserving the client IP address.

The following `Service` should be configured to deploy an NLB with cross zone load balancing enabled (see [NLB notes](#network-load-balancer-nlb) for a caveat on the cross-zone-load-balancing annotation). You will need to configure the proxy protocol in the NLB manually in the AWS Console.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  ports:
  - name: HTTP
    port: 80
    targetPort: 8080
  - name: HTTPS
    port: 443
    targetPort: 8443
  selector:
    service: ambassador
```

After deploying the `Service` above and manually enabling the proxy protocol, you will need to deploy the following [Ambassador `Module`](../ambassador) to tell $productName$ to use the proxy protocol, and then restart $productName$ for the configuration to take effect.
```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    use_proxy_proto: true
```

$productName$ will now expect traffic from the load balancer to be wrapped with the proxy protocol so it can read the client IP address.

## AWS load balancer notes

AWS provides three types of load balancers:

### "Classic" Elastic Load Balancer (ELB)

The ELB is the first generation AWS Elastic Load Balancer. It is the default type of load balancer provisioned by a `type: LoadBalancer` `Service` and routes directly to individual EC2 instances. It can be configured to run at layer 4 or layer 7 of the OSI model. See [What is a Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html) for more details.

* Provisioned by default for a `type: LoadBalancer` `Service`
* Layer 4: TCP, TCP/SSL
   * Protocol support
      * HTTP(S)
      * Websockets
      * HTTP/2
   * Connection based load balancing
   * Cannot modify the request
* Layer 7: HTTP, HTTPS
   * Protocol support
      * HTTP(S)
   * Request based load balancing
   * Can modify the request (append to `X-Forwarded-*` headers)
* Can perform TLS termination

**Notes:**
- While it has been superseded by the `Network Load Balancer` and `Application Load Balancer`, the ELB offers the simplest way of provisioning an L4 or L7 load balancer in Kubernetes.
- All of the [load balancer annotations](#load-balancer-annotations) are respected by the ELB.
- If using the ELB for TLS termination, it is recommended to run in L7 mode so it can modify `X-Forwarded-Proto` correctly.

### Network Load Balancer (NLB)

The NLB is a second generation AWS Elastic Load Balancer. It can be provisioned by a `type: LoadBalancer` `Service` using an annotation. It can only run at layer 4 of the OSI model and load balances based on connection, allowing it to handle millions of requests per second. See [What is a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) for more details.

* Can be provisioned by a `type: LoadBalancer` `Service`
* Layer 4: TCP, TCP/SSL
   * Protocol support
      * HTTP(S)
      * Websockets
      * HTTP/2
   * Connection based load balancing
   * Cannot modify the request
* Can perform TLS termination

**Notes:**
- The NLB is the most efficient load balancer, capable of handling millions of requests per second. It is recommended for streaming connections since it will maintain the connection stream between the client and $productName$.
- Most of the [load balancer annotations](#load-balancer-annotations) are respected by the NLB. You will need to manually configure the proxy protocol and take an extra step to enable cross zone load balancing.
- Since it operates at L4 and cannot modify the request, you will need to tell $productName$ whether it is terminating TLS or not (see [TLS termination](#tls-termination) notes below).

### Application Load Balancer (ALB)

The ALB is a second generation AWS Elastic Load Balancer. It cannot be provisioned by a `type: LoadBalancer` `Service` and must be deployed and configured manually. It can only run at layer 7 of the OSI model and load balances based on request information, allowing it to perform fine-grained routing to applications. See [What is an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) for more details.
* Cannot be provisioned by a `type: LoadBalancer` `Service`
* Layer 7: HTTP, HTTPS
   * Protocol support
      * HTTP(S)
   * Request based load balancing
   * Can modify the request (append to `X-Forwarded-*` headers)
* Can perform TLS termination

**Notes:**

- The ALB can perform routing based on the path, headers, host, etc. Since $productName$ performs this kind of routing in your cluster, unless you are using the same load balancer to route to services outside of Kubernetes, the overhead of provisioning an ALB is often not worth the benefits.
- If you would like to use an ALB, you will need to expose $productName$ with a `type: NodePort` service and manually configure the ALB to forward to the correct ports.
- None of the [load balancer annotations](#load-balancer-annotations) are respected by the ALB. You will need to manually configure all options.
- The ALB will properly set the `X-Forwarded-Proto` header if terminating TLS (see [TLS termination](#tls-termination) notes below).

## Load balancer annotations

Kubernetes on AWS exposes a mechanism to request certain load balancer configurations by annotating the `type: LoadBalancer` `Service`. The most complete set and explanations of these annotations can be found in this [Kubernetes document](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer). This document will go over the subset that is most relevant when deploying $productName$.

- `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`:

   Configures the load balancer to use a valid certificate ARN to terminate TLS at the Load Balancer.

   Traffic from the client into the load balancer is encrypted but, since TLS is being terminated at the load balancer, traffic from the load balancer to $productName$ will be cleartext. You will need to configure $productName$ differently depending on whether the load balancer is running in L4 or L7 (see [TLS termination](#tls-termination) notes below).

- `service.beta.kubernetes.io/aws-load-balancer-ssl-ports`:

   Configures which port the load balancer will be listening for SSL traffic on. Defaults to `"*"`.

   If you want to enable cleartext redirection, make sure to set this to `"443"` so traffic on port 80 will come in over cleartext.

- `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`:

   Configures the ELB to operate in L4 or L7 mode. Can be set to `"tcp"`/`"ssl"` for an L4 listener or `"http"`/`"https"` for an L7 listener. Defaults to `"tcp"`, or `"ssl"` if `aws-load-balancer-ssl-cert` is set.

- `service.beta.kubernetes.io/aws-load-balancer-type: "nlb"`:

   When this annotation is set, it will launch a [Network Load Balancer (NLB)](#network-load-balancer-nlb) instead of a classic ELB.

- `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`:

   Configures the load balancer to load balance across zones. For high availability, it is typical to deploy nodes across availability zones, so this should be set to `"true"`.

   **Note:** You cannot configure this annotation and `service.beta.kubernetes.io/aws-load-balancer-type: "nlb"` at the same time. You must first deploy the `Service` with an NLB and then update it with the cross zone load balancing configuration.

- `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol`:

   Configures the ELB to enable the proxy protocol. `"*"`, which enables the proxy protocol on all ELB backends, is the only acceptable value.

   The proxy protocol can be used to preserve the client IP address.

   If setting this value, you need to make sure $productName$ is configured to use the proxy protocol (see [preserving the client IP address](#preserving-the-client-ip-address) below).

   **Note:** This annotation will not be recognized if `aws-load-balancer-type: "nlb"` is configured. Proxy protocol must be manually enabled for NLBs.
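As a concrete reference, several of these annotations are often combined on one `Service`. A hypothetical L4 ELB that terminates TLS with an ACM certificate and enables the proxy protocol might look like this (the certificate ARN is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    # Placeholder ARN; substitute your own ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # Only recognized by ELBs; NLBs need the proxy protocol enabled manually
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
  - name: HTTP
    port: 80
    targetPort: 8080
  - name: HTTPS
    port: 443
    targetPort: 8080
  selector:
    service: ambassador
```

Remember that with the proxy protocol enabled, $productName$ also needs the `use_proxy_proto: true` `Module` shown earlier.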
## TLS termination

TLS termination is an important part of any modern web app. $productName$ exposes a lot of TLS termination configuration options that make it a powerful tool for managing encryption between your clients and microservices. Refer to the [TLS Termination](../tls) documentation for more information on how to configure TLS termination at $productName$.

With AWS, the AWS Certificate Manager (ACM) makes it easy to configure TLS termination at an AWS load balancer using the annotations explained above.

This means that, when running $productName$ in AWS, you have the choice between terminating TLS at the load balancer using a certificate from the ACM or at $productName$ using a certificate stored as a `Secret` in your cluster.

The following documentation will cover the different options available to you and how to configure $productName$ and the load balancer to get the most out of each.

### TLS termination at $productName$

Terminating TLS at $productName$ guarantees that you can use all of the TLS termination options that $productName$ exposes, including enforcing the minimum TLS version, setting the `alpn_protocols`, and redirecting cleartext to HTTPS.

If terminating TLS at $productName$, you can provision any AWS load balancer that you want with the following (default) port assignments:

```yaml
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
```

While terminating TLS at $productName$ makes it easier to expose more advanced TLS configuration options, it does have the drawback of not being able to use the ACM to manage certificates. You will have to manage your TLS certificates yourself or use the [automatic certificate management](../host-crd) available in $productName$ to have $productName$ do it for you.

### TLS termination at the load balancer

If you choose to terminate TLS at your Amazon load balancer, you will be able to use the ACM to manage TLS certificates. This option does add some complexity to your $productName$ configuration, depending on which load balancer you are using.

Terminating TLS at the load balancer means that $productName$ will be receiving all traffic as un-encrypted cleartext traffic. Since $productName$ expects to be serving both encrypted and cleartext traffic by default, you will need to make the following configuration changes to $productName$ to support this:

#### L4 load balancer (default ELB or NLB)

* **Load Balancer Service Configuration:**
   The following `Service` will deploy a L4 ELB with TLS termination configured at the load balancer:
   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: ambassador
     namespace: ambassador
     annotations:
       service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ACM_CERT_ARN}}
       service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
   spec:
     type: LoadBalancer
     ports:
     - name: HTTP
       port: 80
       targetPort: 8080
     - name: HTTPS
       port: 443
       targetPort: 8080
     selector:
       service: ambassador
   ```

   Note that the `spec.ports` has been changed so both the HTTP and HTTPS ports forward to the cleartext port 8080 on $productName$.
+ +* **`Host`:** + + The `Host` configures how $productName$ handles encrypted and cleartext traffic. The following `Host` configuration will tell $productName$ to `Route` cleartext traffic that comes in from the load balancer: + + ```yaml + apiVersion: getambassador.io/v3alpha1 + kind: Host + metadata: + name: ambassador + spec: + hostname: "*" + selector: + matchLabels: + hostname: wildcard + acmeProvider: + authority: none + requestPolicy: + insecure: + action: Route + ``` + +**Important:** + +Because L4 load balancers do not set `X-Forwarded` headers, $productName$ will not be able to distinguish between traffic that came in to the load balancer as encrypted or cleartext. Because of this, **HTTP -> HTTPS redirection is not possible when terminating TLS at a L4 load balancer**. + +#### L7 load balancer (ELB or ALB) + +* **Load Balancer Service Configuration (L7 ELB):** + + The following `Service` will deploy a L7 ELB with TLS termination configured at the load balancer: + ```yaml + apiVersion: v1 + kind: Service + metadata: + name: ambassador + namespace: ambassador + annotations: + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ACM_CERT_ARN}} + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443" + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" + spec: + type: LoadBalancer + ports: + - name: HTTP + port: 80 + targetPort: 8080 + - name: HTTPS + port: 443 + targetPort: 8080 + selector: + service: ambassador + ``` + + Note that the `spec.ports` has been changed so both the HTTP and HTTPS ports forward to the cleartext port 8080 on $productName$. + +* **`Host`:** + + The `Host` configures how $productName$ handles encrypted and cleartext traffic. The following `Host` configuration will tell $productName$ to `Redirect` cleartext traffic that comes in from the load balancer: + + ```yaml + apiVersion: getambassador.io/v3alpha1 + kind: Host + metadata: + name: ambassador + spec: + hostname: "*" + selector: + matchLabels: + hostname: wildcard + acmeProvider: + authority: none + requestPolicy: + insecure: + action: Redirect + ``` + +* **Module:** + + Since a L7 load balancer will be able to append to `X-Forwarded` headers, we need to configure $productName$ to trust the value of these headers. The following `Module` will configure $productName$ to trust a single L7 proxy in front of $productName$: + + ```yaml + apiVersion: getambassador.io/v3alpha1 + kind: Module + metadata: + name: ambassador + namespace: ambassador + spec: + config: + xff_num_trusted_hops: 1 + use_remote_address: false + ``` + +**Note:** + +$productName$ uses the value of `X-Forwarded-Proto` to know if the request originated as encrypted or cleartext. Unlike L4 load balancers, L7 load balancers will set this header so HTTP -> HTTPS redirection is possible when terminating TLS at a L7 load balancer. + +## Preserving the client IP address + +Many applications will want to know the IP address of the connecting client. In Kubernetes, this IP address is often obscured by the IP address of the `Node` that is forwarding the request to $productName$ so extra configuration must be done if you need to preserve the client IP address. + +In AWS, there are two options for preserving the client IP address. + +1. Use a L7 Load Balancer that sets `X-Forwarded-For` + + A L7 load balancer will populate the `X-Forwarded-For` header with the IP address of the downstream connecting client. If your clients are connecting directly to the load balancer, this will be the IP address of your client. 
+ + When using L7 load balancers, you must configure $productName$ to trust the value of `X-Forwarded-For` and not append its own IP address to it by setting `xff_num_trusted_hops` and `use_remote_address: false` in the [Ambassador `Module`](../ambassador): + + ```yaml + apiVersion: getambassador.io/v3alpha1 + kind: Module + metadata: + name: ambassador + namespace: ambassador + spec: + config: + xff_num_trusted_hops: 1 + use_remote_address: false + ``` + + After configuring the above `Module`, you will need to restart $productName$ for the changes to take effect. + +2. Use the proxy protocol + + The [proxy protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) is a wrapper around an HTTP request that, like `X-Forwarded-For`, lists the IP address of the downstream connecting client but is able to be set by L4 load balancers as well. + + In AWS, you can configure ELBs to use the proxy protocol by setting the `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"` annotation on the service. You must manually configure this on ALBs and NLBs. + + After configuring the load balancer to use the proxy protocol, you need to tell $productName$ to expect it on the request. + + ```yaml + apiVersion: getambassador.io/v3alpha1 + kind: Module + metadata: + name: ambassador + namespace: ambassador + spec: + config: + use_proxy_proto: true + ``` + + After configuring the above `Module`, you will need to restart $productName$ for the changes to take effect. diff --git a/docs/emissary/latest/topics/running/ambassador-with-gke.md b/docs/emissary/latest/topics/running/ambassador-with-gke.md new file mode 100644 index 000000000..2b90581d2 --- /dev/null +++ b/docs/emissary/latest/topics/running/ambassador-with-gke.md @@ -0,0 +1,187 @@ +# $productName$ with GKE + +Google offers a [L7 load balancer](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress) to +leverage network services such as managed SSL certificates, SSL offloading or the Google content delivery network. +A L7 load balancer in front of $productName$ can be configured by hand or by using the Ingress-GCE resource. Using the +Ingress resource also allows you to create Google-managed SSL certificates through Kubernetes. + +With this setup, HTTPS will be terminated at the Google load balancer. The load balancer will be created and configured by +the Ingress-GCE resource. The load balancer consists of a set of +[forwarding rules](https://cloud.google.com/load-balancing/docs/forwarding-rule-concepts#https_lb) and a set of +[backend services](https://cloud.google.com/load-balancing/docs/backend-service). +In this setup, the ingress resource creates two forwarding rules, one for HTTP and one for HTTPS. The HTTPS +forwarding rule has the SSL certificates attached. Also, one backend service will be created to point to +a list of instance groups at a static port. This will be the NodePort of the $productName$ service. + +With this setup, the load balancer terminates HTTPS and then directs the traffic to the $productName$ service +via the `NodePort`. $productName$ is then doing all the routing to the other internal/external services. + +# Overview of steps + +1. Install and configure the ingress with the HTTP(S) load balancer +2. Install $productName$ +3. Configure and connect $productName$ to ingress +4. Create an SSL certificate and enable HTTPS +5. Create BackendConfig for health checks +6. Configure $productName$ to do HTTP -> HTTPS redirection + +`ambassador` will be running as a `NodePort` service. 
Health checks will be configured to go to a BackendConfig resource.

## 0. $productName$

This guide will install $OSSproductName$. You can also install $AESproductName$. Please note:
- The ingress and the `ambassador` service need to run in the same namespace
- The `ambassador` service needs to be of type `NodePort` and not `LoadBalancer`. Also remove the line with `externalTrafficPolicy: Local`
- Ambassador-Admin needs to be of type `NodePort` instead of `ClusterIP` since it needs to be available for health checks

## 1. Install and configure ingress with the HTTP(S) load balancer

Create a GKE cluster through the web console. Use the release channel. When the cluster
is up and running, follow [this tutorial from Google](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer) to configure
an ingress and a L7 load balancer. After you have completed these steps you will have a running L7 load balancer
and one service.

## 2. Install $productName$

Follow the first section of the [$OSSproductName$ installation guide](../../install/) to install $OSSproductName$.
Stop before defining the `ambassador` service.

$productName$ needs to be deployed as `NodePort` instead of `LoadBalancer` to work with the L7 load balancer and the ingress.

Save the YAML below in ambassador.yaml and apply it with `kubectl apply -f ambassador.yaml`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    service: ambassador
```

You will now have an `ambassador` service running next to your ingress.

## 3. Configure and connect `ambassador` to the ingress

You need to change the ingress for it to send traffic to `ambassador`. Assuming you have followed the tutorial, you should
have a file named basic-ingress.yaml. Change it to point to `ambassador` instead of web:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: ambassador
    servicePort: 8080
```

Now let's connect the other service from the tutorial to `ambassador` by specifying a Mapping:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: web
  namespace: default
spec:
  hostname: "*"
  prefix: /
  service: web:8080
```

All traffic will now go to `ambassador`, and from `ambassador` to the `web` service. You should be able to hit your load balancer and get the output. It may take some time until the load balancer infrastructure has rolled out all changes, and you might see gateway errors during that time.
As a side note: right now all traffic will go to the `web` service, including the load balancer health check.

## 4. Create an SSL certificate and enable HTTPS

Read up on [managed certificates on GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs). You need
a DNS name; point it to the external IP of the load balancer.

certificate.yaml:
```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: www-example-com
spec:
  domains:
    - www.example.com
```

Modify the ingress from before:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    networking.gke.io/managed-certificates: www-example-com
spec:
  backend:
    serviceName: ambassador
    servicePort: 8080
```

Please wait (5-15 minutes) until the certificate is created and all edge servers have the certificates ready.
`kubectl describe ManagedCertificate` will show you the status, or you can view the load balancer in the web console.

You should now be able to access the web service via `https://www.example.com`.

## 5. Configure BackendConfig for health checks

Create and apply a BackendConfig resource with a [custom health check](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health) specified:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ambassador-hc-config
  namespace: ambassador
spec:
  # https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
  timeoutSec: 30
  connectionDraining:
    drainingTimeoutSec: 30
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    port: 8877
    type: HTTP
    requestPath: /ambassador/v0/check_alive
```

Then edit your previous `ambassador.yaml` file to add an annotation referencing the BackendConfig, and apply the file:

```
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    cloud.google.com/backend-config: '{"default": "ambassador-hc-config"}'
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    service: ambassador
```

## 6. Configure $productName$ to do HTTP -> HTTPS redirection

Configure $productName$ to [redirect traffic from HTTP to HTTPS](../tls/cleartext-redirection/#http-https-redirection). You will need to restart $productName$ to effect the changes with `kubectl rollout restart deployment ambassador`.

The result should be that `http://www.example.com` will redirect to `https://www.example.com`.

You can now add more services by specifying the hostname in the Mapping.

diff --git a/docs/emissary/latest/topics/running/ambassador.md b/docs/emissary/latest/topics/running/ambassador.md new file mode 100644 index 000000000..3af41d939 --- /dev/null +++ b/docs/emissary/latest/topics/running/ambassador.md @@ -0,0 +1,558 @@

import Alert from '@material-ui/lab/Alert';

# The `Ambassador` `Module` Resource
**Contents**

* [Envoy](#envoy)
* [General](#general)
* [gRPC](#grpc)
* [Header behavior](#header-behavior)
* [Observability](#observability)
* [Protocols](#protocols)
* [Security](#security)
* [Service health / timeouts](#service-health--timeouts)
* [Traffic management](#traffic-management)
If present, the `ambassador` `Module` defines system-wide configuration for $productName$. You won't need it unless you need to change one of the system-wide configuration settings below.

To use the `ambassador` `Module` to configure $productName$, it MUST be named `ambassador`, otherwise it will be ignored. To create multiple `ambassador` `Module`s in the same Kubernetes namespace, you will need to apply them as annotations with separate `ambassador_id`s: you will not be able to use multiple CRDs.

There are many items that can be configured on the `ambassador` `Module`. They are listed below with examples and grouped by category.

## Envoy

##### Content-Length headers

* `allow_chunked_length: true` tells Envoy to allow requests or responses with both `Content-Length` and `Transfer-Encoding` headers set. The default is `false`.

By default, messages with both `Content-Length` and `Transfer-Encoding` are rejected. If `allow_chunked_length` is `true`, $productName$ will remove the `Content-Length` header and process the message. See the [Envoy documentation for more details](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto.html?highlight=allow_chunked_length#config-core-v3-http1protocoloptions).

##### Envoy access logs

* `envoy_log_path` defines the path of Envoy's access log. By default this is standard output.
* `envoy_log_type` defines the type of access log Envoy will use. Currently, only `json` or `text` are supported.
* `envoy_log_format` defines the Envoy access log line format.

These logs can be formatted using [Envoy operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators) to display specific information about an incoming request. The example below will show only the protocol and duration of a request:

```yaml
envoy_log_path: /dev/fd/1
envoy_log_type: json
envoy_log_format:
  {
    "protocol": "%PROTOCOL%",
    "duration": "%DURATION%"
  }
```

See the Envoy documentation for the [standard log format](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#default-format-string) and a [complete list of log format operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/access_log).

##### Envoy validation timeout

* `envoy_validation_timeout` defines the timeout, in seconds, for validating a new Envoy configuration. The default is 10.

A value of 0 disables Envoy configuration validation. Most installations will not need to change this setting.

For example:

```yaml
envoy_validation_timeout: 30
```

would allow 30 seconds to validate the generated Envoy configuration.

##### Error response overrides

* `error_response_overrides` permits changing the status code and body text for 4XX and 5XX response codes. The default is not to override any error responses.

By default, $productName$ will pass through error responses without modification, and errors generated locally will use Envoy's default response body, if any.

See [using error response overrides](../custom-error-responses) for usage details. For example, this configuration:

```yaml
error_response_overrides:
  - on_status_code: 404
    body:
      text_format: "File not found"
```

would explicitly modify the body of 404s to say "File not found".
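Overrides are not limited to plain text. Assuming the `json_format` body type described in the error response overrides documentation, a sketch of a JSON override looks like:

```yaml
error_response_overrides:
  - on_status_code: 404
    body:
      json_format:          # assumed body type; see the error response overrides docs
        error: "not found"
        status: "%RESPONSE_CODE%"
```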
##### Forwarding client cert details

Two attributes allow providing information about the client's TLS certificate to upstream services:

* `forward_client_cert_details: true` will tell Envoy to add the `X-Forwarded-Client-Cert` header to upstream
  requests. The default is `false`.
* `set_current_client_cert_details` will tell Envoy what information to include in the
  `X-Forwarded-Client-Cert` header. The default is not to include the `X-Forwarded-Client-Cert` header at all.

$productName$ will not forward information about a certificate that it cannot validate.

See the Envoy documentation on [X-Forwarded-Client-Cert](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers.html?highlight=xfcc#x-forwarded-client-cert) and [SetCurrentClientCertDetails](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto.html#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-setcurrentclientcertdetails) for more information.

```yaml
forward_client_cert_details: true
set_current_client_cert_details: SANITIZE
```

##### Server name

* `server_name` allows overriding the server name that Envoy sends with responses to clients. The default is `envoy`.

##### Suppress Envoy headers

* `suppress_envoy_headers: true` will prevent $productName$ from emitting certain additional
  headers to HTTP requests and responses. The default is `false`.

For the exact set of headers covered by this config, see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/router_filter#config-http-filters-router-headers-set)

---
## General

##### Ambassador ID

* `ambassador_id` allows using multiple instances of $productName$ in the same cluster. The default is unset.

We recommend _not_ setting `ambassador_id` if you are running only one instance of $productName$ in your cluster. For more information, see the [Running and Deployment documentation](../running/#ambassador_id).

If used, the `ambassador_id` value must be an array, for example:

```yaml
ambassador_id: [ "test_environment" ]
```

##### Defaults

* `defaults` provides a dictionary of default values that will be applied to various $productName$ resources. The default is to have no defaults configured.

See [Using `ambassador` `Module` Defaults](../../using/defaults) for more information.

---

## gRPC

##### Bridges

* `enable_grpc_http11_bridge: true` will enable the gRPC-HTTP/1.1 bridge. The default is `false`.
* `enable_grpc_web: true` will enable the gRPC-Web bridge. The default is `false`.

gRPC is a binary HTTP/2-based protocol. While this allows high performance, it can be problematic for clients that are unable to speak HTTP/2 (such as JavaScript in many browsers, or legacy clients in difficult-to-update environments).

The gRPC-HTTP/1.1 bridge can translate HTTP/1.1 calls with `Content-Type: application/grpc` into gRPC calls: $productName$ will perform buffering and translation as necessary. For more details on the translation process, see the [Envoy gRPC HTTP/1.1 bridge documentation](https://www.envoyproxy.io/docs/envoy/v1.11.2/configuration/http_filters/grpc_http1_bridge_filter.html).

Likewise, gRPC-Web is a JSON and HTTP-based protocol that allows browser-based clients to take advantage of gRPC protocols. The gRPC-Web specification requires a server-side proxy to translate between gRPC-Web requests and gRPC backend services, and $productName$ can fill this role when the gRPC-Web bridge is enabled. For more details on the translation process, see the [Envoy gRPC HTTP/1.1 bridge documentation](https://www.envoyproxy.io/docs/envoy/v1.11.2/configuration/http_filters/grpc_http1_bridge_filter.html); for more details on gRPC-Web itself, see the [gRPC-Web client GitHub repo](https://github.com/grpc/grpc-web).
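Both bridges are switched on in the `ambassador` `Module`; for example, a minimal sketch enabling the gRPC-Web bridge:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    enable_grpc_web: true   # use enable_grpc_http11_bridge for the HTTP/1.1 bridge
```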
The gRPC-Web specification requires a server-side proxy to translate between gRPC-Web requests and gRPC backend services, and $productName$ can fill this role when the gRPC-Web bridge is enabled. For more details on the translation process, see the [Envoy gRPC-Web filter documentation](https://www.envoyproxy.io/docs/envoy/v1.11.2/configuration/http_filters/grpc_web_filter.html); for more details on gRPC-Web itself, see the [gRPC-Web client GitHub repo](https://github.com/grpc/grpc-web).

##### Statistics

* `grpc_stats` allows enabling telemetry for gRPC calls using Envoy's [gRPC Statistics Filter](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_stats_filter). The default is disabled.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    grpc_stats:
      upstream_stats: true
      services:
        - name: <package>.<service>
          method_names: [<method>]
```

Supported parameters:
* `all_methods`
* `services`
* `upstream_stats`

Available metrics:
* `envoy_cluster_grpc_<service>_<status_code>`
* `envoy_cluster_grpc_<service>_request_message_count`
* `envoy_cluster_grpc_<service>_response_message_count`
* `envoy_cluster_grpc_<service>_success`
* `envoy_cluster_grpc_<service>_total`
* `envoy_cluster_grpc_upstream_<stats>` - **only when `upstream_stats: true`**

Please note that `<service>` will only be present if `all_methods` is set or the service and the method are present under `services`. If `all_methods` is false or the method is not on the list, the available metrics will be in the format `envoy_cluster_grpc_<metric_name>`.

* `all_methods`: If set to true, emit stats for all service/method names. If set to false, emit stats for all service/message types to the same stats without including the service/method in the name. **This option is only safe if all clients are trusted. If this option is enabled with untrusted clients, the clients could cause unbounded growth in the number of stats in Envoy, using unbounded memory and potentially slowing down stats pipelines.**

* `services`: If set, specifies an allow list of service/methods that will have individual stats emitted for them. Any call that does not match the allow list will be counted in a stat with no method specifier (generic metric).

  If both `all_methods` and `services` are present, `all_methods` will be ignored.

* `upstream_stats`: If true, the filter will gather a histogram for the request time of the upstream.

---

## Header behavior

##### Header case

* `proper_case: true` forces headers to have their "proper" case as shown in RFC7230. The default is `false`.
* `header_case_overrides` allows forcing certain headers to have specific casing. The default is to override no headers.

`proper_case` and `header_case_overrides` are mutually exclusive.

RFC7230 specifies that HTTP header names are case-insensitive, but always shows and refers to headers as starting with a capital letter, continuing in lowercase, then repeating the single capital letter after each non-alpha character. This has become an established convention when working with HTTP:

- `Host`, not `host` or `HOST`
- `Content-Type`, not `content-type`, `Content-type`, or `cOnTeNt-TyPe`

Internally, Envoy typically uses [all lowercase](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/header_casing) for header names.
This is fully compliant with RFC7230, but some services and clients may require headers to follow the stricter casing rules implied by RFC7230: in that situation, setting `proper_case: true` will tell Envoy to force all headers to use the casing above.

Alternately, it is also possible - although less common - for services or clients to require some other specific casing for specific headers. `header_case_overrides` specifies an array of header names: if a case-insensitive match for a header is found in the list, the matching header will be replaced with the one in the list. For example, the following configuration will force headers that match `X-MY-Header` and `X-EXPERIMENTAL` to use that exact casing, regardless of the original case used in flight:

```yaml
header_case_overrides:
- X-MY-Header
- X-EXPERIMENTAL
```

If the upstream service responds with `x-my-header: 1`, $productName$ will return `X-MY-Header: 1` to the client. Similarly, if the client includes `x-ExperiMENTAL: yes` in its request, the request to the upstream service will include `X-EXPERIMENTAL: yes`. Other headers will not be altered; $productName$ will use its default lowercase header casing.

Please see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto.html#config-core-v3-http1protocoloptions-headerkeyformat) for more information. Note that in general, we recommend updating clients and services rather than relying on `header_case_overrides`.

##### Linkerd interoperability

* `add_linkerd_headers: true` will force $productName$ to include the `l5d-dst-override` header for Linkerd. The default is `false`.

When using older Linkerd installations, requests going to an upstream service may need to include the `l5d-dst-override` header to ensure that Linkerd will route them correctly. Setting `add_linkerd_headers` does this automatically. See the [Mapping](../../using/mappings#linkerd-interoperability-add_linkerd_headers) documentation for more details.

##### Max request headers size

* `max_request_headers_kb` sets the maximum allowed request header size in kilobytes. If not set, the default is 60 KB.

See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto.html) for more information.

##### Preserve external request ID

* `preserve_external_request_id: true` will preserve any `X-Request-Id` header presented by the client. The default is `false`, in which case Envoy will always generate a new `X-Request-Id` value.

##### Strip matching host port

* `strip_matching_host_port: true` will tell $productName$ to strip any port number from the host/authority header before processing and routing the request if that port number matches the port number of Envoy's listener. The default is `false`, which will preserve any port number.

In the default installation of $productName$ the public port is 443, which then maps internally to 8443, so this only works in custom installations where the public Service port and Envoy listener port match.

A common reason to try using this property is if you are using gRPC with TLS and your client library appends the port to the Host header (i.e. `myurl.com:443`). We have an alternative solution in our [gRPC guide](../../../howtos/grpc#working-with-host-headers-that-include-the-port) that uses a [Lua script](#lua-scripts) to remove all ports from every Host header for that use case.
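
For reference, here is a sketch combining several of the header-behavior settings above in one `Module`; the values shown are illustrative assumptions, not recommendations:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    header_case_overrides:        # force exact casing for these two headers
    - X-MY-Header
    - X-EXPERIMENTAL
    add_linkerd_headers: true     # emit l5d-dst-override for older Linkerd installs
    max_request_headers_kb: 96    # raise the request-header limit from the 60 KB default
    preserve_external_request_id: true
```
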

---

## Miscellaneous

##### Envoy's admin port

* `admin_port` specifies the port where $productName$'s Envoy will listen for low-level admin requests. The default is 8001; it should almost never need changing.

##### Lua scripts

* `lua_scripts` allows defining a custom Lua script to run on every request. The default is to run no script.

This is useful for simple use cases that mutate requests or responses, for example to add a custom header:

```yaml
lua_scripts: |
  function envoy_on_response(response_handle)
    response_handle:headers():add("Lua-Scripts-Enabled", "Processed")
  end
```

For more details on the Lua API, see the [Envoy Lua filter documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/lua_filter.html).

Some caveats around the embedded scripts:

* They run in-process, so any bugs in your Lua script can break every request.
* They're run on every request/response to every URL.
* They're inlined in the $productName$ YAML; as such, we do not recommend using Lua scripts for long, complex logic.

If you need more flexible and configurable options, $AESproductName$ supports a [pluggable Filter system](/docs/edge-stack/latest/topics/using/filters/).

##### Merge slashes

* `merge_slashes: true` will cause $productName$ to merge adjacent slashes in incoming paths when doing route matching and request filtering. The default is `false`.

For example, with `merge_slashes: true`, a request for `//foo///bar` would be matched to a `Mapping` with prefix `/foo/bar`.

##### Modify Default Buffer Size

By default, the Envoy that ships with $productName$ uses a default soft limit of 1 MiB for an upstream service's read and write buffers. The `buffer_limit_bytes` setting allows you to configure that limit. See the [Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/cluster.proto.html?highlight=per_connection_buffer_limit_bytes) for more information.

```yaml
buffer_limit_bytes: 5242880 # Sets the default buffer limit to 5 MiB
```

##### Use $productName$ namespace for service resolution

* `use_ambassador_namespace_for_service_resolution: true` tells $productName$ to assume that unqualified services are in the same namespace as $productName$. The default is `false`.

By default, when $productName$ sees a service name without a namespace, it assumes that the namespace is the same as the resource referring to the service. For example, for this `Mapping`:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-1
  namespace: foo
spec:
  hostname: "*"
  prefix: /
  service: upstream
```

$productName$ would look for a Service named `upstream` in namespace `foo`.

However, if `use_ambassador_namespace_for_service_resolution` is `true`, this `Mapping` would instead look for a Service named `upstream` in the namespace in which $productName$ is installed.

---

## Observability

##### Diagnostics

* `diagnostics` controls access to the diagnostics interface.

By default, $productName$ creates a `Mapping` that allows access to the diagnostic interface at `/ambassador/v0/diag` from anywhere in the cluster. To disable the `Mapping` entirely, set `diagnostics.enabled` to `false`:

```yaml
diagnostics:
  enabled: false
```

With diagnostics disabled, `/ambassador/v0/diag` will respond with 404; however, the service itself is still running, and `/ambassador/v0/diag/` is reachable from inside the $productName$ Pod at `http://localhost:8877`.
You can use Kubernetes port forwarding to set up remote access to the diagnostics page temporarily:

```
kubectl port-forward -n ambassador deploy/ambassador 8877
```

Alternately, to leave the `Mapping` intact but restrict access to only the local Pod, set `diagnostics.allow_non_local` to `false`:

```yaml
diagnostics:
  allow_non_local: false
```

See [Protecting Access to the Diagnostics Interface](../../../howtos/protecting-diag-access) for more information.

---

## Protocols

##### Enable IPv4 and IPv6

* `enable_ipv4` determines whether IPv4 DNS lookups are enabled. The default is `true`.
* `enable_ipv6` determines whether IPv6 DNS lookups are enabled. The default is `false`.

If both IPv4 and IPv6 are enabled, $productName$ will prefer IPv6. This can have strange effects if $productName$ receives `AAAA` records from a DNS lookup, but the underlying network of the pod doesn't actually support IPv6 traffic. For this reason, the default is IPv4 only.

A [`Mapping`](../../using/mappings) can override both `enable_ipv4` and `enable_ipv6`, but if either is not stated explicitly in a `Mapping`, the values here are used. Most $productName$ installations will not need to override these settings in `Mapping`s.

##### HTTP/1.0 support

* `enable_http10: true` will enable handling incoming HTTP/1.0 and HTTP/0.9 requests. The default is `false`.

---

## Security

##### Cross origin resource sharing (CORS)

* `cors` sets the default CORS configuration for all mappings in the cluster. The default is that CORS is not configured.

For example:

```yaml
cors:
  origins: http://foo.example,http://bar.example
  methods: POST, GET, OPTIONS
  ...
```

See the [CORS syntax](../../using/cors) for more information.

##### IP allow and deny

* `ip_allow` and `ip_deny` define HTTP source IP address ranges to allow or deny. The default is to allow all traffic.

Only one of `ip_allow` and `ip_deny` may be specified: a list of ranges to allow and a separate list to deny may not both be given.

If `ip_allow` is specified, any traffic not matching a range to allow will be denied. If `ip_deny` is specified, any traffic not matching a range to deny will be allowed.

Both take a list of IP address ranges with a keyword specifying how to interpret the address, for example:

```yaml
ip_allow:
- peer: 127.0.0.1
- remote: 99.99.0.0/16
```

The keyword `peer` specifies that the match should happen using the IP address of the other end of the network connection carrying the request: `X-Forwarded-For` and the `PROXY` protocol are both ignored. Here, our example specifies that connections originating from the $productName$ pod itself should always be allowed.

The keyword `remote` specifies that the match should happen using the IP address of the HTTP client, taking into account `X-Forwarded-For` and the `PROXY` protocol if they are allowed (if they are not allowed, or not present, the peer address will be used instead). This permits matches to behave correctly when, for example, $productName$ is behind a layer 7 load balancer. Here, our example specifies that HTTP clients from the IP address range `99.99.0.0` - `99.99.255.255` will be allowed.

You may specify as many ranges for each kind of keyword as desired.

##### Rejecting Client Requests With Escaped Slashes

* `reject_requests_with_escaped_slashes: true` will tell $productName$ to reject requests containing escaped slashes. The default is `false`.

When set to `true`, $productName$ will reject any client requests that contain escaped slashes (`%2F`, `%2f`, `%5C`, or `%5c`) in their URI path by returning HTTP 400. By default, $productName$ will forward these requests unmodified.

In general, a request with an escaped slash will _not_ match a `Mapping` prefix with an unescaped slash. However, external authentication services and other upstream services may handle escaped slashes differently, which can lead to security issues if paths with escaped slashes are allowed. By setting `reject_requests_with_escaped_slashes: true`, this class of security concern can be largely avoided.

##### Trust downstream client IP

* `use_remote_address: false` tells $productName$ that it cannot trust the remote address of incoming connections, and must instead rely exclusively on the `X-Forwarded-For` header. The default is `true`.

When `true` (the default), $productName$ will append its own IP address to the `X-Forwarded-For` header so that upstream services of $productName$ can get the full set of IP addresses that have propagated a request. You may also need to set `externalTrafficPolicy: Local` on your `LoadBalancer` to propagate the original source IP address.

See the [Envoy documentation on the `X-Forwarded-For` header](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers) and the [Kubernetes documentation on preserving the client source IP](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) for more details.

##### `X-Forwarded-For` trusted hops

* `xff_num_trusted_hops` sets the default value for [the `l7Depth` setting of a `Listener`](../listener/#securitymodel). The default is 0.

See the [`Listener` documentation](../listener/#securitymodel) for more details.

---

## Service health / timeouts

##### Incoming connection idle timeout

* `listener_idle_timeout_ms` sets the idle timeout for incoming connections. The default is no timeout, meaning that incoming connections may remain idle forever.

If set, this specifies the length of time (in milliseconds) that an incoming connection is allowed to be idle before being dropped. This can be useful if you have proxies and/or firewalls in front of $productName$ and need to control how $productName$ initiates closing an idle TCP connection.

Please see the [Envoy documentation on HTTP protocol options](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/protocol.proto#config-core-v3-httpprotocoloptions) for more information.

##### Keepalive

* `keepalive` sets the global TCP keepalive settings.

$productName$ will use these settings for all `Mapping`s unless overridden in a `Mapping`'s configuration. Without `keepalive`, $productName$ follows the operating system defaults.

For example, the following configuration:

```yaml
keepalive:
  time: 2
  interval: 2
  probes: 100
```

would enable keepalives every two seconds (`interval`), starting after two seconds of idleness (`time`), with the connection being dropped if 100 keepalives are sent with no response (`probes`). For more information, see the [Envoy keepalive documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto.html#config-core-v3-tcpkeepalive).

##### Upstream idle timeout

* `cluster_idle_timeout_ms` sets the default idle timeout for upstream connections (by default, one hour).

If set, this specifies the timeout (in milliseconds) after which an idle upstream connection is closed. The idle timeout can be completely disabled by setting `cluster_idle_timeout_ms: 0`, which risks idle upstream connections never getting closed.

If not set, the default idle timeout is one hour.

You can override this setting with [`idle_timeout_ms` on a `Mapping`](../../using/timeouts/).

##### Upstream max lifetime

* `cluster_max_connection_lifetime_ms` sets the default maximum lifetime of an upstream connection.

If set, this specifies the maximum amount of time (in milliseconds) after which an upstream connection is drained and closed, regardless of whether it is idle or not. Connection recreation incurs additional overhead when processing requests. The overhead tends to be nominal for plaintext (HTTP) connections within the same cluster, but may be more significant for secure HTTPS connections or upstreams with high latency. For this reason, it is generally recommended to set this value to at least 10000 ms to minimize the amortized cost of connection recreation while providing a reasonable bound for connection lifetime.

If not set (or set to zero), then upstream connections may remain open for arbitrarily long.

You can override this setting with [`cluster_max_connection_lifetime_ms` on a `Mapping`](../../using/timeouts/).

##### Request timeout

* `cluster_request_timeout_ms` sets the default end-to-end timeout for a single request.

If set, this end-to-end timeout (in milliseconds) is applied to every request.

If not set, the default is three seconds.

You can override this setting with [`timeout_ms` on a `Mapping`](../../using/timeouts/).

##### Readiness and liveness probes

* `readiness_probe` sets whether `/ambassador/v0/check_ready` is automatically mapped
* `liveness_probe` sets whether `/ambassador/v0/check_alive` is automatically mapped

By default, $productName$ creates `Mapping`s that support readiness and liveness checks at `/ambassador/v0/check_ready` and `/ambassador/v0/check_alive`. To disable the readiness `Mapping` entirely, set `readiness_probe.enabled` to `false`:

```yaml
readiness_probe:
  enabled: false
```

Likewise, to disable the liveness `Mapping` entirely, set `liveness_probe.enabled` to `false`:

```yaml
liveness_probe:
  enabled: false
```

A disabled probe endpoint will respond with 404; however, the service is still running, and will be accessible on localhost port 8877 from inside the $productName$ Pod.

You can change these to route requests to some other service. For example, to have the readiness probe map to the `quote` application's health check:

```yaml
readiness_probe:
  enabled: true
  service: quote
  rewrite: /backend/health
```

The liveness and readiness probes both support `prefix` and `rewrite`, with the same meanings as for [Mappings](../../using/mappings).

##### Retry policy

* `retry_policy` lets you add resilience to your services in case of request failures by performing automatic retries.

```yaml
retry_policy:
  retry_on: "5xx"
```

---

## Traffic management

##### Circuit breaking

* `circuit_breakers` sets the global circuit breaking configuration defaults.

You can override the circuit breaker settings for individual `Mapping`s. By default, $productName$ does not configure any circuit breakers. For more information, see the [circuit breaking reference](../../using/circuit-breakers).
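
As a sketch, global circuit-breaker defaults might look like the following; the field names are those documented in the circuit breaking reference, and the limits shown are illustrative assumptions:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    circuit_breakers:
    - priority: default           # applies to normal-priority requests
      max_connections: 2048       # cap on concurrent upstream connections
      max_pending_requests: 2048  # cap on requests queued while waiting for a connection
```
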

##### Default label domain and labels

* `default_labels` sets default domains and labels to apply to every request.

For more on how to use the default labels, see the [Rate Limit reference](../../using/rate-limits/#attaching-labels-to-requests).

##### Default load balancer

* `load_balancer` sets the default load balancing type and policy.

For example, to set the default load balancer to `least_request`:

```yaml
load_balancer:
  policy: least_request
```

If not set, the default is to use round-robin load balancing. For more information, see the [load balancer reference](../load-balancer).

diff --git a/docs/emissary/latest/topics/running/custom-error-responses.md b/docs/emissary/latest/topics/running/custom-error-responses.md new file mode 100644 index 000000000..b0ad98772 --- /dev/null +++ b/docs/emissary/latest/topics/running/custom-error-responses.md

import Alert from '@material-ui/lab/Alert';

# Custom error responses

Custom error responses set overrides for HTTP response statuses generated either by $productName$ or upstream services.

They can be configured either on the $productName$ [`Module`](../ambassador) or on a [`Mapping`](../../using/intro-mappings/); the schema is identical. See below for more information on [rule precedence](#rule-precedence).

- `on_status_code`: HTTP status code to match for this rewrite rule. Only 4xx and 5xx classes are supported.
- `body`: Describes the response body contents and format.
  + `content_type`: A string that sets the content type of the response.
  + `text_format`: A string whose value will be used as the new response body. `Content-Type` will default to `text/plain` if unspecified.
  + `json_format`: A config object whose keys and values will be serialized as JSON and used as the new response body.
  + `text_format_source`: Describes a file to be used as the response. If used, `filename` must be set and the file must exist on the $productName$ pod.
    * `filename`: A file path on the $productName$ pod that will be used as the new response body.

Only one of `text_format`, `json_format`, or `text_format_source` may be provided.

Custom response bodies are subject to Envoy's AccessLog substitution syntax and variables; see [Envoy's documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-format-strings) for more information.

Note that the AccessLog substitutions use `%` as a delimiter (for example, `%RESPONSE_CODE%`). To include a literal `%` in a custom response body, use `%%`. For example,

```
%%RESPONSE_CODE%% %RESPONSE_CODE%
```

would render as

```
%RESPONSE_CODE% 401
```

for a request that resulted in a response code of 401.

  If the % symbol is not escaped as above (%%), it may only appear as part of an AccessLog substitution, for example %RESPONSE_CODE% or %PROTOCOL%. If a % is neither part of a valid substitution nor an escape, $productName$ will ignore the custom error response.

## Simple response bodies

Simple responses can be added quickly for convenience.
They are inserted into the manifest as either text or JSON:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    error_response_overrides:
      - on_status_code: 404
        body:
          text_format: "File not found"
      - on_status_code: 500
        body:
          json_format:
            error: "Application error"
            status: "%RESPONSE_CODE%"
            cluster: "%UPSTREAM_CLUSTER%"
```

## File response bodies

For more complex response bodies, a file can be returned as the response. This could be used for a customer-friendly HTML document, for example. Use `text_format_source` with a `filename` set as a path on the $productName$ pod. `content_type` should be used to set the specific file type, such as `text/html`.

First configure the $productName$ module:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    error_response_overrides:
      - on_status_code: 404
        body:
          content_type: "text/html"
          text_format_source:
            filename: '/ambassador/ambassador-errorpages/404.html'
```

Then create the config map containing the HTML file:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ambassador-errorpages
  namespace: ambassador
data:
  404.html: |
    <html>
      <h1>File not found</h1>
      <p>Uh oh, looks like you found a bad link.</p>
      <p>Click <a href="/">here</a> to go back home.</p>
    </html>
```

Finally, mount the configmap to the $productName$ pod:

> **NOTE:** The following YAML is in [patch format](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) and does not represent the entire deployment spec.

```yaml
spec:
  template:
    spec:
      containers:
      - name: aes
        volumeMounts:
        - name: ambassador-errorpages
          mountPath: /ambassador/ambassador-errorpages
      volumes:
      - name: ambassador-errorpages
        configMap:
          name: ambassador-errorpages
```

## Known limitations

- `text_format` and `text_format_source` perform no string escaping on expanded variables. This may break the structural integrity of your response body if, for example, the variable contains HTML data and the response content type is `text/html`. Be careful when using variables in this way, and consider whether the value may be coming from an untrusted source like request or response headers.
- The `json_format` field does not support sourcing from a file. Instead consider using `text_format_source` with a JSON file and `content_type` set to `application/json`.

## Rule precedence

If rules are set on both the `Module` and on a `Mapping`, the rule set on the `Mapping` will take precedence, ignoring any `Module` rules. This is true even if the rules are for different status codes. For example, consider this configuration:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    error_response_overrides:
      - on_status_code: 404
        body:
          text_format: "Global 404"
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: ambassador
  namespace: ambassador
spec:
  hostname: "*"
  prefix: /api/
  service: quote
  error_response_overrides:
    - on_status_code: 429
      body:
        text_format: "Per-mapping 429"
```

The `Mapping` rule will prevent an override on the 404 rule defined on the `Module` for this `Mapping`. The rule on the `Mapping` will cause all rules on the `Module` to be ignored, regardless of the status codes specified. A separate `Mapping` with no override rules defined will follow the 404 rule on the `Module`.

## Disabling response overrides

If error response overrides are set on the `Module`, they can be disabled on individual mappings by setting `bypass_error_response_overrides: true` on those mappings:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
  namespace: ambassador
spec:
  hostname: "*"
  prefix: /api/
  service: quote
  bypass_error_response_overrides: true
```

This is useful if a portion of the domain serves an API whose errors should not be rewritten, but all other APIs should contain custom errors.

diff --git a/docs/emissary/latest/topics/running/debugging.md b/docs/emissary/latest/topics/running/debugging.md new file mode 100644 index 000000000..bd376483f --- /dev/null +++ b/docs/emissary/latest/topics/running/debugging.md

# Debugging

If you’re experiencing issues with $productName$ and cannot diagnose them through the `/ambassador/v0/diag/` diagnostics endpoint, this document covers various approaches and advanced use cases for debugging $productName$ issues.

The following sections assume that you already have a running $productName$ installation.

## A Note on TLS

[TLS] can appear intractable if you haven't set up [certificates] correctly.
If you're having trouble with TLS, always [check the logs] of your $productName$ Pods and look for certificate errors.

[TLS]: ../tls
[certificates]: ../tls#certificates-and-secrets
[check the logs]: #review-logs

## Check $productName$ status

1. First, check the $productName$ Deployment with the following: `kubectl get -n $productNamespace$ deployments`

   After a brief period, the terminal will print something similar to the following:

   ```
   $ kubectl get -n $productNamespace$ deployments
   NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   $productDeploymentName$         3         3         3            3           1m
   $productDeploymentName$-apiext  3         3         3            3           1m
   ```

2. Check that the “desired” number of Pods matches the “current” and “available” number of Pods.

3. If they are **not** equal, check the status of the associated Pods with the following command: `kubectl get pods -n $productNamespace$`.

   The terminal should print something similar to the following:

   ```
   $ kubectl get pods -n $productNamespace$
   NAME                                             READY     STATUS    RESTARTS   AGE
   $productDeploymentName$-85c4cf67b-4pfj2          1/1       Running   0          1m
   $productDeploymentName$-85c4cf67b-fqp9g          1/1       Running   0          1m
   $productDeploymentName$-85c4cf67b-vg6p5          1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-j34pf   1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-9gfpq   1/1       Running   0          1m
   $productDeploymentName$-apiext-736f8497d-p5wgx   1/1       Running   0          1m
   ```

   The actual names of the Pods will vary. All the Pods should indicate `Running`, and all should show 1/1 containers ready.

4. If the Pods do not look healthy, use the following command for details about the history of the Deployment: `kubectl describe -n $productNamespace$ deployment $productDeploymentName$`

   * Look for data in the “Replicas” field near the top of the output. For example:
     `Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable`

   * Look for data in the “Events” log field near the bottom of the output, which often displays data such as a failed image pull, RBAC issues, or a lack of cluster resources. For example:

     ```
     Events:
       Type     Reason             Age   From                   Message
       ----     ------             ----  ----                   -------
       Normal   ScalingReplicaSet  2m    deployment-controller  Scaled up replica set $productDeploymentName$-85c4cf67b to 3
     ```

5. Additionally, use the following command to “describe” the individual Pods: `kubectl describe pods -n $productNamespace$ <$productDeploymentName$-pod-name>`

   * Look for data in the “Status” field near the top of the output. For example, `Status: Running`

   * Look for data in the “Events” field near the bottom of the output, as it will often show issues such as image pull failures, volume mount issues, and container crash loops.
     For example:

     ```
     Events:
       Type     Reason                 Age   From                                                      Message
       ----     ------                 ----  ----                                                      -------
       Normal   Scheduled              4m    default-scheduler                                         Successfully assigned $productDeploymentName$-85c4cf67b-4pfj2 to gke-ambassador-demo-default-pool-912378e5-dkxc
       Normal   SuccessfulMountVolume  4m    kubelet, gke-ambassador-demo-default-pool-912378e5-dkxc   MountVolume.SetUp succeeded for volume "$productDeploymentName$-token-tmk94"
       Normal   Pulling                4m    kubelet, gke-ambassador-demo-default-pool-912378e5-dkxc   pulling image "docker.io/datawire/ambassador:0.40.0"
       Normal   Pulled                 4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc   Successfully pulled image "docker.io/datawire/ambassador:0.40.0"
       Normal   Created                4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc   Created container
       Normal   Started                4m    kubelet, gke-$productDeploymentName$-demo-default-pool-912378e5-dkxc   Started container
     ```

In both the Deployment and the individual Pods, take the necessary action to address any discovered issues.

## Review logs

$productName$ logging can provide information on anything that might be abnormal or malfunctioning. While there may be a large amount of data to sort through, look for key errors such as the $productName$ process restarting unexpectedly, or a malformed Envoy configuration.

$productName$ has two major log mechanisms: $productName$ logging and Envoy logging. Both appear in the normal `kubectl logs` output, and both can have additional debug-level logging enabled.

  Enabling debug-level logging can produce a lot of log output — enough to potentially impact the performance of $productName$. We don't recommend running with debug logging enabled as a matter of course; it's usually better to enable it only when needed, then reset logging to normal once you're finished debugging.

### $productName$ debug logging

Much of $productName$'s logging is concerned with the business of noticing changes to Kubernetes resources that specify the $productName$ configuration, and generating new Envoy configuration in response to those changes. Enabling debug logging for this part of the system is under the control of two environment variables:

- Set `AES_LOG_LEVEL=debug` to debug the early boot sequence and $productName$'s interactions with the Kubernetes cluster (finding changed resources, etc.).
- Set `AMBASSADOR_DEBUG=diagd` to debug the process of generating an Envoy configuration from the input resources.

### $productName$ Envoy logging

Envoy logging is concerned with the actions Envoy is taking for incoming requests. Typically, Envoy will only output access logs and certain errors, but enabling Envoy debug logging will show very verbose information about the actions Envoy is actually taking. It can be useful for understanding why connections are being closed, or whether an error status is coming from Envoy or from the upstream service.

It is possible to enable Envoy logging at boot, but for the most part, it's safer to enable it at runtime, right before sending a request that is known to have problems. To enable Envoy debug logging, use `kubectl exec` to get a shell on the $productName$ pod, then:

   ```
   curl -XPOST http://localhost:8001/logging?level=trace && \
   sleep 10 && \
   curl -XPOST http://localhost:8001/logging?level=warning
   ```

This will turn on Envoy trace logging for ten seconds, then turn it off again.

### Viewing logs

To view the logs from $productName$:
1. Use the following command to target an individual $productName$ Pod: `kubectl get pods -n $productNamespace$`

   The terminal will print something similar to the following:

   ```
   $ kubectl get pods -n $productNamespace$
   NAME                                      READY     STATUS    RESTARTS   AGE
   $productDeploymentName$-85c4cf67b-4pfj2   1/1       Running   0          3m
   ```

2. Then, run the following: `kubectl logs -n $productNamespace$ <$productDeploymentName$-pod-name>`

   The terminal will print something similar to the following:

   ```
   $ kubectl logs -n $productNamespace$ $productDeploymentName$-85c4cf67b-4pfj2
   2018-10-10 12:26:50 kubewatch 0.40.0 INFO: generating config with gencount 1 (0 changes)
   /usr/lib/python3.6/site-packages/pkg_resources/__init__.py:1235: UserWarning: /ambassador is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
     warnings.warn(msg, UserWarning)
   2018-10-10 12:26:51 kubewatch 0.40.0 INFO: Scout reports {"latest_version": "0.40.0", "application": "ambassador", "notices": [], "cached": false, "timestamp": 1539606411.061929}

   2018-10-10 12:26:54 diagd 0.40.0 [P15TMainThread] INFO: thread count 3, listening on 0.0.0.0:8877
   [2018-10-10 12:26:54 +0000] [15] [INFO] Starting gunicorn 19.8.1
   [2018-10-10 12:26:54 +0000] [15] [INFO] Listening at: http://0.0.0.0:8877 (15)
   [2018-10-10 12:26:54 +0000] [15] [INFO] Using worker: threads
   [2018-10-10 12:26:54 +0000] [42] [INFO] Booting worker with pid: 42
   2018-10-10 12:26:54 diagd 0.40.0 [P42TMainThread] INFO: Starting periodic updates
   [2018-10-10 12:27:01.977][21][info][main] source/server/drain_manager_impl.cc:63] shutting down parent after drain
   ```

Note that many deployments run multiple $productName$ Pods, and the logs are independent for each Pod.

## Examine Pod and container contents

You can examine the contents of the $productName$ Pod for issues: for example, you can check that volume mounts are correct and TLS certificates are present in the required directory, determine whether the Pod has the latest $productName$ configuration, and verify that the generated Envoy configuration is correct or as expected. In these instructions, we will look for problems related to the Envoy configuration.

1. To look into a $productName$ Pod, get a shell on the Pod using `kubectl exec`. For example,

   ```
   kubectl exec -it -n $productNamespace$ <$productDeploymentName$-pod-name> -- bash
   ```

2. Determine the latest configuration. If you haven't overridden the configuration directory, the latest configuration will be in `/ambassador/snapshots`. If you have overridden it, $productName$ saves configurations in `$AMBASSADOR_CONFIG_BASE_DIR/snapshots`.

   In the snapshots directory:

   * `snapshot.yaml` contains the full input configuration that $productName$ has found;
   * `aconf.json` contains the $productName$ configuration extracted from the snapshot;
   * `ir.json` contains the IR constructed from the $productName$ configuration; and
   * `econf.json` contains the Envoy configuration generated from the IR.

   In the snapshots directory, the current configuration will be stored in files with no digit suffix, and older configurations have increasing numbers. For example, `ir.json` is current, `ir-1.json` is the next oldest, then `ir-2.json`, etc.

3. If something is wrong with `snapshot` or `aconf`, there is an issue with your configuration.
If something is wrong with `ir` or `econf`, you should [open an issue on GitHub](https://github.com/emissary-ingress/emissary/issues/new/choose).

4. The actual input provided to Envoy is split into `$AMBASSADOR_CONFIG_BASE_DIR/bootstrap-ads.json` and `$AMBASSADOR_CONFIG_BASE_DIR/envoy/envoy.json`.

   - The `bootstrap-ads.json` file contains details about Envoy statistics, logging, authentication, etc.
   - The `envoy.json` file contains information about request routing.
   - You may generally find it simplest to just look at the `econf.json` files in the `snapshot` directory, which include both kinds of configuration.

diff --git a/docs/emissary/latest/topics/running/diagnostics.md b/docs/emissary/latest/topics/running/diagnostics.md new file mode 100644 index 000000000..205063048 --- /dev/null +++ b/docs/emissary/latest/topics/running/diagnostics.md

# Diagnostics

With $productName$ Diagnostics and Ambassador Cloud, you get a summary of the current status and Mappings of your cluster and its services, which gets displayed in the [Diagnostics Overview](https://www.getambassador.io/docs/cloud/latest/diagnostics-ui/view-diagnostics/).

## Troubleshooting

### Can't access $productName$ Diagnostics Overview?

Create an Ambassador `Module` if one does not already exist, and add the following config to enable diagnostics data.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    diagnostics:
      enabled: true
```

Next, ensure that the `AES_REPORT_DIAGNOSTICS_TO_CLOUD` environment variable is set to `"true"` on the Agent deployment to allow diagnostics information to be reported to the cloud.

   ```shell
   # Namespace and deployment name depend on your current install

   kubectl set env deployment/edge-stack-agent -n ambassador AES_REPORT_DIAGNOSTICS_TO_CLOUD="true"
   ```

Finally, set the `AES_DIAGNOSTICS_URL` environment variable to `"http://emissary-ingress-admin:8877/ambassador/v0/diag/?json=true"`:

   ```shell
   # Namespace, deployment name, and pod url/port depend on your current install

   kubectl set env deployment/edge-stack-agent -n ambassador AES_DIAGNOSTICS_URL="http://emissary-ingress-admin:8877/ambassador/v0/diag/?json=true"
   ```

After setting up `AES_DIAGNOSTICS_URL`, you can access diagnostics information by using the same URL value.

### Still can't see $productName$ Diagnostics?

Do a port forward on your $productName$ pod:

   ```shell
   # Namespace, deployment name, and pod url/port depend on your current install

   kubectl port-forward edge-stack-76f785767-n2l2v -n ambassador 8877
   ```

You will be able to access the diagnostics overview page by going to `http://localhost:8877/ambassador/v0/diag/`.

### $productName$ not routing your services as expected?

You will need to examine the logs and $productName$ pod status. See [Debugging](../debugging) for more information.
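
As a quick sanity check, you can fetch the same diagnostics JSON yourself. This sketch assumes the port-forward from the previous section is running; your deployment and pod names may differ:

```shell
# Fetch the diagnostics snapshot as JSON through the port-forward;
# a JSON document (rather than a 404) confirms diagnostics are enabled.
curl -s "http://localhost:8877/ambassador/v0/diag/?json=true" | head -c 300
```
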
diff --git a/docs/emissary/latest/topics/running/environment.md b/docs/emissary/latest/topics/running/environment.md new file mode 100644 index 000000000..265fceddc --- /dev/null +++ b/docs/emissary/latest/topics/running/environment.md @@ -0,0 +1,366 @@ +# $productName$ Environment variables + +Use the following variables for the environment of your $productName$ container: + +| Variable | Default value | Value type | +|----------------------------------------------------------------------------------------------------------- |-----------------------------------------------------|-------------------------------------------------------------------------------| +| [`AMBASSADOR_ID`](#ambassador_id) | `[ "default" ]` | List of strings | +| [`AES_LOG_LEVEL`](#aes_log_level) | `warn` | Log level | +| [`AGENT_CONFIG_RESOURCE_NAME`](#agent_config_resource_name) | `ambassador-agent-cloud-token` | String | +| [`AMBASSADOR_AMBEX_NO_RATELIMIT`](#ambassador_ambex_no_ratelimit) | `false` | Boolean: `true`=true, any other value=false | +| [`AMBASSADOR_AMBEX_SNAPSHOT_COUNT`](#ambassador_ambex_snapshot_count) | `30` | Integer | +| [`AMBASSADOR_CLUSTER_ID`](#ambassador_cluster_id) | Empty | String | +| [`AMBASSADOR_CONFIG_BASE_DIR`](#ambassador_config_base_dir) | `/ambassador` | String | +| [`AMBASSADOR_DISABLE_FEATURES`](#ambassador_disable_features) | Empty | Any | +| [`AMBASSADOR_DRAIN_TIME`](#ambassador_drain_time) | `600` | Integer | +| [`AMBASSADOR_ENVOY_API_VERSION`](#ambassador_envoy_api_version) | `V3` | String Enum; `V3` or `V2` | +| [`AMBASSADOR_GRPC_METRICS_SINK`](#ambassador_grpc_metrics_sink) | Empty | String (address:port) | +| [`AMBASSADOR_HEALTHCHECK_BIND_ADDRESS`](#ambassador_healthcheck_bind_address)| `0.0.0.0` | String | +| [`AMBASSADOR_HEALTHCHECK_BIND_PORT`](#ambassador_healthcheck_bind_port)| `8877` | Integer | +| [`AMBASSADOR_HEALTHCHECK_IP_FAMILY`](#ambassador_healthcheck_ip_family)| `ANY` | String Enum; `IPV4_ONLY` or `IPV6_ONLY`| +| [`AMBASSADOR_ISTIO_SECRET_DIR`](#ambassador_istio_secret_dir) | `/etc/istio-certs` | String | +| [`AMBASSADOR_JSON_LOGGING`](#ambassador_json_logging) | `false` | Boolean; non-empty=true, empty=false | +| [`AMBASSADOR_READY_PORT`](#ambassador_ready_port) | `8006` | Integer | +| [`AMBASSADOR_READY_LOG`](#ambassador_ready_log) | `false` | Boolean; [Go `strconv.ParseBool`] | +| [`AMBASSADOR_LABEL_SELECTOR`](#ambassador_label_selector) | Empty | String (label=value) | +| [`AMBASSADOR_NAMESPACE`](#ambassador_namespace) | `default` ([^1]) | Kubernetes namespace | +| [`AMBASSADOR_RECONFIG_MAX_DELAY`](#ambassador_reconfig_max_delay) | `1` | Integer | +| [`AMBASSADOR_SINGLE_NAMESPACE`](#ambassador_single_namespace) | Empty | Boolean; non-empty=true, empty=false | +| [`AMBASSADOR_SNAPSHOT_COUNT`](#ambassador_snapshot_count) | `4` | Integer | +| [`AMBASSADOR_VERIFY_SSL_FALSE`](#ambassador_verify_ssl_false) | `false` | Boolean; `true`=true, any other value=false | +| [`DD_ENTITY_ID`](#dd_entity_id) | Empty | String | +| [`DOGSTATSD`](#dogstatsd) | `false` | Boolean; Python `value.lower() == "true"` | +| [`SCOUT_DISABLE`](#scout_disable) | `false` | Boolean; `false`=false, any other value=true | +| [`STATSD_ENABLED`](#statsd_enabled) | `false` | Boolean; Python `value.lower() == "true"` | +| [`STATSD_PORT`](#statsd_port) | `8125` | Integer | +| [`STATSD_HOST`](#statsd_host) | `statsd-sink` | String | +| [`STATSD_FLUSH_INTERVAL`](#statsd_flush_interval) | `1` | Integer | +| [`_AMBASSADOR_ID`](#_ambassador_id) | Empty | String | +| 
[`_AMBASSADOR_TLS_SECRET_NAME`](#_ambassador_tls_secret_name) | Empty | String |
| [`_AMBASSADOR_TLS_SECRET_NAMESPACE`](#_ambassador_tls_secret_namespace) | Empty | String |
| [`_CONSUL_HOST`](#_consul_host) | Empty | String |
| [`_CONSUL_PORT`](#_consul_port) | Empty | Integer |
| [`AMBASSADOR_DISABLE_SNAPSHOT_SERVER`](#ambassador_disable_snapshot_server) | `false` | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_ENVOY_BASE_ID`](#ambassador_envoy_base_id) | `0` | Integer |

## Feature Flag Environment Variables

| Variable | Default value | Value type |
|----------|---------------|------------|
| [`AMBASSADOR_EDS_BYPASS`](#ambassador_eds_bypass) | `false` | Boolean; Python `value.lower() == "true"` |
| [`AMBASSADOR_FORCE_SECRET_VALIDATION`](#ambassador_force_secret_validation) | `false` | Boolean: `true`=true, any other value=false |
| [`AMBASSADOR_KNATIVE_SUPPORT`](#ambassador_knative_support) | `false` | Boolean; non-empty=true, empty=false |
| [`AMBASSADOR_UPDATE_MAPPING_STATUS`](#ambassador_update_mapping_status) | `false` | Boolean; `true`=true, any other value=false |
| [`ENVOY_CONCURRENCY`](#envoy_concurrency) | Empty | Integer |
| [`DISABLE_STRICT_LABEL_SELECTORS`](#disable_strict_label_selectors) | `false` | Boolean; `true`=true, any other value=false |

### `AMBASSADOR_ID`

$productName$ supports running multiple installs in the same cluster without restricting a given instance of $productName$ to a single namespace. The resources that are visible to an installation can be limited with the `AMBASSADOR_ID` environment variable.

[More information](../../running/running#ambassador_id)

### `AES_LOG_LEVEL`

Adjust the log level by setting the `AES_LOG_LEVEL` environment variable; from least verbose to most verbose, the valid values are `error`, `warn`/`warning`, `info`, `debug`, and `trace`. The default is `info`. Log level names are case-insensitive.

[More information](../../running/running#log-levels-and-debugging)

### `AGENT_CONFIG_RESOURCE_NAME`

Allows overriding the default config_map/secret that is used for extracting the CloudToken for connecting with Ambassador Cloud. It allows all components (and not only the Ambassador Agent) to authenticate requests to Ambassador Cloud. If unset, it will fall back to searching for a config map or secret with the name of `ambassador-agent-cloud-token`. Note: the secret will take precedence if both a secret and config map are set.

### `AMBASSADOR_AMBEX_NO_RATELIMIT`

Completely disables rate limiting of Envoy reconfiguration under memory pressure. This can help performance with the endpoint or Consul resolvers, but could make OOMkills more likely with large configurations. The default is `false`, meaning that the rate limiter is active.

[More information](../../../topics/concepts/rate-limiting-at-the-edge/)

### `AMBASSADOR_AMBEX_SNAPSHOT_COUNT`

Envoy-configuration snapshots get saved (as `ambex-#.json`) in `/ambassador/snapshots`. The number of snapshots is controlled by the `AMBASSADOR_AMBEX_SNAPSHOT_COUNT` environment variable. Set it to 0 to disable.

[More information](../../running/debugging#examine-pod-and-container-contents)

### `AMBASSADOR_CLUSTER_ID`

Each $productName$ installation generates a unique cluster ID based on the UID of its Kubernetes namespace and its $productName$ ID: the resulting cluster ID is a UUID which cannot be used to reveal the namespace name nor $productName$ ID itself. $productName$ needs RBAC permission to get namespaces for this purpose, as shown in the default YAML files provided by Datawire; if not granted this permission it will generate a UUID based only on the $productName$ ID. To disable cluster ID generation entirely, set the environment variable `AMBASSADOR_CLUSTER_ID` to a UUID that will be used for the cluster ID.

[More information](../../running/running#emissary-ingress-update-checks-scout)

### `AMBASSADOR_CONFIG_BASE_DIR`

Controls where $productName$ will store snapshots. By default, the latest configuration will be in `/ambassador/snapshots`. If you have overridden it, $productName$ saves configurations in `$AMBASSADOR_CONFIG_BASE_DIR/snapshots`.

[More information](../../running/debugging#examine-pod-and-container-contents)

### `AMBASSADOR_DISABLE_FEATURES`

To completely disable feature reporting, set the environment variable `AMBASSADOR_DISABLE_FEATURES` to any non-empty value.

[More information](../../running/running/#emissary-ingress-update-checks-scout)

### `AMBASSADOR_DRAIN_TIME`

At each reconfiguration, $productName$ keeps around the old version of its Envoy config for the duration of the configured drain time. The `AMBASSADOR_DRAIN_TIME` variable controls how much of a grace period $productName$ provides active clients when reconfiguration happens. Its unit is seconds and it defaults to 600 (10 minutes). This can impact memory usage because $productName$ needs to keep around old versions of its configuration for the duration of the drain time.

[More information](../../running/scaling#ambassador_drain_time)

### `AMBASSADOR_ENVOY_API_VERSION`

By default, $productName$ will configure Envoy using the [V3 Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api). In $productName$ 2.0, you were able to switch back to Envoy V2 by setting the `AMBASSADOR_ENVOY_API_VERSION` environment variable to "V2". $productName$ 3.0 has removed support for the V2 API and only the V3 API is used. While this variable cannot be set to another value in 3.0, it may be used when introducing new API versions that are not yet available in $productName$ such as V4.

### `AMBASSADOR_GRPC_METRICS_SINK`

Configures $productName$ (Envoy) to send metrics to the Agent, which are then relayed to the Cloud. If not set, Envoy is not configured to send metrics to the Agent. If set to an invalid `address:port`, an error message is logged. In either scenario, metrics are simply not sent to the Agent, which has no negative effect on general routing or $productName$ uptime.

### `AMBASSADOR_HEALTHCHECK_BIND_ADDRESS`

Configures $productName$ to bind its health check server to the provided address. If not set, $productName$ will bind to all addresses (`0.0.0.0`).

### `AMBASSADOR_HEALTHCHECK_BIND_PORT`

Configures $productName$ to bind its health check server to the provided port. If not set, $productName$ will listen on the admin port (`8877`).

### `AMBASSADOR_HEALTHCHECK_IP_FAMILY`

Allows the IP family used by the health check server to be overridden. By default, the health check server will listen for both IPV4 and IPV6 addresses.
In some clusters you may want to force `IPV4_ONLY` or `IPV6_ONLY`.

### `AMBASSADOR_ISTIO_SECRET_DIR`

$productName$ will read the mTLS certificates from `/etc/istio-certs` unless configured to use a different directory with the `AMBASSADOR_ISTIO_SECRET_DIR` environment variable, and will create a secret in that location named `istio-certs`.

[More information](../../../howtos/istio#configure-an-mtls-tlscontext)

### `AMBASSADOR_JSON_LOGGING`

When `AMBASSADOR_JSON_LOGGING` is set to `true`, JSON format will be used for most of the control plane logs. Some (but few) logs from `gunicorn` and the Kubernetes `client-go` package will still be in text-only format.

[More information](../../running/running#log-format)

### `AMBASSADOR_READY_PORT`

A dedicated Listener is created for non-blocking readiness checks. By default, the Listener will listen on the loopback address and port `8006`. `8006` is part of the reserved ports dedicated to $productName$. If there is a conflict, setting `AMBASSADOR_READY_PORT` to a valid port will configure Envoy to listen on that port.

### `AMBASSADOR_READY_LOG`

When `AMBASSADOR_READY_LOG` is set to `true`, the envoy `/ready` endpoint will be logged. It will honor the format provided in the `Module` resource, or default to the standard log line format.

### `AMBASSADOR_LABEL_SELECTOR`

Restricts $productName$'s configuration to only the labelled resources. For example, you could apply a `version-two: true` label to all resources that should be visible to $productName$, then set `AMBASSADOR_LABEL_SELECTOR=version-two=true` in its Deployment. Resources without the specified label will be ignored.

### `AMBASSADOR_NAMESPACE`

Controls namespace configuration for $productName$.

[More information](../../running/running#namespaces)

### `AMBASSADOR_RECONFIG_MAX_DELAY`

Controls up to how long $productName$ will wait to receive changes before doing an Envoy reconfiguration. The unit is in seconds and must be > 0.

### `AMBASSADOR_SINGLE_NAMESPACE`

When set, configures $productName$ to only work within a single namespace.

[More information](../../running/running#namespaces)

### `AMBASSADOR_SNAPSHOT_COUNT`

The number of snapshots that $productName$ should save.

### `AMBASSADOR_VERIFY_SSL_FALSE`

By default, $productName$ will verify the TLS certificates provided by the Kubernetes API. In some situations, the cluster may be deployed with self-signed certificates. In this case, set `AMBASSADOR_VERIFY_SSL_FALSE` to `true` to disable verifying the TLS certificates.

[More information](../../running/running#ambassador_verify_ssl_false)

### `DD_ENTITY_ID`

$productName$ supports setting the `dd.internal.entity_id` statistics tag using the `DD_ENTITY_ID` environment variable. If this value is set, statistics will be tagged with the value of the environment variable. Otherwise, this statistics tag will be omitted (the default).

[More information](../../running/statistics/envoy-statsd#using-datadog-dogstatsd-as-the-statsd-sink)

### `DOGSTATSD`

If you are a user of the [Datadog](https://docs.datadoghq.com/) monitoring system, pulling in the Envoy statistics from $productName$ is very easy. Because the DogStatsD protocol is slightly different than the normal StatsD protocol, in addition to setting $productName$'s `STATSD_ENABLED=true` environment variable, you also need to set the `DOGSTATSD=true` environment variable.

[More information](../../running/statistics/envoy-statsd#using-datadog-dogstatsd-as-the-statsd-sink)

### `SCOUT_DISABLE`

$productName$ integrates Scout, a service that periodically checks with Datawire servers to advise of available updates. Scout also sends anonymized usage data and the $productName$ version. This information is important to us as we prioritize test coverage, bug fixes, and feature development. Note that $productName$ will run regardless of the status of Scout.

We do not recommend you disable Scout, since we use this mechanism to notify users of new releases (including critical fixes and security issues). This check can be disabled by setting the environment variable `SCOUT_DISABLE` to `1` in your $productName$ deployment.

[More information](../../running/running#emissary-ingress-update-checks-scout)

### `STATSD_ENABLED`

If enabled, then $productName$ has Envoy expose metrics information via the ubiquitous and well-tested [StatsD](https://github.com/etsy/statsd) protocol. To enable this, you will simply need to set the environment variable `STATSD_ENABLED=true` in $productName$'s deployment YAML.

[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)

### `STATSD_HOST`

By default, $productName$ sends statistics to a Kubernetes service named `statsd-sink` on UDP port 8125 (the usual port of the StatsD protocol). You may instead tell $productName$ to send the statistics to a different StatsD server by setting the `STATSD_HOST` environment variable. This can be useful if you have an existing StatsD sink available in your cluster.

[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)

### `STATSD_PORT`

Allows for configuring StatsD on a port other than the default (8125).

[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)

### `STATSD_FLUSH_INTERVAL`

How often, in seconds, to submit statsd reports (if `STATSD_ENABLED`).

[More information](../../running/statistics/envoy-statsd#envoy-statistics-with-statsd)

### `_AMBASSADOR_ID`

Used with the Ambassador Consul connector. Sets the Ambassador ID so multiple instances of this integration can run per cluster when there are multiple $productNamePlural$. (Required if `AMBASSADOR_ID` is set in your $productName$ `Deployment`.)

[More information](../../../howtos/consul#environment-variables)

### `_AMBASSADOR_TLS_SECRET_NAME`

Used with the Ambassador Consul connector. Sets the name of the Kubernetes `v1.Secret` created by this program that contains the Consul-generated TLS certificate.

[More information](../../../howtos/consul#environment-variables)

### `_AMBASSADOR_TLS_SECRET_NAMESPACE`

Used with the Ambassador Consul connector. Sets the namespace of the Kubernetes `v1.Secret` created by this program.

[More information](../../../howtos/consul#environment-variables)

### `_CONSUL_HOST`

Used with the Ambassador Consul connector. Sets the IP or DNS name of the target Consul HTTP API server.

[More information](../../../howtos/consul#environment-variables)

### `_CONSUL_PORT`

Used with the Ambassador Consul connector. Sets the port number of the target Consul HTTP API server.
+
+[More information](../../../howtos/consul#environment-variables)
+
+### `AMBASSADOR_DISABLE_SNAPSHOT_SERVER`
+
+Disables the built-in snapshot server.
+
+### `AMBASSADOR_ENVOY_BASE_ID`
+
+Base ID of the Envoy process.
+
+### `AMBASSADOR_EDS_BYPASS`
+
+Bypasses EDS handling of endpoints and causes endpoints to be inserted into clusters manually. This can help resolve `503 UH`
+errors caused by certificate rotation when there is a delay between EDS and CDS updates.
+
+### `AMBASSADOR_FORCE_SECRET_VALIDATION`
+
+If you set the `AMBASSADOR_FORCE_SECRET_VALIDATION` environment variable, invalid Secrets will be rejected,
+and a `Host` or `TLSContext` resource attempting to use an invalid certificate will be disabled entirely.
+
+[More information](../../running/tls#certificates-and-secrets)
+
+### `AMBASSADOR_KNATIVE_SUPPORT`
+
+Enables support for Knative.
+
+### `AMBASSADOR_UPDATE_MAPPING_STATUS`
+
+If `AMBASSADOR_UPDATE_MAPPING_STATUS` is set to the string `true`, $productName$ will update the `status` of every `Mapping`
+CRD that it accepts for its configuration. This has no effect on the proper functioning of $productName$ itself, and can be a
+performance burden on installations with many `Mapping`s. It has no effect for `Mapping`s stored as annotations.
+
+The default is `false`. We recommend leaving `AMBASSADOR_UPDATE_MAPPING_STATUS` turned off unless required for external systems.
+
+[More information](../../running/running#ambassador_update_mapping_status)
+
+### `ENVOY_CONCURRENCY`
+
+Configures the optional [--concurrency](https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-concurrency) command line option when launching Envoy.
+This controls the number of worker threads used to serve requests and can be used to fine-tune system resource usage.
+
+### `DISABLE_STRICT_LABEL_SELECTORS`
+
+In $productName$ version `3.2`, a bug was fixed in how `Hosts` are associated with `Mappings` and how `Listeners` are associated with `Hosts`. The `mappingSelector`/`selector` fields in `Hosts` and `Listeners` were not
+properly being enforced in prior versions. If any single label from the selector was matched, then the resources would be associated with each other, instead
+of requiring all labels in the selector to be present. Additionally, if the `hostname` of a `Mapping` matched the `hostname` of a `Host`, then they would be associated
+regardless of the configuration of `mappingSelector`/`selector`.
+
+In version `3.2` this bug was fixed, and resources that configure a selector will only be associated if **all** labels required by the selector are present.
+This brings the `mappingSelector` and `selector` fields in line with how label selectors are used throughout Kubernetes. To avoid unexpected behavior after the upgrade,
+add all labels that are configured in any `mappingSelector`/`selector` to the `Mappings` you want to associate with the `Host`, or to the `Hosts` you want to be associated with the `Listener`. You can opt out of this fix and return to the old
+association behavior by setting the environment variable `DISABLE_STRICT_LABEL_SELECTORS` to `"true"` (default: `"false"`). A future version of
+$productName$ may remove the ability to opt out of this bugfix.
+
+> **Note:** The `mappingSelector` field is only configurable on `v3alpha1` CRDs. In the `v2` CRDs the equivalent field is `selector`.
+Either `selector` or `mappingSelector` may be configured in the `v3alpha1` CRDs, but `selector` has been deprecated in favour of `mappingSelector`.
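+
+If you do need the old behavior temporarily, a minimal sketch of opting out is:
+
+```yaml
+# Hypothetical excerpt from $productName$'s Deployment manifest: opts out of
+# the strict label-selector bugfix described above.
+env:
+  - name: DISABLE_STRICT_LABEL_SELECTORS
+    value: "true"
+```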
+
+See the [Host documentation](../../running/host-crd#controlling-association-with-mappings) for more information about `Host` / `Mapping` association.
+
+## Port assignments
+
+$productName$ uses the following ports to listen for HTTP/HTTPS traffic automatically via TCP:
+
+| Port | Process | Function |
+|------|----------|---------------------------------------------------------|
+| 8001 | envoy | Internal stats, logging, etc.; not exposed outside pod |
+| 8002 | watt | Internal watt snapshot access; not exposed outside pod |
+| 8003 | ambex | Internal ambex snapshot access; not exposed outside pod |
+| 8004 | diagd | Internal `diagd` access; not exposed outside pod |
+| 8005 | snapshot | Exposes a scrubbed $productName$ snapshot outside of the pod |
+| 8080 | envoy | Default HTTP service port |
+| 8443 | envoy | Default HTTPS service port |
+| 8877 | diagd | Direct access to diagnostics UI; provided by `busyambassador entrypoint` |
+
+[^1]: This may change in a future release to reflect the Pod's
+    namespace if deployed to a namespace other than `default`.
+    https://github.com/emissary-ingress/emissary/issues/1583
+
+[Go `net.Dial`]: https://golang.org/pkg/net/#Dial
+[Go `strconv.ParseBool`]: https://golang.org/pkg/strconv/#ParseBool
+[Go `time.ParseDuration`]: https://pkg.go.dev/time#ParseDuration
+[Redis 6 ACL]: https://redis.io/topics/acl
diff --git a/docs/emissary/latest/topics/running/gzip.md b/docs/emissary/latest/topics/running/gzip.md
new file mode 100644
index 000000000..e3005c836
--- /dev/null
+++ b/docs/emissary/latest/topics/running/gzip.md
@@ -0,0 +1,55 @@
+# Gzip compression
+
+Gzip enables $productName$ to compress upstream data upon client request. Compression is useful in situations where large payloads need to be transmitted without compromising the response time. Compression can also save on bandwidth costs at the expense of increased computing costs.
+
+## How does it work?
+
+When the gzip filter is enabled, request and response headers are inspected to determine whether or not the content should be compressed. If so, and the request and response headers allow, the content is compressed and then sent to the client with the appropriate headers. The filter uses the zlib module, which provides `Deflate` compression and decompression.
+
+For more details see [Envoy - Gzip](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/compressor_filter).
+
+## The `gzip` API
+
+- `memory_level`: A value from 1 to 9 that controls the amount of internal memory used by zlib. Higher values use more memory, but are faster and produce better compression results. The default value is 5.
+- `min_content_length`: A minimum response length, in bytes, which will trigger compression. The default value is 30.
+- `compression_level`: A value used for selecting the zlib compression level. This setting will affect the speed and amount of compression applied to the content. “BEST” provides higher compression at the cost of higher latency, “SPEED” provides lower compression with minimum impact on response time. “DEFAULT” provides an optimal result between speed and compression. This field will be set to “DEFAULT” if not specified.
+- `compression_strategy`: A value used for selecting the zlib compression strategy, which is directly related to the characteristics of the content. Most of the time “DEFAULT” will be the best choice, though there are situations in which changing this parameter might produce better results. For example, run-length encoding (RLE) is typically used when the content is known for having sequences in which the same data occurs many consecutive times. For more information about each strategy, please refer to the zlib manual.
+- `window_bits`: A value from 9 to 15 that represents the base-2 logarithm of the compressor’s window size. A larger window results in better compression at the expense of memory usage. The default is 12, which produces a 4096-byte window. For more details about this parameter, please refer to zlib manual > deflateInit2.
+- `content_type`: A set of strings that specify which mime-types yield compression; e.g., application/json, text/html, etc. When this field is not defined, compression will be applied to the following mime-types: “application/javascript”, “application/json”, “application/xhtml+xml”, “image/svg+xml”, “text/css”, “text/html”, “text/plain”, “text/xml”.
+- `disable_on_etag_header`: A Boolean; if true, disables compression when the response contains an ETag header. When it is false, the filter will preserve weak ETags and remove the ones that require strong validation.
+- `remove_accept_encoding_header`: A Boolean; if true, removes accept-encoding from the request headers before dispatching it to the upstream so that responses do not get compressed before reaching the filter.
+
+## Example
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+spec:
+  config:
+    gzip:
+      memory_level: 2
+      min_content_length: 32
+      compression_level: BEST
+      compression_strategy: RLE
+      content_type:
+        - application/javascript
+        - application/json
+        - text/plain
+      disable_on_etag_header: false
+      remove_accept_encoding_header: false
+```
+
+Minimum configuration:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+spec:
+  config:
+    gzip:
+      enabled: true
+```
diff --git a/docs/emissary/latest/topics/running/host-crd.md b/docs/emissary/latest/topics/running/host-crd.md
new file mode 100644
index 000000000..cd187ebe6
--- /dev/null
+++ b/docs/emissary/latest/topics/running/host-crd.md
@@ -0,0 +1,279 @@
+import Alert from '@material-ui/lab/Alert';
+
+# The **Host** CRD
+
+The custom `Host` resource defines how $productName$ will be
+visible to the outside world. It collects all the following information in a
+single configuration resource:
+
+* The hostname by which $productName$ will be reachable
+* How $productName$ should handle TLS certificates
+* How $productName$ should handle secure and insecure requests
+* Which `Mappings` should be associated with this `Host`
+
+  Remember that Listener resources are required for a functioning
+  $productName$ installation!
+ Learn more about Listener. +
+
+  Remember that $productName$ does not make sure that a wildcard Host exists! If the
+  wildcard behavior is needed, a Host with a hostname of "*"
+  must be defined by the user.
+
+A minimal `Host` resource, assuming no TLS configuration, would be:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: minimal-host
+spec:
+  hostname: host.example.com
+```
+
+This `Host` tells $productName$ to expect to be reached at `host.example.com`,
+with no TLS termination, and only associating with `Mapping`s that also set a
+`hostname` that matches `host.example.com`.
+
+Remember that a Listener will also be required for this example to
+be functional. Many examples of setting up `Host` and `Listener` are available
+in the [Configuring $productName$ Communications](../../../howtos/configure-communications)
+document.
+
+## Setting the `hostname`
+
+The `hostname` element tells $productName$ which hostnames to expect. `hostname` is a DNS glob,
+so all of the following are valid:
+
+- `host.example.com`
+- `*.example.com`
+- `host.example.*`
+
+The following are _not_ valid:
+
+- `host.*.com` -- Envoy supports only prefix and suffix globs
+- `*host.example.com` -- the wildcard must be its own element in the DNS name
+
+In all cases, the `hostname` is used to match the `:authority` header for HTTP routing.
+When TLS termination is active, the `hostname` is also used for SNI matching.
+
+## Controlling Association with `Mapping`s
+
+A `Mapping` will not be associated with a `Host` unless at least one of the following is true:
+
+- The `Mapping` specifies a `hostname` attribute that matches the `Host` in question.
+- The `Host` specifies a `mappingSelector` that matches the `Mapping`'s Kubernetes `label`s.
+
+> **Note:** The `mappingSelector` field is only configurable on `v3alpha1` CRDs. In the `v2` CRDs the equivalent field is `selector`.
+Either `selector` or `mappingSelector` may be configured in the `v3alpha1` CRDs, but `selector` has been deprecated in favour of `mappingSelector`.
+
+If neither of the above is true, the `Mapping` will not be associated with the `Host` in
+question. This is intended to help manage memory consumption with large numbers of `Host`s and large
+numbers of `Mapping`s.
+
+If the `Host` specifies `mappingSelector` _and_ the `Mapping` specifies `hostname`, both must match
+for the association to happen.
+
+The `mappingSelector` is a Kubernetes [label selector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#labelselector-v1-meta). For a `Mapping` to be associated with a `Host` that uses `mappingSelector`, **all** labels
+required by the `mappingSelector` must be present on the `Mapping`.
+A `Mapping` may have additional labels other than those required by the `mappingSelector`, so long as the required labels are present.
+
+**In 2.0, only `matchLabels` is supported**; for example:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: minimal-host
+spec:
+  hostname: host.example.com
+  mappingSelector:
+    matchLabels:
+      examplehost: host
+```
+
+The above `Host` will associate with these `Mapping`s:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-label-match
+  labels:
+    examplehost: host # This matches the Host's mappingSelector.
+spec:
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-hostname-match
+spec:
+  hostname: host.example.com # This is an exact match of the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-hostname-glob-match
+spec:
+  hostname: "*.example.com" # This glob matches the Host's hostname too.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-both-matches
+  labels:
+    examplehost: host # This matches the Host's mappingSelector.
+spec:
+  hostname: "*.example.com" # This glob matches the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+```
+
+It will _not_ associate with any of these:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-wrong-label
+  labels:
+    examplehost: staging # This doesn't match the Host's mappingSelector.
+spec:
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-wrong-hostname
+spec:
+  hostname: "bad.example.com" # This doesn't match the Host's hostname.
+  prefix: /httpbin/
+  service: http://httpbin.org
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: skip-mapping-still-wrong
+  labels:
+    examplehost: staging # This doesn't match the Host's mappingSelector,
+spec:                    # and if the Host specifies mappingSelector AND the
+  hostname: host.example.com # Mapping specifies hostname, BOTH must match. So
+  prefix: /httpbin/      # the matching hostname isn't good enough.
+  service: http://httpbin.org
+```
+
+Future versions of $productName$ will support `matchExpressions` as well.
+
+> **Note:** In $productName$ version `3.2`, a bug with how `Hosts` are associated with `Mappings` was fixed. The `mappingSelector` field in `Hosts` was not
+properly being enforced in prior versions. If any single label from the selector was matched, then the `Host` would be associated with the `Mapping`, instead
+of requiring all labels in the selector to be present. Additionally, if the `hostname` of the `Mapping` matched the `hostname` of the `Host`, then they would be associated
+regardless of the configuration of `mappingSelector`.
+In version `3.2` this bug was fixed, and a `Host` will only be associated with a `Mapping` if **all** labels required by the selector are present.
+This brings the `mappingSelector` field in line with how label selectors are used throughout Kubernetes. To avoid unexpected behavior after the upgrade,
+add all labels that `Hosts` have in their `mappingSelector` to the `Mappings` you want to associate with the `Host`. You can opt out of this fix and return to the old
+`Mapping`/`Host` association behavior by setting the environment variable `DISABLE_STRICT_LABEL_SELECTORS` to `"true"` (default: `"false"`). A future version of
+$productName$ may remove the ability to opt out of this bugfix.
+
+## Secure and insecure requests
+
+A **secure** request arrives via HTTPS; an **insecure** request does not. By default, secure requests will be routed and insecure requests will be redirected (using an HTTP 301 response) to HTTPS. The behavior of insecure requests can be overridden using the `requestPolicy` element of a `Host`:
+
+```yaml
+requestPolicy:
+  insecure:
+    action: insecure-action
+    additionalPort: insecure-port
+```
+
+The `insecure-action` can be one of:
+
+* `Redirect` (the default): redirect to HTTPS
+* `Route`: go ahead and route as normal; this will allow handling HTTP requests normally
+* `Reject`: reject the request with a 400 response
+
+```yaml
+requestPolicy:
+  insecure:
+    additionalPort: -1 # This is how to disable the default redirection from 8080.
+```
+
+Some special cases to be aware of here:
+
+* **Case matters in the actions:** you must use e.g. `Reject`, not `reject`.
+* The `X-Forwarded-Proto` header is honored when determining whether a request is secure or insecure. For more information, see "Load Balancers, the `Host` Resource, and `X-Forwarded-Proto`" below.
+* ACME challenges with prefix `/.well-known/acme-challenge/` are always forced to be considered insecure, since they are not supposed to arrive over HTTPS.
+* $AESproductName$ provides native handling of ACME challenges. If you are using this support, $AESproductName$ will automatically arrange for insecure ACME challenges to be handled correctly. If you are handling ACME yourself - as you must when running $OSSproductName$ - you will need to supply appropriate `Host` resources and Mappings to correctly direct ACME challenges to your ACME challenge handler.
+
+## TLS settings
+
+The `Host` is responsible for high-level TLS configuration in $productName$. There are
+several settings covering TLS:
+
+### `tlsSecret` enables TLS termination
+
+`tlsSecret` specifies a Kubernetes `Secret` holding the certificate to use, and is **required** for any TLS termination to occur. No matter what other TLS
+configuration is present, TLS termination will not occur if `tlsSecret` is not specified.
+
+The following `Host` will configure $productName$ to read a `Secret` named
+`tls-cert` for a certificate to use when terminating TLS.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: none
+  tlsSecret:
+    name: tls-cert
+```
+
+### `tlsContext` links to a `TLSContext` for additional configuration
+
+`tlsContext` specifies a [`TLSContext`](#) to use for additional TLS information. Note that you **must** still
+define `tlsSecret` for TLS termination to happen. It is an error to supply both `tlsContext` and `tls`.
+
+See the [TLS discussion](../tls) for more details.
+
+### `tls` allows manually providing additional configuration
+
+`tls` allows specifying most of the things a `TLSContext` can, inline in the `Host`. Note that you **must** still
+define `tlsSecret` for TLS termination to happen. It is an error to supply both `tlsContext` and `tls`.
+
+See the [TLS discussion](../tls) for more details.
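+
+Putting these settings together: the following is a hedged sketch (the resource name is illustrative) of a `Host` that terminates TLS with `tlsSecret` and routes, rather than redirects, insecure requests:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: routing-host   # illustrative name
+spec:
+  hostname: host.example.com
+  tlsSecret:
+    name: tls-cert     # assumed to exist in the same namespace
+  requestPolicy:
+    insecure:
+      action: Route    # route insecure requests instead of redirecting them
+```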
+
+## Load balancers, the `Host` resource, and `X-Forwarded-Proto`
+
+In a typical installation, $productName$ runs behind a load balancer. The
+configuration of the load balancer can affect how $productName$ sees requests
+arriving from the outside world, which can in turn affect whether $productName$
+considers the request secure or insecure. As such:
+
+- **We recommend layer 4 load balancers** unless your workload includes
+  long-lived connections with multiple requests arriving over the same
+  connection. For example, a workload with many requests carried over a small
+  number of long-lived gRPC connections.
+- **$productName$ fully supports TLS termination at the load balancer** with a single exception, listed below.
+- If you are using a layer 7 load balancer, **it is critical that the system be configured correctly**:
+  - The load balancer must correctly handle `X-Forwarded-For` and `X-Forwarded-Proto`.
+  - The `l7Depth` element in the [`Listener` CRD](../../running/listener) must be set to the number of layer 7 load balancers the request passes through to reach $productName$ (in the typical case, where the client speaks to the load balancer, which then speaks to $productName$, you would set `l7Depth` to 1). If `l7Depth` remains at its default of 0, the system might route correctly, but upstream services will see the load balancer's IP address instead of the actual client's IP address.
+
+It's important to realize that Envoy manages the `X-Forwarded-Proto` header such that it **always** reflects the most trustworthy information Envoy has about whether the request arrived encrypted or unencrypted. If no `X-Forwarded-Proto` is received from downstream, **or if it is considered untrustworthy**, Envoy will supply an `X-Forwarded-Proto` that reflects the protocol used for the connection to Envoy itself. The `l7Depth` element is also used when determining trust for `X-Forwarded-For`, and it is therefore important to set it correctly. Its default of 0 should always be correct when $productName$ is behind only layer 4 load balancers; it should need to be changed **only** when layer 7 load balancers are involved.
+
+### CRD specification
+
+The `Host` CRD is formally described by its protobuf specification. Developers who need access to the specification can find it [here](https://github.com/emissary-ingress/emissary/blob/v2.1.0/api/getambassador.io/v2/Host.proto).
diff --git a/docs/emissary/latest/topics/running/http3.md b/docs/emissary/latest/topics/running/http3.md
new file mode 100644
index 000000000..9aeb6cac8
--- /dev/null
+++ b/docs/emissary/latest/topics/running/http3.md
@@ -0,0 +1,149 @@
+---
+title: "HTTP/3 configuration in $productName$"
+description: "Configure HTTP/3 support with $productName$. Create services to handle UDP and TCP traffic and set up HTTP/3 with your cloud service provider."
+---
+
+# HTTP/3 in $productName$
+
+HTTP/3 is the third version of the Hypertext Transfer Protocol (HTTP). It is built on the [QUIC](https://www.chromium.org/quic/) network protocol rather than Transmission Control Protocol (TCP) like previous versions.
+
+## The changes and challenges of HTTP/3
+
+Since the QUIC network protocol is built on UDP, most clients will require $productName$ to advertise its support for HTTP/3 using the `alt-svc` response header. This header is added to the response of the HTTP/2 and HTTP/1.1 connections. When the client sees the `alt-svc` header, it can choose to upgrade to HTTP/3 and connect to $productName$ using the QUIC protocol.
+
+QUIC requires Transport Layer Security (TLS) version 1.3 to communicate. If a client does not support TLS v1.3, $productName$ will fall back to HTTP/2 or HTTP/1.1, both of which support other TLS versions. Due to this restriction, some clients also require valid certificates and will not upgrade to HTTP/3 traffic with self-signed certificates.
+
+Because HTTP/3 adoption is still growing and changing, the $productName$ team will continue to update this documentation as features change and mature.
+
+## Setting up HTTP/3 with $productName$
+
+To configure $productName$ for HTTP/3 you need to do the following:
+
+1. Configure `Listener` resources.
+2. Configure a `Host`.
+3. Have a valid certificate.
+4. Set up an external load balancer.
+
+### Configuring the Listener resources
+
+To make $productName$ listen for HTTP/3 connections over the QUIC network protocol, you need to configure a `Listener` with `TLS`, `HTTP`, and `UDP` configured within `protocolStack`.
+
+The protocolStack elements need to be entered in the specific order of TLS, HTTP, UDP.
+
+The `Listener` configured for HTTP/3 can be bound to the same address and port (`0.0.0.0:8443`) as the `Listener` that supports HTTP/2 and HTTP/1.1. This is not required, but it allows $productName$ to inject the default `alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400` header into the responses returned over the TCP connection with no additional configuration needed. **Most clients such as browsers require the `alt-svc` header to upgrade to HTTP/3**.
+
+The current default of alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400 means that the external load balancer must be configured to accept traffic on port :443 for the client to upgrade the request.
+
+```yaml
+# This is a standard Listener that leverages TCP to serve HTTP/2 and HTTP/1.1 traffic.
+# It is bound to the same address and port (0.0.0.0:8443) as the UDP listener.
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: $productDeploymentName$-https-listener
+  namespace: $productNamespace$
+spec:
+  port: 8443
+  protocol: HTTPS
+  securityModel: XFP
+  hostBinding:
+    namespace:
+      from: ALL
+---
+# This is a Listener that leverages UDP and HTTP to serve HTTP/3 traffic.
+# NOTE: Raw UDP traffic is not supported. UDP and HTTP must be used together.
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: $productDeploymentName$-https-listener-udp
+  namespace: $productNamespace$
+spec:
+  port: 8443
+  # Order is important here. HTTP is required.
+  protocolStack:
+    - TLS
+    - HTTP
+    - UDP
+  securityModel: XFP
+  hostBinding:
+    namespace:
+      from: ALL
+```
+
+### Configuring the Host resource
+
+Because QUIC requires TLS, the certificate needs to be valid so the client can upgrade a connection to HTTP/3. See the [Host documentation](./host-crd.md) for more information on how to configure TLS for a `Host`.
+
+### Certificate verification
+
+Clients can only upgrade to an HTTP/3 connection with a valid certificate. If the client won’t upgrade to HTTP/3, verify that you have a valid TLS certificate and that your client can speak **TLS v1.3**. Your `Host` resource should be configured similar to the following:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: my-domain-host
+spec:
+  hostname: your-hostname
+  # acme isn't required but just shown as an example of how to manage a valid TLS cert
+  acmeProvider:
+    email: your-email@example.com
+    authority: https://acme-v02.api.letsencrypt.org/directory
+  tls:
+    # QUIC requires TLS v1.3. Verify your client supports it.
+    min_tls_version: v1.3
+    # Either protocol can be upgraded, but http/2 is recommended.
+    alpn_protocols: h2,http/1.1
+```
+
+### External load balancers
+
+The two most common service types to expose traffic outside of a Kubernetes cluster are:
+
+- `LoadBalancer`: A load balancer controller generates and manages the cloud provider-specific external load balancer.
+- `NodePort`: The platform administrator has to manually set up things like the external load balancer, firewall rules, and health checks.
+
+#### LoadBalancer setup
+
+The ideal setup would be to configure a single service of type `LoadBalancer`, but this comes with some current restrictions:
+- You need version 1.24 or later of Kubernetes with the [`MixedProtocolLBService` feature enabled](https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types).
+- Your cloud service provider needs to support the creation of an external load balancer with mixed protocol types (TCP/UDP), port reuse, and port forwarding. Support for Kubernetes feature flags may vary between cloud service providers. Refer to your provider’s documentation to see if they support this scenario.
+
+An example `LoadBalancer` configuration that fits the criteria listed above:
+
+```yaml
+
+# note: extra fields such as labels and selectors removed for clarity
+apiVersion: v1
+kind: Service
+metadata:
+  name: $productDeploymentName$
+  namespace: $productNamespace$
+spec:
+  ports:
+    - name: http
+      port: 80
+      targetPort: 8080
+      protocol: TCP
+    - name: https
+      port: 443
+      targetPort: 8443
+      protocol: TCP
+    - name: http3
+      port: 443
+      targetPort: 8443
+      protocol: UDP
+  type: LoadBalancer
+```
+
+## Cloud service provider setup
+
+Once you've completed the steps above, you need to configure HTTP/3 support through your cloud service provider. The configuration processes for each provider can be found here:
+
+- HTTP/3 setup for [Amazon Elastic Kubernetes Service (EKS)](../../../howtos/http3-eks)
+- HTTP/3 setup for [Azure Kubernetes Service (AKS)](../../../howtos/http3-aks)
+- HTTP/3 setup for [Google Kubernetes Engine (GKE)](../../../howtos/http3-gke)
diff --git a/docs/emissary/latest/topics/running/index.md b/docs/emissary/latest/topics/running/index.md
new file mode 100644
index 000000000..6eb7af94a
--- /dev/null
+++ b/docs/emissary/latest/topics/running/index.md
@@ -0,0 +1,16 @@
+# Running $productName$ in production
+
+This section of the documentation is designed for operators and site reliability engineers who are managing the deployment of $productName$. Learn more below:
+
+* *Global Configuration:* The [Ambassador module](ambassador) is used to set system-wide configuration.
+* *Exposing $productName$ to the Internet:* The [`Listener` CRD](listener) defines which ports are exposed, including their protocols and security models. The [`Host` CRD](host-crd) defines how $productName$ manages TLS, domains, and such.
+* *Load Balancing:* $productName$ supports a number of different [load balancing strategies](load-balancer) as well as different ways to configure [service discovery](resolvers).
+* [Gzip Compression](gzip)
+* *Deploying $productName$:* On [Amazon Web Services](ambassador-with-aws) | [Google Cloud](ambassador-with-gke) | [general security and operational notes](running), including running multiple $productNamePlural$ on a cluster
+* *TLS/SSL:* [Simultaneously Routing HTTP and HTTPS](tls/cleartext-redirection#cleartext-routing) | [HTTP -> HTTPS Redirection](tls/cleartext-redirection#http-https-redirection) | [Mutual TLS](tls/mtls) | [TLS origination](tls/origination)
+* *Statistics and Monitoring:* [Integrating with Prometheus, DataDog, and other monitoring systems](statistics)
+* *Extending $productName$:* $productName$ can be extended with custom plug-ins that connect via HTTP/gRPC interfaces.
[Custom Authentication](services/auth-service) | [The External Auth protocol](services/ext-authz) | [Custom Logging](services/log-service) | [Rate Limiting](services/rate-limit-service) | [Distributed Tracing](services/tracing-service) +* *Troubleshooting:* [Diagnostics](diagnostics) | [Debugging](debugging) +* *Scaling $productName$:* [Scaling $productName$](scaling) +* *Ingress:* $productName$ can function as an [Ingress Controller](ingress-controller) +* *Error Response Overrides:* $productName$ can override 4xx and 5xx responses with [custom response bodies](custom-error-responses) diff --git a/docs/emissary/latest/topics/running/ingress-controller.md b/docs/emissary/latest/topics/running/ingress-controller.md new file mode 100644 index 000000000..9b7afb824 --- /dev/null +++ b/docs/emissary/latest/topics/running/ingress-controller.md @@ -0,0 +1,325 @@ +import Alert from '@material-ui/lab/Alert'; + +# Ingress controller + +
+

Contents

+ +- [When and how to use the Ingress resource](#when-and-how-to-use-the-ingress-resource) +- [What is required to use the Ingress resource?](#what-is-required-to-use-the-ingress-resource) +- [When to use an Ingress instead of annotations or CRDs](#when-to-use-an-ingress-instead-of-annotations-or-crds) +- [Ingress support](#ingress-support) +- [Examples of Ingress configs vs Mapping configs](#examples-of-ingress-configs-vs-mapping-configs) +- [Ingress routes and mappings](#ingress-routes-and-mappings) +- [The Minimal Ingress](#the-minimal-ingress) +- [Name based virtual hosting with an Ambassador ID](#name-based-virtual-hosting-with-an-ambassador-id) +- [TLS Termination](#tls-termination) + +
+
+An Ingress resource is a popular way to expose Kubernetes services to the Internet. In order to use Ingress resources, you need to install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/). $productName$ can function as a fully-fledged Ingress controller, making it easy to work with other Ingress-oriented tools within the Kubernetes ecosystem.
+
+## When and how to use the Ingress resource
+
+If you're new to $productName$ and to Kubernetes, we'd recommend you start with our [quickstart](../../../tutorials/getting-started/) instead of this Ingress guide. If you're a power user and need to integrate with other software that leverages the Ingress resource, read on. The Ingress specification is very basic and does not support many of the features of $productName$, so you'll be using both the Ingress resource and $productName$'s Mapping resource to manage your Kubernetes services.
+
+### What is required to use the Ingress resource?
+
+- Know what version of Kubernetes you are using.
+
+  - In Kubernetes 1.13 and below, the Ingress was only included in the `extensions` API.
+
+  - Starting in Kubernetes 1.14, the Ingress was added to the new `networking.k8s.io` API.
+
+  - Kubernetes 1.18 introduced the IngressClass resource to the existing `networking.k8s.io/v1beta1` API.
+
+  If you are using 1.14 and above, it is recommended to use apiVersion: networking.k8s.io/v1beta1 when defining an Ingress. Since both are still supported in all 1.14+ versions of Kubernetes, this document will use extensions/v1beta1 for compatibility reasons.
+  If you are using 1.18 and above, sample usage of the IngressClass resource and pathType field are available on our blog.
+
+- You will need RBAC permissions to create Ingress resources in either
+  the `extensions` `apiGroup` (present in all supported versions of
+  Kubernetes) or the `networking.k8s.io` `apiGroup` (introduced in
+  Kubernetes 1.14).
+
+- $productName$ will need RBAC permissions to get, list, watch, and update Ingress resources.
+
+  You can see this in the [`aes-crds.yaml`](https://app.getambassador.io/yaml/ambassador-docs/latest/aes.yaml)
+  file, but this is the critical rule to add to $productName$'s `Role` or `ClusterRole`:
+
+  ```yaml
+  - apiGroups: ['extensions', 'networking.k8s.io']
+    resources: ['ingresses', 'ingressclasses']
+    verbs: ['get', 'list', 'watch']
+  - apiGroups: ['extensions', 'networking.k8s.io']
+    resources: ['ingresses/status']
+    verbs: ['update']
+  ```
+
+  This is included by default in all $productName$ installations.
+
+- You must create your Ingress resource with the correct `ingress.class`.
+
+  $productName$ will automatically read Ingress resources with the annotation
+  `kubernetes.io/ingress.class: ambassador`.
+
+- You may need to set your Ingress resource's `ambassador-id`.
+
+  If you are [using `ambassador-id` on your Module](../running/#ambassador_id), you'll need to add the `getambassador.io/ambassador-id`
+  annotation to your Ingress. See the [examples below](#name-based-virtual-hosting-with-an-ambassador-id).
+
+- You must create a Service resource with the correct `app.kubernetes.io/component` label.
+
+  $productName$ will automatically load balance Ingress resources using the endpoint exposed
+  from the Service with the label `app.kubernetes.io/component: ambassador-service`.
+
+  ```yaml
+  ---
+  kind: Service
+  apiVersion: v1
+  metadata:
+    name: ingress-ambassador
+    labels:
+      app.kubernetes.io/component: ambassador-service
+  spec:
+    externalTrafficPolicy: Local
+    type: LoadBalancer
+    selector:
+      service: ambassador
+    ports:
+      - name: http
+        port: 80
+        targetPort: http
+      - name: https
+        port: 443
+        targetPort: https
+  ```
+
+### When to use an Ingress instead of annotations or CRDs
+
+We recommend that $productName$ be configured using CRDs. The Ingress resource is available to users who need it for integration with other ecosystem tools, or who feel that it more closely matches their workflows. However, it is important to recognize that the Ingress resource is rather more limited than the $productName$ Mapping is (for example, the Ingress spec has no support for rewriting or for TLS origination). **When in doubt, use CRDs.**
+
+## Ingress support
+
+$productName$ supports basic core functionality of the Ingress resource, as defined by the [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/) itself:
+
+- Basic routing is supported, including the `route` specification and the default backend functionality. It's particularly easy to use a minimal Ingress to expose the $productName$ diagnostic UI.
+- [TLS termination](../tls/) is supported. You can use multiple Ingress resources for SNI.
+- Using the Ingress resource in concert with $productName$ CRDs or annotations is supported. This includes $productName$ annotations on the Ingress resource itself.
+
+Aside from the following exceptions, $productName$ does **not** extend the basic Ingress specification:
+
+- The `getambassador.io/ambassador-id` annotation allows you to set [the Ambassador ID](../running/#ambassador_id) for the Ingress itself.
+
+- The `getambassador.io/config` annotation can be provided on the Ingress resource, just as on a Service.
+
+Note that if you need to set `getambassador.io/ambassador-id` on the Ingress, you will also need to set `ambassador-id` on resources within the annotation.
+
+## Examples of Ingress configs vs Mapping configs
+
+### Ingress routes and Mappings
+
+$productName$ actually creates Mapping objects from the Ingress route rules. These Mapping objects interact with Mappings defined in CRDs **exactly** as they would if the Ingress route rules had been specified with CRDs originally.
+
+For example, this Ingress resource routes traffic to `/foo/` to `service1`:
+
+```yaml
+---
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  annotations:
+    kubernetes.io/ingress.class: ambassador
+  name: test-ingress
+spec:
+  rules:
+  - http:
+      paths:
+      - path: /foo/
+        backend:
+          serviceName: service1
+          servicePort: 80
+```
+
+This is the equivalent configuration using a Mapping instead:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: test-ingress-0-0
+spec:
+  hostname: '*'
+  prefix: /foo/
+  service: service1:80
+```
+
+This YAML will set up $productName$ to do canary routing where 50% of the traffic will go to `service1` and 50% will go to `service2`.
+ +```yaml +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + kubernetes.io/ingress.class: ambassador + name: test-ingress +spec: + rules: + - http: + paths: + - path: /foo/ + backend: + serviceName: service1 + servicePort: 80 +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: my-mapping +spec: + hostname: '*' + prefix: /foo/ + service: service2 +``` + +### The minimal Ingress + +An Ingress resource must provide at least some routes or a [default backend](https://kubernetes.io/docs/concepts/services-networking/ingress/#default-backend). The default backend provides for a simple way to direct all traffic to some upstream service: + +```yaml +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + kubernetes.io/ingress.class: ambassador + name: test-ingress +spec: + backend: + serviceName: exampleservice + servicePort: 8080 +``` + +This is the equivalent configuration using a Mapping instead: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: test-ingress +spec: + hostname: '*' + prefix: / + service: exampleservice:8080 +``` + +### Name based virtual hosting with an Ambassador ID + +This Ingress resource will result in all requests to `foo.bar.com` going to `service1`, and requests to `bar.foo.com` going to `service2`: + +```yaml +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + kubernetes.io/ingress.class: ambassador + getambassador.io/ambassador-id: externalid + name: name-virtual-host-ingress +spec: + rules: + - host: foo.bar.com + http: + paths: + - backend: + serviceName: service1 + servicePort: 80 + - host: bar.foo.com + http: + paths: + - backend: + serviceName: service2 + servicePort: 80 +``` + +This is the equivalent configuration using a Mapping instead: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: host-foo-mapping +spec: + ambassador_id: ['externalid'] + prefix: / + host: foo.bar.com + service: service1 +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: host-bar-mapping +spec: + ambassador_id: ['externalid'] + prefix: / + host: bar.foo.com + service: service2 +``` + +Read more on the [Kubernetes documentation on name based virtual routing](https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting). + +### TLS termination + +```yaml +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + kubernetes.io/ingress.class: ambassador + name: tls-example-ingress +spec: + tls: + - hosts: + - sslexample.foo.com + secretName: testsecret-tls + rules: + - host: sslexample.foo.com + http: + paths: + - path: / + backend: + serviceName: service1 + servicePort: 80 +``` + +This is the equivalent configuration using a Mapping instead: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: sslexample-termination-context +spec: + hosts: + - sslexample.foo.com + secret: testsecret-tls +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: sslexample-mapping +spec: + host: sslexample.foo.com + prefix: / + service: service1 +``` + +Note that this shows TLS termination, not origination: the Ingress spec does not support origination. Read more on the [Kubernetes docs on TLS termination](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls). 
diff --git a/docs/emissary/latest/topics/running/listener.md b/docs/emissary/latest/topics/running/listener.md
new file mode 100644
index 000000000..152e1b74c
--- /dev/null
+++ b/docs/emissary/latest/topics/running/listener.md
@@ -0,0 +1,218 @@
+# The `Listener` CRD
+
+The `Listener` CRD defines where, and how, $productName$ should listen for requests from the network, and which `Host` definitions should be used to process those requests. For further examples of how to use `Listener`, see [Configuring $productName$ Communications](../../../howtos/configure-communications).
+
+**Note that `Listener`s are never created by $productName$, and must be defined by the user.** If you do not
+define any `Listener`s, $productName$ will not listen anywhere for connections, and therefore won't do
+anything useful. It will log a `WARNING` to this effect.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: example-listener
+spec:
+  port: 8080 # int32, port number on which to listen
+  protocol: HTTPS # HTTP, HTTPS, HTTPPROXY, HTTPSPROXY, TCP
+  securityModel: XFP # XFP (for X-Forwarded-Proto), SECURE, INSECURE
+  statsPrefix: example-listener # default depends on protocol; see below
+  l7Depth: 0 # int32
+  hostBinding:
+    namespace:
+      from: SELF # SELF, ALL
+    selector: ... # Kubernetes label selector
+```
+
+| Element | Type | Definition |
+| :------ | :--- | :--------- |
+| `port` | `int32` | The network port on which $productName$ should listen. *Required.* |
+| `protocol` | `enum`; see below | A high-level protocol type, like "HTTPS". *Exactly one of `protocol` and `protocolStack` must be supplied.* |
+| `protocolStack` | array of `enum`; see below | A sequence of low-level protocols to layer together. *Exactly one of `protocol` and `protocolStack` must be supplied.* |
+| `securityModel` | `enum`; see below | How does $productName$ decide whether requests here are secure? *Required.* |
+| `statsPrefix` | `string`; see below | Under what name do statistics for this `Listener` appear? *Optional; default depends on protocol.* |
+| `l7Depth` | `int32` | How many layer 7 load balancers are between the edge of the network and $productName$? *Optional; default is 0.* |
+| `hostBinding` | `struct`, see below | Mechanism for determining which `Host`s will be associated with this `Listener`. *Required* |
+
+### `protocol` and `protocolStack`
+
+`protocol` is the **recommended** way to tell $productName$ that a `Listener` expects connections using a well-known protocol. When using `protocol`, `protocolStack` may not also be supplied.
+
+Valid `protocol` values are:
+
+| `protocol` | Description |
+| :--------- | :---------- |
+| `HTTP` | Cleartext-only HTTP. HTTPS is not allowed. |
+| `HTTPS` | Either HTTPS or HTTP -- Envoy's TLS support can tell whether or not TLS is in use, and it will set `X-Forwarded-Proto` correctly for later decision-making. |
+| `HTTPPROXY` | Cleartext-only HTTP, using the HAProxy `PROXY` protocol. |
+| `HTTPSPROXY` | Either HTTPS or HTTP, using the HAProxy `PROXY` protocol. |
+| `TCP` | TCP sessions without HTTP at all. You will need to use `TCPMapping`s to route requests for this `Listener`. |
+| `TLS` | TLS sessions without HTTP at all. You will need to use `TCPMapping`s to route requests for this `Listener`. |
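+
+For instance, a hedged sketch of a raw TCP `Listener` paired with a `TCPMapping` (names, ports, and the upstream service are illustrative; check the `TCPMapping` documentation for the full field list) might look like:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: tcp-listener          # illustrative name
+spec:
+  port: 2222                  # illustrative port
+  protocol: TCP
+  securityModel: INSECURE     # raw TCP has no X-Forwarded-Proto to inspect
+  hostBinding:
+    namespace:
+      from: SELF
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TCPMapping
+metadata:
+  name: tcp-mapping           # illustrative name
+spec:
+  port: 2222                  # match the Listener's port
+  service: upstream-tcp:9999  # hypothetical upstream service
+```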
+
+### `securityModel`
+
+`securityModel` defines how the `Listener` will decide whether a request is secure or insecure:
+
+| `securityModel` | Description |
+| :--------- | :---------- |
+| `XFP` | Requests are secure if, and only if, `X-Forwarded-Proto` indicates HTTPS. This is common; see below. |
+| `SECURE` | Requests are always secure. You might set this if your load balancer always terminates TLS for you, and you can trust the clients. |
+| `INSECURE` | Requests are always insecure. You might set this for an HTTP-only `Listener`, or a `Listener` for clients that are expected to be hostile. |
+
+The `X-Forwarded-Proto` header mentioned above is meant to reflect the protocol the _original client_
+used to contact $productName$. When no layer 7 proxies are in use, Envoy will make certain that the
+`X-Forwarded-Proto` header matches the wire protocol of the connection the client made to Envoy,
+which allows $productName$ to trust `X-Forwarded-Proto` for routing decisions such as deciding to
+redirect requests made using HTTP over to HTTPS for greater security. When using $productName$ as an
+edge proxy or a typical API gateway, this is a desirable configuration; setting `securityModel` to
+`XFP` makes this easy.
+
+When layer 7 proxies _are_ in use, the `XFP` setting is often still desirable; however, you will also
+need to set `l7Depth` to allow it to function. See below.
+
+`SECURE` and `INSECURE` are helpful for cases where something downstream of $productName$ should be
+allowing only one kind of request to reach $productName$. For example, a `Listener` behind a load
+balancer that terminates TLS and checks client certificates might use
+`securityModel: SECURE`, then use `Host`s to reject insecure requests if one somehow
+arrives.
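+
+As an illustrative sketch (the name is hypothetical), such a `Listener` behind a trusted TLS-terminating load balancer might look like:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: behind-tls-lb         # illustrative name
+spec:
+  port: 8080
+  protocol: HTTP              # the load balancer speaks cleartext to Envoy
+  securityModel: SECURE       # every request here is treated as secure
+  hostBinding:
+    namespace:
+      from: SELF
+```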
+
+### `l7Depth`
+
+When layer 7 (L7) proxies are in use, the connection to $productName$ comes from the L7 proxy itself
+rather than from the client. Examining the protocol and IP address of that connection is useless, and
+instead you need to configure the L7 proxy to pass extra information about the client to $productName$
+using the `X-Forwarded-Proto` and `X-Forwarded-For` headers.
+
+However, if $productName$ always trusted `X-Forwarded-Proto` and `X-Forwarded-For`, any client could
+use them to lie about itself to $productName$. As a security mechanism, therefore, you must _also_
+set `l7Depth` in the `Listener` to the number of trusted L7 proxies in front of $productName$. If
+`l7Depth` is not set in the `Listener`, the `xff_num_trusted_hops` value from the `ambassador` `Module`
+will be used. If neither is set, the default `l7Depth` is 0.
+
+When `l7Depth` is 0, any incoming `X-Forwarded-Proto` is stripped: Envoy always provides an
+`X-Forwarded-Proto` matching the wire protocol of the incoming connection, so that `X-Forwarded-Proto`
+can be trusted. When `l7Depth` is non-zero, `X-Forwarded-Proto` is accepted from the L7 proxy, and
+trusted. The actual wire protocol in use from the L7 proxy to $productName$ is ignored.
+
+`l7Depth` also affects $productName$'s view of the client's source IP address, which is used as the
+`remote_address` field when rate limiting, and for the `X-Envoy-External-Address` header:
+
+- When `l7Depth` is 0, $productName$ uses the IP address of the incoming connection.
+- When `l7Depth` is some value N that is non-zero, the behavior is determined by the value of
+  `use_remote_address` in the `ambassador` `Module`:
+
+  - When `use_remote_address` is true (the default) then the trusted client address will be the Nth
+    address from the right end of the `X-Forwarded-For` header. (If the XFF contains fewer than N
+    addresses, Envoy falls back to using the immediate downstream connection’s source address as a
+    trusted client address.)
+
+  - When `use_remote_address` is false, the trusted client address is the (N+1)th address from the
+    right end of XFF. (If the XFF contains fewer than N+1 addresses, Envoy falls back to using the
+    immediate downstream connection’s source address as a trusted client address.)
+
+  For more detailed examples of this interaction, refer to [Envoy's documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers.html#x-forwarded-for).
+
+### `hostBinding`
+
+`hostBinding` specifies how this `Listener` should determine which `Host`s are associated with it:
+
+- `namespace.from` allows filtering `Host`s by the namespace of the `Host`:
+  - `namespace.from: SELF` accepts only `Host`s in the same namespace as the `Listener`.
+  - `namespace.from: ALL` accepts `Host`s in any namespace.
+- `selector` accepts only `Host`s that have labels matching the selector.
+
+`hostBinding` is mandatory, and at least one of `namespace.from` and `selector` must be set. If both are set, both must match for a `Host` to be accepted.
+
+### `statsPrefix`
+
+$productName$ produces detailed [statistics](../statistics) which can be monitored in a variety of ways. Statistics have hierarchical names, and the `Listener` will cause a set of statistics to be logged under the name specified by `statsPrefix`.
+
+The default `statsPrefix` depends on the protocol for this `Listener`:
+
+- If the `Listener` speaks HTTPS, the default is `ingress-https`.
+- Otherwise, if the `Listener` speaks HTTP, the default is `ingress-http`.
+- Otherwise, if the `Listener` speaks TLS, the default is `ingress-tls-$port`.
+- Otherwise, the default is `ingress-$port`.
+
+Note that it doesn't matter whether you use `protocol` or `protocolStack`: what matters is what protocol is actually configured. Also note that the default doesn't take the HAProxy `PROXY` protocol into account.
+
+Some examples:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: example-listener
+spec:
+  port: 8080
+  protocol: HTTPS
+  ...
+```
+
+will use a `statsPrefix` of `ingress-https`.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: example-listener
+spec:
+  port: 8080
+  protocol: TCP
+  ...
+```
+
+will use a `statsPrefix` of `ingress-8080`.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Listener
+metadata:
+  name: example-listener
+spec:
+  port: 8080
+  protocol: HTTPSPROXY
+  statsPrefix: proxy-8080
+  ...
+```
+
+would default to `ingress-https`, but the explicit `statsPrefix` overrides that to `proxy-8080`.
+
+For complete information on which statistics will appear for the `Listener`, see [the Envoy listener statistics documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/stats.html).
Some important statistics include:
+
+| Statistic name | Type | Description |
+| :-----------------------------------------------| :-------- | :-------------------------------- |
+| `listener.$statsPrefix.downstream_cx_total` | Counter | Total connections |
+| `listener.$statsPrefix.downstream_cx_active` | Gauge | Total active connections |
+| `listener.$statsPrefix.downstream_cx_length_ms` | Histogram | Connection length in milliseconds |
+
+### `protocolStack`
+
+**`protocolStack` is not recommended if you can instead use `protocol`.**
+
+Where `protocol` allows configuring the `Listener` to use well-known protocol stacks, `protocolStack` allows configuring exactly which protocols will be layered together. If `protocol` allows what you need, it is safer to use `protocol` than to risk having the stack broken with an incorrect `protocolStack`.
+
+The possible stack elements are:
+
+| `ProtocolStack` Element | Description |
+| :---------------------- | :---------- |
+| `HTTP` | Cleartext-only HTTP; must be layered with `TLS` for HTTPS |
+| `PROXY` | The HAProxy `PROXY` protocol |
+| `TLS` | TLS |
+| `TCP` | Raw TCP |
+
+`protocolStack` supplies a list of these elements to describe the protocol stack. **Order matters.** Some examples:
+
+| `protocolStack` | Description |
+| :-------------- | :---------- |
+| [ `HTTP`, `TCP` ] | Cleartext-only HTTP, exactly equivalent to `protocol: HTTP`. |
+| [ `TLS`, `HTTP`, `TCP` ] | HTTPS or HTTP, exactly equivalent to `protocol: HTTPS`. |
+| [ `PROXY`, `TLS`, `TCP` ] | The `PROXY` protocol, wrapping `TLS` _afterward_, wrapping raw TCP. This isn't equivalent to any `protocol` setting, and may be nonsensical. |
+
+## Examples
+
+For further examples of how to use `Listener`, see [Configuring $productName$ to Communicate](../../../howtos/configure-communications).
diff --git a/docs/emissary/latest/topics/running/load-balancer.md b/docs/emissary/latest/topics/running/load-balancer.md
new file mode 100644
index 000000000..987a910bd
--- /dev/null
+++ b/docs/emissary/latest/topics/running/load-balancer.md
@@ -0,0 +1,209 @@
+# Load balancing
+
+Load balancing configuration can be set for all $productName$ mappings in the [`ambassador` `Module`](../ambassador), or set per [`Mapping`](../../using/mappings). If nothing is set, simple round robin balancing is used via Kubernetes services.
+
+To use advanced load balancing, you must first configure a [resolver](../resolvers) that supports advanced load balancing (e.g., the Kubernetes Endpoint Resolver or Consul Resolver). Once a resolver is configured, you can use the `load_balancer` attribute. The following fields are supported:
+
+```yaml
+load_balancer:
+  policy:
+```
+
+Supported load balancer policies:
+
+- `round_robin`
+- `least_request`
+- `ring_hash`
+- `maglev`
+
+For more information on the different policies and the implications, see [load balancing strategies in Kubernetes](https://blog.getambassador.io/load-balancing-strategies-in-kubernetes-l4-round-robin-l7-round-robin-ring-hash-and-more-6a5b81595d6c).
+
+## Round robin
+
+When `policy` is set to `round_robin`, $productName$ discovers healthy endpoints for the given mapping, and load balances the incoming L7 requests with round robin scheduling.
To specify this:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+spec:
+  config:
+    resolver: my-resolver
+    load_balancer:
+      policy: round_robin
+```
+
+or, per mapping:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  prefix: /backend/
+  service: quote
+  resolver: my-resolver
+  hostname: '*'
+  load_balancer:
+    policy: round_robin
+```
+
+Note that load balancing may not appear to be "even" due to Envoy's threading model. For more details, see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/faq/load_balancing/concurrency_lb).
+
+## Least request
+
+When `policy` is set to `least_request`, $productName$ discovers healthy endpoints for the given mapping, and load balances the incoming L7 requests to the endpoint with the fewest active requests. To specify this:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+spec:
+  config:
+    resolver: my-resolver
+    load_balancer:
+      policy: least_request
+```
+
+or, per mapping:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: '*'
+  prefix: /backend/
+  service: quote
+  resolver: my-resolver
+  load_balancer:
+    policy: least_request
+```
+
+## Sticky sessions / session affinity
+
+Configuring sticky sessions makes $productName$ route requests to a specific pod providing your service in a given session. One pod serves all requests from a given session, eliminating the need for session data to be transferred between pods. $productName$ lets you configure session affinity based on the following parameters in an incoming request:
+
+- Cookie
+- Header
+- Source IP
+
+**NOTE:** $productName$ supports sticky sessions using two load balancing policies, `ring_hash` and `maglev`.
+
+### Cookie
+
+```yaml
+load_balancer:
+  policy: ring_hash
+  cookie:
+    name:
+    ttl:
+    path:
+```
+
+If the cookie you wish to set affinity on is already present in incoming requests, then you only need the `cookie.name` field. However, if you want $productName$ to generate and set a cookie in response to the first request, then you need to specify a value for the `cookie.ttl` field, which generates a cookie with the given expiration time.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  prefix: /backend/
+  hostname: '*'
+  service: quote
+  resolver: my-resolver
+  load_balancer:
+    policy: ring_hash
+    cookie:
+      name: sticky-cookie
+      ttl: 60s
+```
```

$productName$ allows header-based session affinity if the given header is present on incoming requests.

Example:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: '*'
  prefix: /backend/
  service: quote
  resolver: my-resolver
  load_balancer:
    policy: ring_hash
    header: STICKY_HEADER
```

### Source IP

```yaml
load_balancer:
  policy: ring_hash
  source_ip:
```

$productName$ allows session affinity based on the source IP of incoming requests.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: '*'
  prefix: /backend/
  service: quote
  resolver: my-resolver
  load_balancer:
    policy: ring_hash
    source_ip: true
```

Load balancing can be configured both globally, and overridden on a per-mapping basis. The following example configures the default load balancing policy to be round robin, while using header-based session affinity for requests to the `/backend/` endpoint of the quote application:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    resolver: my-resolver
    load_balancer:
      policy: round_robin
```

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: '*'
  prefix: /backend/
  service: quote
  resolver: my-resolver
  load_balancer:
    policy: ring_hash
    header: STICKY_HEADER
```

diff --git a/docs/emissary/latest/topics/running/resolvers.md b/docs/emissary/latest/topics/running/resolvers.md
new file mode 100644
index 000000000..1ace9a86c
--- /dev/null
+++ b/docs/emissary/latest/topics/running/resolvers.md
@@ -0,0 +1,128 @@
# Service discovery and resolvers

Service discovery is how cloud applications and their microservices are located on the network. In a cloud environment, services are ephemeral, existing only as long as they are needed and in use, so a real-time service discovery mechanism is required. $productName$ uses information from service discovery to determine where to route incoming requests.

## $productName$ support for service discovery

$productName$ supports different mechanisms for service discovery. These mechanisms are:

* Kubernetes service-level discovery (default).
* Kubernetes endpoint-level discovery.
* Consul endpoint-level discovery.

### Kubernetes service-level discovery

By default, $productName$ uses Kubernetes DNS and service-level discovery. In a `Mapping` resource, specifying `service: foo` will prompt $productName$ to look up the DNS address of the `foo` Kubernetes service. Traffic will be routed to the `foo` service. Kubernetes will then load balance that traffic between multiple pods. For more details on Kubernetes networking and how this works, see our blog post on [Session affinity, load balancing controls, gRPC-Web, and $productName$](https://blog.getambassador.io/session-affinity-load-balancing-controls-grpc-web-and-ambassador-0-52-2b916b396d0c).

### Kubernetes endpoint-level discovery

$productName$ can also watch Kubernetes endpoints. This bypasses the Kubernetes service routing layer and enables the use of advanced load balancing controls such as session affinity and maglev, as illustrated in the sketch below. For more details, see the [load balancing reference](../load-balancer).
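For illustration, here is a minimal sketch of endpoint routing with session affinity. It assumes a `KubernetesEndpointResolver` named `endpoint`, as defined later on this page:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: '*'
  prefix: /backend/
  service: quote
  resolver: endpoint        # a KubernetesEndpointResolver, defined below
  load_balancer:
    policy: ring_hash
    source_ip: true         # session affinity keyed on the client source IP
```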
### Consul endpoint-level discovery

$productName$ natively integrates with [Consul](https://www.consul.io) for endpoint-level service discovery. In this mode, $productName$ obtains endpoint information from Consul. One of the primary use cases for this architecture is in hybrid cloud environments that run a mixture of Kubernetes services as well as VMs, as Consul can serve as the single global registry for all services.

## The Resolver resource

The `Resolver` resource is used to configure the service discovery strategy for $productName$.

### The Kubernetes service resolver

The Kubernetes Service Resolver configures $productName$ to use Kubernetes services. If no resolver is specified, this behavior is the default. When this resolver is used, the `service.namespace` value from a `Mapping` is handed to the Kubernetes cluster's DNS resolver to determine where requests are sent.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: KubernetesServiceResolver
metadata:
  name: kubernetes-service
```

### The Kubernetes endpoint resolver

The Kubernetes Endpoint Resolver configures $productName$ to resolve Kubernetes endpoints. This enables the use of a more [advanced load balancing configuration](../load-balancer). When this resolver is used, the endpoints for the `service` defined in a `Mapping` are resolved and used to determine where requests are sent.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: KubernetesEndpointResolver
metadata:
  name: endpoint
```

### The Consul resolver

The Consul Resolver configures $productName$ to use Consul for service discovery. When this resolver is used, the `service` defined in a `Mapping` is passed to Consul, along with the datacenter specified, to determine where requests are sent.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: ConsulResolver
metadata:
  name: consul-dc1
spec:
  address: consul-server.default.svc.cluster.local:8500
  datacenter: dc1
```

- `address`: The fully-qualified domain name or IP address of your Consul server. This field also supports environment variable substitution.
- `datacenter`: The Consul data center where your services are registered.

You may want to use an environment variable if you're running a Consul agent on each node in your cluster. In this setup, you could do the following:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: ConsulResolver
metadata:
  name: consul-dc1
spec:
  address: "${HOST_IP}"
  datacenter: dc1
```

and then add the `HOST_IP` environment variable to your Kubernetes deployment:

```yaml
containers:
  - name: example
    image: docker.io/datawire/ambassador:$version$
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
```

## Using resolvers

Once a resolver is defined, you can use it in a given `Mapping`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  resolver: endpoint
  load_balancer:
    policy: round_robin
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: bar
spec:
  hostname: "*"
  prefix: /bar/
  service: https://bar:9000
  tls: client-context
  resolver: consul-dc1
  load_balancer:
    policy: round_robin
```

The YAML configuration above tells $productName$ to use the Kubernetes endpoint resolver for requests with `prefix: /backend/`, and the Consul resolver `consul-dc1` to route to the `bar` service for requests with `prefix: /bar/`.
diff --git a/docs/emissary/latest/topics/running/running.md b/docs/emissary/latest/topics/running/running.md
new file mode 100644
index 000000000..a28b0cb55
--- /dev/null
+++ b/docs/emissary/latest/topics/running/running.md
@@ -0,0 +1,338 @@
# Running and deployment

This section is intended for operators running $productName$, and covers various aspects of deploying and configuring $productName$ in production.

## $productName$ and Kubernetes

$productName$ relies on Kubernetes for reliability, availability, and scalability. This means that features such as Kubernetes readiness and liveness probes, rolling updates, and Horizontal Pod Autoscaling should be utilized to manage $productName$.

## Default configuration

The default configuration of $productName$ includes default [resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container), as well as [readiness and liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). These values should be adjusted for your specific environment.

## Running as non-root

Starting with $productName$ 0.35, we support running $productName$ as non-root. This is the recommended configuration, and will be the default configuration in future releases. We recommend you configure $productName$ to run as non-root as follows:

* Have Kubernetes run $productName$ as non-root. This may happen by default (e.g., OpenShift) or you can set a `securityContext` in your Deployment as shown below in this abbreviated example:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      service: ambassador
  template:
    metadata:
      labels:
        service: ambassador
    spec:
      containers:
      - image: docker.io/datawire/aes:$version$
        name: ambassador
      restartPolicy: Always
      securityContext:
        runAsUser: 8888
      serviceAccountName: ambassador
```

* Set the `service_port` element in the `ambassador Module` to 8080 (cleartext) or 8443 (TLS). This is the port that $productName$ will use to listen to incoming traffic. Note that any port number above 1024 will work; $productName$ will use 8080/8443 as its defaults in the future.

* Make sure that incoming traffic to $productName$ is configured to route to the `service_port`. If you're using the default $productName$ configuration, this means configuring the `targetPort` to point to the `service_port` above.

* If you are using `redirect_cleartext_from`, change the value of this field to point to your cleartext port (e.g., 8080) and set `service_port` to be your TLS port (e.g., 8443).

## Changing the configuration directory

While running, $productName$ needs to use a directory within its container for generated configuration data. Normally this is `/ambassador`, but in some situations - especially if running as non-root - it may be necessary to change to a different directory. To do so, set the environment variable `AMBASSADOR_CONFIG_BASE_DIR` to the full pathname of the directory to use, as shown below in this abbreviated example:

```yaml
env:
- name: AMBASSADOR_CONFIG_BASE_DIR
  value: /tmp/ambassador-config
```

With `AMBASSADOR_CONFIG_BASE_DIR` set as above, $productName$ will create and use the directory `/tmp/ambassador-config` for its generated data.
(Note that while the directory will be created if it does not exist, attempts to turn an existing file into a directory will fail.)

## Running as DaemonSet

$productName$ can be deployed as a DaemonSet to have one pod per node in a Kubernetes cluster. This setup is especially helpful when you have a Kubernetes cluster running on a private cloud.

* In an ideal example scenario, you are running containers on Kubernetes alongside your non-containerized applications, which are exposed via a VIP using BIG-IP or similar products. In such cases, east-west traffic is routed based on iRules to a certain set of application pools consisting of application or web servers. In this setup, alongside traditional application servers, two or more $productName$ pods can also be part of the application pools. In case of failure, there is at least one $productName$ pod available to BIG-IP that can take care of routing traffic to the Kubernetes cluster.

* In manifest files, `kind: Deployment` needs to be updated to `kind: DaemonSet`, and `replicas` should be removed from the `spec` section.

## Namespaces

$productName$ supports multiple namespaces within Kubernetes. To make this work correctly, you need to set the `AMBASSADOR_NAMESPACE` environment variable in $productName$'s container. By far the easiest way to do this is using Kubernetes' downward API (this is included in the YAML files from `getambassador.io`):

```yaml
env:
- name: AMBASSADOR_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
```

Given that `AMBASSADOR_NAMESPACE` is set, $productName$ [`Mappings`](../../using/mappings) can operate within the same namespace, or across namespaces. **Note well** that `Mappings` will have to explicitly include the namespace with the service to cross namespaces; see the [`Mapping`](../../using/mappings) documentation for more information.

If you want $productName$ to only work within a single namespace, set `AMBASSADOR_SINGLE_NAMESPACE` as an environment variable.

```yaml
env:
- name: AMBASSADOR_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: AMBASSADOR_SINGLE_NAMESPACE
  value: "true"
```

If you set `AMBASSADOR_NAMESPACE` or `AMBASSADOR_SINGLE_NAMESPACE`, set it in the deployment container.

If you want to set a certificate for your `TLSContext` from another namespace, you can use the following:

```yaml
env:
- name: AMBASSADOR_SINGLE_NAMESPACE
  value: "YES"
- name: AMBASSADOR_CERTS_SINGLE_NAMESPACE
  value: "YES"
- name: AMBASSADOR_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
```

## `AMBASSADOR_ID`

$productName$ supports running multiple $productNamePlural$ in the same cluster, without restricting a given $productName$ to a single namespace. This is done with the `AMBASSADOR_ID` setting. In the `ambassador Module`, set the `ambassador_id`, e.g.,

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  ambassador_id: [ "ambassador-1" ]
  config:
    ...
```

Then, assign each $productName$ pod a unique `AMBASSADOR_ID` with the environment variable as part of your deployment:

```yaml
env:
- name: AMBASSADOR_ID
  value: ambassador-1
```

If you set `AMBASSADOR_ID`, you will need to set it in the deployment container.

$productName$ will then only use YAML objects that include an appropriate `ambassador_id` attribute.
For example, if $productName$ is given the ID `ambassador-1` as above, only the first two YAML objects below will be used:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-used
spec:
  ambassador_id: [ "ambassador-1" ]
  prefix: /demo1/
  service: demo1
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-used-2
spec:
  ambassador_id: [ "ambassador-1", "ambassador-2" ]
  prefix: /demo2/
  service: demo2
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-skipped
spec:
  prefix: /demo3/
  service: demo3
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mapping-skipped-2
spec:
  ambassador_id: [ "ambassador-2" ]
  prefix: /demo4/
  service: demo4
```

`ambassador_id` is always a list, and may (as shown in `mapping-used-2` above) include multiple IDs to allow a given object to be included in the configuration of multiple $productName$ instances. In this case, `mapping-used-2` will be included in the configuration for `ambassador-1` and also for `ambassador-2`.

**Note well that _any_ $productName$ configuration resource can have an `ambassador_id` included** so, for example, it is _fully supported_ to use `ambassador_id` to qualify the `ambassador Module`, `TLSContext`, and `AuthService` objects. You will need to set `ambassador_id` in all resources you want to use for $productName$.

If no `AMBASSADOR_ID` is assigned to an $productName$, it will use the ID `default`. If no `ambassador_id` is present in a YAML object, it will also use the ID `default`.

## `AMBASSADOR_ENVOY_BASE_ID`

$productName$ supports running side-by-side with other Envoy-based projects in a single pod. An example of this is running with an `istio` sidecar. This is done with the `AMBASSADOR_ENVOY_BASE_ID` environment variable as part of your deployment:

```yaml
env:
- name: AMBASSADOR_ENVOY_BASE_ID
  value: "1"
```

If no `AMBASSADOR_ENVOY_BASE_ID` is provided, it will use the ID `0`. For more information on the Envoy `base-id` option, please see the [Envoy command line documentation](https://www.envoyproxy.io/docs/envoy/latest/operations/cli.html?highlight=base%20id#cmdoption-base-id).

## `AMBASSADOR_VERIFY_SSL_FALSE`

By default, $productName$ will verify the TLS certificates provided by the Kubernetes API. In some situations, the cluster may be deployed with self-signed certificates. In this case, set `AMBASSADOR_VERIFY_SSL_FALSE` to `true` to disable verifying the TLS certificates.

## `AMBASSADOR_UPDATE_MAPPING_STATUS`

If `AMBASSADOR_UPDATE_MAPPING_STATUS` is set to the string `true`, $productName$ will update the `status` of every `Mapping` CRD that it accepts for its configuration. This has no effect on the proper functioning of $productName$ itself, and can be a performance burden on installations with many `Mapping`s. It has no effect for `Mapping`s stored as annotations.

The default is `false`. We recommend leaving `AMBASSADOR_UPDATE_MAPPING_STATUS` turned off unless required for external systems.

## `AMBASSADOR_LEGACY_MODE`

Setting `AMBASSADOR_LEGACY_MODE` to `true` will result in $productName$ disabling certain features and reverting to older codepaths that may better preserve certain older behaviors.
Legacy mode currently has the following effects:

- $productName$ will switch back to the $productName$ 1.6 input-resource validator (which can significantly increase configuration latency for $productName$ installations with many resources).
- $productName$ will use the shell boot sequence that was the default up through 1.9.1, rather than the Golang boot sequence that became the default in 1.10.0.
- `AMBASSADOR_FAST_RECONFIGURE` (see below) is not supported in legacy mode.

## `AMBASSADOR_FAST_RECONFIGURE`

Setting `AMBASSADOR_FAST_RECONFIGURE` to `"true"` enables incremental reconfiguration. When enabled, $productName$ will track deltas from one configuration to the next and recalculate only what is necessary to follow the change. When disabled (the default), $productName$ will recompute the entire configuration at every change.

**`AMBASSADOR_FAST_RECONFIGURE` is not supported when `AMBASSADOR_LEGACY_MODE` is active.**

## Configuration from the filesystem

If desired, $productName$ can be configured from YAML files in the directory `$AMBASSADOR_CONFIG_BASE_DIR/ambassador-config` (by default, `/ambassador/ambassador-config`, which is empty in the images built by Datawire). You could volume mount an external configuration directory here, for example, or use a custom Dockerfile to build configuration directly into a Docker image.

Note well that while $productName$ will read its initial configuration from this directory, configuration loaded from Kubernetes annotations will _replace_ this initial configuration. If this is not what you want, you will need to set the environment variable `AMBASSADOR_NO_KUBEWATCH` so that $productName$ will not try to update its configuration from Kubernetes resources.

Also note that the YAML files in the configuration directory must contain the $productName$ resources, not Kubernetes resources with annotations.

## Log levels and debugging

$OSSproductName$ and $AESproductName$ support more verbose debugging levels. If using $OSSproductName$, the [diagnostics](../diagnostics) service has a button to enable debug logging. Be aware that if you're running $productName$ on multiple pods, the debug log levels are not enabled for all pods: they are configured on a per-pod basis.

If using $AESproductName$, you can adjust the log level by setting the `AES_LOG_LEVEL` environment variable; from least verbose to most verbose, the valid values are `error`, `warn`/`warning`, `info`, `debug`, and `trace`; the default is `info`.

## Log format

By default, $productName$ writes its own logs in a plain text format. To produce logs as JSON instead, set the `AMBASSADOR_JSON_LOGGING` environment variable:

```yaml
env:
- name: AMBASSADOR_JSON_LOGGING
  value: "true"
```

## Port assignments

$productName$ uses some TCP ports in the range 8000-8499 internally, as well as port 8877. Third-party software integrating with $productName$ should not use ports in this range on the $productName$ pod.

## $productName$ update checks (Scout)

$productName$ integrates Scout, a service that periodically checks with Datawire servers to advise of available updates. Scout also sends anonymized usage data and the $productName$ version. This information is important to us as we prioritize test coverage, bug fixes, and feature development. Note that $productName$ will run regardless of the status of Scout (i.e., our uptime has zero impact on your uptime).
+ +We do not recommend you disable Scout, since we use this mechanism to notify users of new releases (including critical fixes and security issues). This check can be disabled by setting the environment variable `SCOUT_DISABLE` to `1` in your $productName$ deployment. + +Each $productName$ installation generates a unique cluster ID based on the UID of its Kubernetes namespace and its $productName$ ID: the resulting cluster ID is a UUID which cannot be used to reveal the namespace name nor $productName$ ID itself. $productName$ needs RBAC permission to get namespaces for this purpose, as shown in the default YAML files provided by Datawire; if not granted this permission it will generate a UUID based only on the $productName$ ID. To disable cluster ID generation entirely, set the environment variable `AMBASSADOR_CLUSTER_ID` to a UUID that will be used for the cluster ID. + +Unless disabled, $productName$ will also report the following anonymized information back to Datawire: + +| Attribute | Type | Description | +| :------------------------ | :---- | :------------------------ | +| `cluster_count` | int | total count of clusters in use | +| `cluster_grpc_count` | int | count of clusters using GRPC upstream | +| `cluster_http_count` | int | count of clusters using HTTP or HTTPS upstream | +| `cluster_routing_envoy_rh_count` | int | count of clusters routing using Envoy `ring_hash` | +| `cluster_routing_envoy_maglev_count` | int | count of clusters routing using Envoy `maglev` | +| `cluster_routing_envoy_lr_count` | int | count of clusters routing using Envoy `least_request` | +| `cluster_routing_envoy_rr_count` | int | count of clusters routing using Envoy `round_robin` | +| `cluster_routing_kube_count` | int | count of clusters routing using Kubernetes | +| `cluster_tls_count` | int | count of clusters originating TLS | +| `custom_ambassador_id` | bool | has the `ambassador_id` been changed from 'default'? | +| `custom_listener_port` | bool | has the listener port been changed from 80/443? | +| `diagnostics` | bool | is the diagnostics service enabled? | +| `endpoint_grpc_count` | int | count of endpoints to which $productName$ will originate GRPC | +| `endpoint_http_count` | int | count of endpoints to which $productName$ will originate HTTP or HTTPS | +| `endpoint_routing` | bool | is endpoint routing enabled? | +| `endpoint_routing_envoy_rh_count` | int | count of endpoints being routed using Envoy `ring_hash` | +| `endpoint_routing_envoy_maglev_count` | int | count of endpoints being routed using Envoy `maglev` | +| `endpoint_routing_envoy_lr_count` | int | count of endpoints being routed using Envoy `least_request` | +| `endpoint_routing_envoy_rr_count` | int | count of endpoints being routed using Envoy `round_robin` | +| `endpoint_routing_kube_count` | int | count of endpoints being routed using Kubernetes | +| `endpoint_tls_count` | int | count of endpoints to which $productName$ will originate TLS | +| `extauth` | bool | is extauth enabled? | +| `extauth_allow_body` | bool | will $productName$ send the body to extauth? 
|
| `extauth_host_count` | int | count of extauth hosts in use |
| `extauth_proto` | str | extauth protocol in use ('http', 'grpc', or `null` if not active) |
| `group_canary_count` | int | count of Mapping groups that include more than one Mapping |
| `group_count` | int | total count of Mapping groups in use (length of the route table) |
| `group_header_match_count` | int | count of groups using header matching (including `host` and `method`) |
| `group_host_redirect_count` | int | count of groups using host_redirect |
| `group_host_rewrite_count` | int | count of groups using host_rewrite |
| `group_http_count` | int | count of HTTP Mapping groups |
| `group_precedence_count` | int | count of groups that explicitly set the precedence of the group |
| `group_regex_header_count` | int | count of groups using regex header matching |
| `group_regex_prefix_count` | int | count of groups using regex prefix matching |
| `group_resolver_consul` | int | count of groups using the Consul resolver |
| `group_resolver_kube_endpoint` | int | count of groups using the Kubernetes endpoint resolver |
| `group_resolver_kube_service` | int | count of groups using the Kubernetes service resolver |
| `group_shadow_count` | int | count of groups using shadows |
| `group_shadow_weighted_count` | int | count of groups using shadows but not shadowing all traffic |
| `group_tcp_count` | int | count of TCP Mapping groups |
| `host_count` | int | count of Host resources in use |
| `k8s_ingress_class_count` | int | count of IngressClass resources in use |
| `k8s_ingress_count` | int | count of Ingress resources in use |
| `listener_count` | int | count of active listeners (1 unless `redirect_cleartext_from` or TCP Mappings are in use) |
| `liveness_probe` | bool | are liveness probes enabled? |
| `managed_by` | string | tool that manages the $productName$ deployment, if any (e.g. helm, edgectl, etc.) |
| `mapping_count` | int | count of Mapping resources in use |
| `ratelimit` | bool | is rate limiting in use? |
| `ratelimit_custom_domain` | bool | has the rate limiting domain been changed from 'ambassador'? |
| `ratelimit_data_plane_proto` | bool | is rate limiting using the data plane proto? |
| `readiness_probe` | bool | are readiness probes enabled? |
| `request_4xx_count` | int | lower bound for how many requests have gotten a 4xx response |
| `request_5xx_count` | int | lower bound for how many requests have gotten a 5xx response |
| `request_bad_count` | int | lower bound for how many requests have failed (either 4xx or 5xx) |
| `request_elapsed` | float | seconds over which the request_ counts are valid |
| `request_hr_elapsed` | string | human-readable version of `request_elapsed` (e.g. "3 hours 35 minutes 20 seconds") |
| `request_ok_count` | int | lower bound for how many requests have succeeded (not a 4xx or 5xx) |
| `request_total_count` | int | lower bound for how many requests were handled in total |
| `statsd` | bool | is StatsD enabled? |
| `server_name` | bool | is the `server_name` response header overridden? |
| `service_resource_total` | int | total count of service resources loaded from all discovery sources |
| `tls_origination_count` | int | count of TLS origination contexts |
| `tls_termination_count` | int | count of TLS termination contexts |
| `tls_using_contexts` | bool | are new TLSContext resources in use? |
| `tls_using_module` | bool | is the old TLS module in use? |
| `tracing` | bool | is tracing in use? |
| `tracing_driver` | str | tracing driver in use ('zipkin', 'lightstep', 'datadog', or `null` if not active) |
| `use_proxy_proto` | bool | is the `PROXY` protocol in use? |
| `use_remote_address` | bool | is $productName$ honoring remote addresses? |
| `x_forwarded_proto_redirect` | bool | is $productName$ redirecting based on `X-Forwarded-Proto`? |
| `xff_num_trusted_hops` | int | what is the count of trusted hops for `X-Forwarded-For`? |

The `request_*` counts are always incremental: they contain only information about the last `request_elapsed` seconds. Additionally, they only provide a lower bound: notably, if an $productName$ pod crashes or exits, no effort is made to ship out a final update, so it's very easy for counts to never be reported.

To completely disable feature reporting, set the environment variable `AMBASSADOR_DISABLE_FEATURES` to any non-empty value.

diff --git a/docs/emissary/latest/topics/running/scaling.md b/docs/emissary/latest/topics/running/scaling.md
new file mode 100644
index 000000000..22fa743e0
--- /dev/null
+++ b/docs/emissary/latest/topics/running/scaling.md
@@ -0,0 +1,194 @@
# Performance and scaling $productName$

Scaling any cloud native application is inherently domain specific; however, the content here reflects common issues, tips, and tricks that come up frequently.

## Performance dimensions

The performance of $productName$'s control plane can be characterized along a number of different dimensions:

 - The number of `TLSContext` resources.
 - The number of `Host` resources.
 - The number of `Mapping` resources per `Host` resource.
 - The number of `Mapping` resources that will span all `Host` resources (either because they're using `host_regex`, or because they're using `hostname: "*"`).

If your application involves a larger than average number of any of the above resources, you may find yourself in need of some of the content in this section.

## Mysterious pod restarts (aka pushing the edge of the envelope)

Whether your application is growing organically or whether you are deliberately scale testing, it's helpful to recognize how $productName$ behaves as it reaches the edge of its performance envelope along any of these dimensions.

As $productName$ approaches the edge of its performance envelope, it will often manifest as mysterious pod restarts triggered by Kubernetes. This does not always mean there is a problem; it could just mean you need to tune some of the resource limits set in your deployment. When it comes to scaling, Kubernetes will generally kill an $productName$ pod for one of two reasons: exceeding memory limits or failed liveness/readiness probes. See the [Memory limits](#memory-limits), [Liveness probes](#liveness-probes), and [Readiness probes](#readiness-probes) sections for more on how to cope with these situations.

## Memory limits

$productName$ can grow in memory usage and be killed by Kubernetes if it exceeds the limits defined in its pod spec. When this happens, it is confusing and difficult to catch because the only indication that this has occurred is the pod transitioning momentarily into the `OOMKilled` state.
The only way to actually observe this is if you are lucky enough to be running the following command (or have similar monitoring configured) when $productName$ gets `OOMKilled`:

```
  kubectl get pods -n ambassador -w
```

In order to take the luck out of the equation, $productName$ will periodically log its memory usage, so you can see in the logs if memory limits might be a problem and require adjustment:

```
2020/11/26 22:35:20 Memory Usage 0.56Gi (28%)
    PID 1, 0.22Gi: busyambassador entrypoint
    PID 14, 0.04Gi: /usr/bin/python /usr/bin/diagd /ambassador/snapshots /ambassador/bootstrap-ads.json /ambassador/envoy/envoy.json --notices /ambassador/notices.json --port 8004 --kick kill -HUP 1
    PID 16, 0.12Gi: /ambassador/sidecars/amb-sidecar
    PID 37, 0.07Gi: /usr/bin/python /usr/bin/diagd /ambassador/snapshots /ambassador/bootstrap-ads.json /ambassador/envoy/envoy.json --notices /ambassador/notices.json --port 8004 --kick kill -HUP 1
    PID 48, 0.08Gi: envoy -c /ambassador/bootstrap-ads.json --base-id 0 --drain-time-s 600 -l error
```

In general you should try to keep $productName$'s memory usage below 50% of the pod's limit. This may seem like a generous safety margin, but when reconfiguration occurs, $productName$ requires additional memory to avoid disrupting active connections. At each reconfiguration, $productName$ keeps around the old version for the duration of the configured drain time. See [AMBASSADOR_DRAIN_TIME](#ambassador_drain_time) for more details on how to tune this behavior.

$productName$'s exact memory usage depends on (among other things) how many `Host` and `Mapping` resources are defined in your cluster. If this number has grown over time, you may need to increase the memory limit defined in your deployment.

## Liveness probes

$productName$ defines the `/ambassador/v0/check_alive` endpoint on port `8877` for use with Kubernetes liveness probes. See the Kubernetes documentation for more details on [HTTP liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request).

Kubernetes will restart the $productName$ pod if it fails to get a 200 result from the endpoint. If this happens, it won't necessarily show up in an easily recognizable way in the pod logs. You can look for Kubernetes events to see if this is happening. Use `kubectl describe pod -n ambassador` or `kubectl get events -n ambassador` or equivalent.

The purpose of liveness probes is to rescue an $productName$ instance that is wedged; however, if liveness probes are too sensitive they can take out $productName$ instances that are functioning normally. This is more prone to happen as the number of $productName$ inputs increases. The `timeoutSeconds` and `failureThreshold` fields of the $productName$ deployment's liveness probe determine how tolerant Kubernetes is with its probes. If you observe pod restarts along with `Unhealthy` events, try tuning these fields upwards from their default values. See the Kubernetes documentation for more details on [tuning probes](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#probe-v1-core).

Note that whatever changes you make to $productName$'s liveness probes should most likely also be made to its readiness probes.

## Readiness probes

$productName$ defines the `/ambassador/v0/check_ready` endpoint on port `8877` for use with Kubernetes readiness probes. See the Kubernetes documentation for more details on [readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes).
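As a concrete starting point, the probe stanzas in a Deployment might look like the following sketch. The endpoints and port come from the sections above; the delay, timeout, and threshold values are illustrative assumptions that you should tune for your environment:

```yaml
livenessProbe:
  httpGet:
    path: /ambassador/v0/check_alive
    port: 8877
  initialDelaySeconds: 30  # assumption: allow time for initial bootstrap
  periodSeconds: 3
  timeoutSeconds: 5        # raise this if healthy pods fail probes under load
  failureThreshold: 3      # raise this to tolerate transient slowness
readinessProbe:
  httpGet:
    path: /ambassador/v0/check_ready
    port: 8877
  initialDelaySeconds: 30
  periodSeconds: 3
  timeoutSeconds: 5
  failureThreshold: 3
```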
Kubernetes uses readiness checks to prevent traffic from going to pods that are not ready to handle requests. The only time $productName$ cannot usefully handle requests is during initial startup, when it has not yet loaded all the routing information from Kubernetes and/or Consul. During this bootstrap period there is no guarantee $productName$ would know where to send a given request. The `check_ready` endpoint will only return 200 when all routing information has been loaded. After the initial bootstrap period it behaves identically to the `check_alive` endpoint.

Generally $productName$'s readiness probes should be configured with the same settings as its liveness probes.

## `AMBASSADOR_FAST_RECONFIGURE` and `AMBASSADOR_LEGACY_MODE` flags

`AMBASSADOR_FAST_RECONFIGURE` is a feature flag that enables a higher performance implementation of the code $productName$ uses to validate and generate Envoy configuration. It will eventually be enabled by default, but if you are experiencing performance problems you should try setting `AMBASSADOR_FAST_RECONFIGURE` to `true` to see if this helps.

`AMBASSADOR_LEGACY_MODE` is **not** recommended when performance is critical.

## `AMBASSADOR_DRAIN_TIME`

The `AMBASSADOR_DRAIN_TIME` variable controls how much of a grace period $productName$ provides active clients when reconfiguration happens. Its unit is seconds and it defaults to 600 (10 minutes). This can impact memory usage because $productName$ needs to keep around old versions of its configuration for the duration of the drain time.

## Unconstrained Mappings with many hosts

When working with a large number of `Host` resources, it's important to understand the impact of unconstrained `Mapping`s. An unconstrained `Mapping` is one that is not restricted to a specific `Host`. Such a `Mapping` will create a route for all of your `Host`s. If this is what you want, it is the appropriate thing to do; however, if it is not what you intend, you can end up with many more routes than you expected, which can adversely impact performance.

## Inspecting $productName$ performance

$productName$ internally tracks a number of key performance indicators. You can inspect these via the debug endpoint at `localhost:8877/debug`. Note that the `AMBASSADOR_FAST_RECONFIGURE` flag needs to be set to `"true"` for this endpoint to be present:

```
$ kubectl exec -n ambassador -it ${POD} -- curl localhost:8877/debug
{
  "timers": {
    # These two timers track how long it takes to respond to liveness and readiness probes.
    "check_alive": "7, 45.411495ms/61.85999ms/81.358927ms",
    "check_ready": "7, 49.951304ms/61.976205ms/86.279038ms",

    # These two timers track how long we spend updating our in-memory snapshot when our Kubernetes
    # watches tell us something has changed.
    "consulUpdate": "0, 0s/0s/0s",
    "katesUpdate": "3382, 28.662µs/102.784µs/95.220222ms",

    # These timers tell us how long we spend notifying the sidecars of changed input. This
    # includes how long the sidecars take to process that input.
    "notifyWebhook:diagd": "2, 1.206967947s/1.3298432s/1.452718454s",
    "notifyWebhooks": "2, 1.207007216s/1.329901037s/1.452794859s",

    # This timer tells us how long we spend parsing annotations.
    "parseAnnotations": "2, 21.944µs/22.541µs/23.138µs",

    # This timer tells us how long we spend reconciling changes to consul inputs.
    "reconcileConsul": "2, 50.104µs/55.499µs/60.894µs",

    # This timer tells us how long we spend reconciling secrets-related changes to $productName$
    # inputs.
    "reconcileSecrets": "2, 18.704µs/20.786µs/22.868µs"
  },
  "values": {
    "envoyReconfigs": {
      "times": [
        "2020-11-06T13:13:24.218707995-05:00",
        "2020-11-06T13:13:27.185754494-05:00",
        "2020-11-06T13:13:28.612279777-05:00"
      ],
      "staleCount": 2,
      "staleMax": 0,
      "synced": true
    },
    "memory": "39.73Gi of Unlimited (0%)"
  }
}
```

## Running profiles

$productName$ exposes endpoints at `localhost:8877/debug/pprof` to run Golang profiles to aid in live debugging. The endpoints are equivalent to those found in the [http/pprof](https://pkg.go.dev/net/http/pprof) package. `/debug/pprof/` returns an HTML page listing the available profiles.

The following are the different types of profiles you can run:

| Profile | Function |
| :------- | :-------- |
| /debug/pprof/allocs | Returns a sampling of all past memory allocations. |
| /debug/pprof/block | Returns stack traces of goroutines that led to blocking on synchronization primitives. |
| /debug/pprof/cmdline | Returns the command line that was invoked by the current program. |
| /debug/pprof/goroutine | Returns stack traces of all current goroutines. |
| /debug/pprof/heap | Returns a sampling of memory allocations of live objects. |
| /debug/pprof/mutex | Returns stack traces of goroutines holding contended mutexes. |
| /debug/pprof/profile | Returns a pprof-formatted CPU profile. You can specify the duration using the `seconds` `GET` parameter. The default duration is 30 seconds. |
| /debug/pprof/symbol | Returns the program counters listed in the request. |
| /debug/pprof/threadcreate | Returns stack traces that led to the creation of new OS threads. |
| /debug/pprof/trace | Returns the execution trace in binary form. You can specify the duration using the `seconds` `GET` parameter. The default duration is 1 second. |

diff --git a/docs/emissary/latest/topics/running/services/auth-service.md b/docs/emissary/latest/topics/running/services/auth-service.md
new file mode 100644
index 000000000..adfb77e4f
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/auth-service.md
@@ -0,0 +1,150 @@
import Alert from '@material-ui/lab/Alert';

# Authentication service

$productName$ provides a highly flexible mechanism for authentication, via the `AuthService` resource. An `AuthService` configures $productName$ to use an external service to check authentication and authorization for incoming requests. Each incoming request is authenticated before routing to its destination.

All requests are validated by the `AuthService` (unless the `Mapping` applied to the request sets `bypass_auth`). It is not possible to combine multiple `AuthServices`. While it is possible to create multiple `AuthService` resources, $productName$ load-balances between them in a round-robin fashion. This is useful for canarying an `AuthService` change, but is not useful for deploying multiple distinct `AuthServices`. In order to combine multiple external services (either having multiple services apply to the same request, or selecting between different services for different requests), instead of using an `AuthService`, use an [$AESproductName$ `External` `Filter`](/docs/edge-stack/latest/topics/using/filters/).
+ + + +Because of the limitations described above, **$AESproductName$ does +not support `AuthService` resources, and you should instead use an +[`External` +`Filter`](/docs/edge-stack/latest/topics/using/filters/external),** +which is mostly a drop-in replacement for an `AuthService`. The +`External` `Filter` relies on the $AESproductName$ `AuthService`. +Make sure the $AESproductName$ `AuthService` is deployed before +configuring `External` `Filters`. + + + +The currently supported version of the `AuthService` resource is +`getambassador.io/v3alpha1`. Earlier versions are deprecated. + +## Example + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: AuthService +metadata: + name: authentication +spec: + ambassador_id: [ "ambassador-1" ] + auth_service: "example-auth.authentication:3000" + tls: true + proto: http + timeout_ms: 5000 + include_body: + max_bytes: 4096 + allow_partial: true + status_on_error: + code: 403 + failure_mode_allow: false + + # proto: grpc only, default is v2. If upgrading from 2.x then you must set this to v3. + protocol_version: v3 + + # proto: http only + path_prefix: "/path" + allowed_request_headers: + - "x-example-header" + allowed_authorization_headers: + - "x-qotm-session" + add_auth_headers: + x-added-auth: auth-added + add_linkerd_headers: false +``` + +## Fields + +`auth_service` is the only required field, all others are optional. + +| Attribute | Default value | Description | +| ---------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `ambassador_id` | `[ "default" ]` | Which [Ambassador ID](../../running/#ambassador_id) the `AuthService` should apply to. | +| `auth_service` | (none; a value is required) | Identifies the external auth service to talk to. The format of this field is `scheme://host:port` where `scheme://` and `:port` are optional. The scheme-part, if present, must be either `http://` or `https://`; if the scheme-part is not present, it behaves as if `http://` is given. The scheme-part controls whether TLS or plaintext is used and influences the default value of the port-part. The host-part must be the [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services) of the service you want to use for authentication. | +| `tls` | `""` | This field is populated with the name of the defined TLSContext, which determines the TLS certificate presented to external auth services. | +| `proto` | `http` | Specifies which variant of the [`ext_authz` protocol](../ext-authz/) to use when communicating with the external auth service. Valid options are `http` or `grpc`. | +| `timeout_ms` | `5000` | The total maximum duration in milliseconds for the request to the external auth service, before triggering `status_on_error` or `failure_mode_allow`. 
|
| `include_body` | `null` | Controls how much to buffer the request body to pass to the external auth service, for use cases such as computing an HMAC or request signature. If `include_body` is `null` or unset, then the request body is not buffered at all, and an empty body is passed to the external auth service. If `include_body` is not `null`, the `max_bytes` and `allow_partial` subfields are required. |
| `include_body.max_bytes` | (none; a value is required if `include_body` is not `null`) | Controls the amount of body data that is passed to the external auth service. |
| `include_body.allow_partial` | (none; a value is required if `include_body` is not `null`) | Controls what happens to requests with bodies larger than `max_bytes`. If `allow_partial` is `true`, the first `max_bytes` of the body are sent to the external auth service. If `false`, the message is rejected with HTTP 413 ("Payload Too Large"). |
| `status_on_error.code` | `403` | Controls the status code returned when unable to communicate with the external auth service. This is ignored if `failure_mode_allow: true`. |
| `failure_mode_allow` | `false` | Controls whether to allow or reject requests when there is an error communicating with the external auth service; a value of `true` allows the request through to the upstream backend service, a value of `false` returns a `status_on_error.code` response to the client. |
| `stats_name` | the `auth_service` value with non-alphanumeric characters replaced with underscores | See [Overriding Statistics Names](../../statistics/#overriding-statistics-names). |
| `circuit_breakers` | the value set in the [`ambassador` `Module`](../../../using/defaults) | See [Circuit Breakers](../../../using/circuit-breakers/). |

The following field is only used if `proto` is set to `grpc`. It is ignored if `proto` is `http`.

| Attribute | Default value | Description |
| ------------------ | ------------- | ----------- |
| `protocol_version` | `v3` | Allowed values are `v3` and `v2`. `protocol_version` was used in previous versions of $productName$ to control the protocol used by the gRPC service. $productName$ 3.x is running an updated version of Envoy that has dropped support for the `v2` protocol, so starting in 3.x, if `protocol_version` is not specified, the default value of `v2` will cause an error to be posted and a static response will be returned. Therefore, you must set it to `protocol_version: v3`. If upgrading from a previous version, you will want to set it to `v3` and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for `protocol_version` remains `v2` in the `getambassador.io/v3alpha1` CRD specifications to avoid making breaking changes outside of a CRD version change. Future CRD versions will deprecate it. |

The following fields are only used if `proto` is set to `http`. They are ignored if `proto` is `grpc`.
| Attribute | Default value | Description |
| ------------------------------- | --------------------------------------------------------------------------- | ----------- |
| `path_prefix` | `""` | Prepends a string to the request path of the request when sending it to the external auth service. By default this is empty, and nothing is prepended. For example, if the client makes a request to `/foo`, and `path_prefix: /bar`, then the path in the request made to the external auth service will be `/bar/foo`. |
| `allowed_request_headers` | `[]` | Lists the headers (case-insensitive) that are copied from the incoming request to the request made to the external auth service. In addition to the headers listed in this field, the following headers are always included: `Authorization`, `Cookie`, `From`, `Proxy-Authorization`, `User-Agent`, `X-Forwarded-For`, `X-Forwarded-Host`, and `X-Forwarded-Proto`. |
| `allowed_authorization_headers` | `[]` | Lists the headers (case-insensitive) that are copied from the response from the external auth service to the request sent to the upstream backend service (if the external auth service indicates that the request to the upstream backend service should be allowed). In addition to the headers listed in this field, the following headers are always included: `Authorization`, `Location`, `Proxy-Authenticate`, `Set-Cookie`, `WWW-Authenticate`. |
| `add_auth_headers` | `{}` | A dictionary of `header: value` pairs that are added to the request made to the external auth service. |
| `add_linkerd_headers` | Defaults to the value set in the [`ambassador` `Module`](../../ambassador) | When true, in the request to the external auth service, adds an `l5d-dst-override` HTTP header that is set to the hostname and port number of the external auth service. |

## Canarying multiple `AuthServices`

You may create multiple `AuthService` manifests to round-robin authentication requests among multiple services. **All services must use the same `path_prefix` and header definitions.** If you try to use different values, you'll see an error in the [diagnostics service](../../ambassador/#diagnostics), telling you which value is being used.

## Configuring public `Mappings`

An `AuthService` can be disabled for a `Mapping` by setting `bypass_auth` to `true`. This will tell $productName$ to allow all requests for that `Mapping` through without interacting with the external auth service.

## Transport Protocol Migration

> **Note:** The following information is only applicable to `AuthServices` using `proto: grpc`.

As of $productName$ version 2.3, the `v2` transport protocol is deprecated, and any `AuthServices` making use of it should migrate to `v3` before support for `v2` is removed in a future release.
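On the configuration side, migration means setting `protocol_version` explicitly on each `AuthService`. A minimal sketch, reusing the example resource from above:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: "example-auth.authentication:3000"
  proto: grpc
  protocol_version: v3  # required on $productName$ 3.x; Envoy no longer supports v2
```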
On the code side, to migrate an `AuthService` implementation, the following imports simply need to be updated.

`v2` imports:

```
  envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
  envoyAuthV2 "github.com/datawire/ambassador/pkg/api/envoy/service/auth/v2"
  envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
```

`v3` imports:

```
  envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
  envoyAuthV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/auth/v3"
  envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
```

diff --git a/docs/emissary/latest/topics/running/services/ext-authz.md b/docs/emissary/latest/topics/running/services/ext-authz.md
new file mode 100644
index 000000000..d850ba4b5
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/ext-authz.md
@@ -0,0 +1,83 @@
# ExtAuth protocol

By design, the ExtAuth protocol used by [the AuthService](../auth-service) and by [External Filters](/docs/edge-stack/latest/topics/using/filters/external) is highly flexible. The authentication service is the first external service invoked on an incoming request (e.g., it runs before the rate limit filter). Because the logic of authentication is encapsulated in an external service, you can use this to support a wide variety of use cases. For example:

* Supporting traditional SSO authentication protocols, e.g., OAuth, OpenID Connect, etc.
* Supporting HTTP basic authentication ([see a sample implementation](https://github.com/datawire/ambassador-auth-httpbasic)).
* Only authenticating requests that are under a rate limit and rejecting authentication requests above the rate limit.
* Authenticating specific services (URLs), and not others.

For each request, the ExtAuth service may either:
 1. return a direct HTTP *response*, intended to be sent back to the requesting HTTP client (normally *denying* the request from being forwarded to the upstream backend service) or
 2. return a modification to make to the HTTP *request* before sending it to the upstream backend service (normally *allowing* the request to be forwarded to the upstream backend service with modifications).

The ExtAuth service receives information about every request through $productName$ and must indicate whether the request is to be allowed or not. If not, the ExtAuth service provides the HTTP response which is to be handed back to the client. A potential control flow for authentication is shown in the image below.

Giving the ExtAuth service the ability to control the response allows many different types of auth mechanisms, for example:

- The ExtAuth service can simply return an error page with an HTTP 401 response.
- The ExtAuth service can choose to include a `WWW-Authenticate` header in the 401 response, to ask the client to perform HTTP Basic Auth.
- The ExtAuth service can issue a 301 `Redirect` to divert the client into an OAuth or OIDC authentication sequence. The control flow of this is shown below. ![Authentication flow](../../../images/auth-flow.png)

There are two variants of the ExtAuth protocol: gRPC and plain HTTP.

### The gRPC protocol

When `proto: grpc` is set, the ExtAuth service must implement the `Authorization` gRPC interface, defined in [Envoy's `external_auth.proto`](https://github.com/emissary-ingress/emissary/blob/master/api/envoy/service/auth/v3/external_auth.proto).

### The HTTP protocol

External services for `proto: http` are often easier to implement, but have several limitations compared to `proto: grpc`:
 - The list of headers that the ExtAuth service is interested in reading must be known ahead of time, in order to set `allowed_request_headers`. Inspecting headers that are not known ahead of time requires instead using `proto: grpc`.
 - The list of headers that the ExtAuth service would like to set or modify must be known ahead of time, in order to set `allowed_authorization_headers`. Setting headers that are not known ahead of time requires instead using `proto: grpc`.
 - When returning a direct HTTP response, the HTTP status code cannot be 200 or in the 5XX range. Intercepting with a 200 or 5XX response requires instead using `proto: grpc`.

#### The request from $productName$ to the ExtAuth service

For every incoming request, a similar request is made to the ExtAuth service that mimics the:

 - HTTP request method
 - HTTP request path, potentially modified by `path_prefix`
 - HTTP request headers that are either named in `allowed_request_headers` or in the fixed list of headers that are always included
 - the first `include_body.max_bytes` bytes of the HTTP request body.

The `Content-Length` HTTP header is set to the number of bytes in the body of the request sent to the ExtAuth service (`0` if `include_body` is not set).

**ALL** request methods will be proxied, which implies that the ExtAuth service must be able to handle any request that any client could make.

Take this incoming request for example:

```
PUT /path/to/service HTTP/1.1
Host: myservice.example.com:8080
User-Agent: curl/7.54.0
Accept: */*
Content-Type: application/json
Content-Length: 51

{ "greeting": "hello world!", "spiders": "OMG no" }
```

The request $productName$ will make to the ExtAuth service is:

```
PUT /path/to/service HTTP/1.1
Host: extauth.example.com:8080
User-Agent: curl/7.54.0
Accept: */*
Content-Type: application/json
Content-Length: 0
```

#### The response returned from the ExtAuth service to $productName$

 - If the HTTP response returned from the ExtAuth service to $productName$ has an HTTP status code of 200, then the request is allowed through to the upstream backend service. **Only** 200 indicates this; other 2XX status codes will prevent the request from being allowed through.

   The 200 response should not contain anything in the body, but may contain arbitrary headers. Any header present in the ExtAuth service's response that is also either listed in the `allowed_authorization_headers` attribute of the `AuthService` resource or in the fixed list of headers that are always included will be copied from the ExtAuth service's response into the request going to the upstream backend service. This allows the ExtAuth service to inject tokens or other information into the request, or to modify headers coming from the client.

   The big limitation here is that the list of headers to be set must be known ahead of time, in order to set `allowed_authorization_headers`. Setting headers that are not known ahead of time requires instead using `proto: grpc`.

 - If $productName$ cannot reach the ExtAuth service at all, if the ExtAuth service does not return a valid HTTP response, or if the HTTP response has an HTTP status code in the 5XX range, then the communication with the ExtAuth service is considered to have failed, and the `status_on_error` or `failure_mode_allow` behavior is triggered.
+
+ - Any HTTP status code other than 200 or 5XX from the ExtAuth service tells $productName$ to **not** allow the request to continue to the upstream backend service, but that the ExtAuth service is instead intercepting the request. The entire HTTP response from the ExtAuth service--including the status code, the headers, and the body--is handed back to the client verbatim. This gives the ExtAuth service **complete** control over the entire response presented to the client.
+
+   The big limitation here is that you cannot directly return a 200 or 5XX response. Intercepting with a 200 or 5XX response requires instead using `proto: grpc`.
diff --git a/docs/emissary/latest/topics/running/services/index.md b/docs/emissary/latest/topics/running/services/index.md
new file mode 100644
index 000000000..1646aa5a1
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/index.md
@@ -0,0 +1,11 @@
+# Available plugins
+
+You may need an API Gateway to enforce policies specific to your organization. $productName$ supports custom policies through external service plugins. The policy logic specific to your organization is implemented in the external service, and $productName$ is configured to send RPC requests to your service.
+
+Currently, $productName$ supports plugins for authentication, access logging, rate limiting, and tracing.
+
+* [AuthService](auth-service) Plugin
+* [LogService](log-service) Plugin
+* [RateLimitService](rate-limit-service) Plugin
+* [TracingService](tracing-service) Plugin
diff --git a/docs/emissary/latest/topics/running/services/log-service.md b/docs/emissary/latest/topics/running/services/log-service.md
new file mode 100644
index 000000000..b3e90c7c9
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/log-service.md
@@ -0,0 +1,116 @@
+# Log service
+
+By default, $productName$ writes its access logs to stdout, so that they can be read using `kubectl logs`. The format and local destination of those logs can be configured using the [`envoy_log_` settings in the `ambassador Module`](../../ambassador). However, the options there only allow for logging local to $productName$'s Pod. By configuring a `LogService`, you can configure $productName$ to report its access logs to a remote service, in addition to the usual `ambassador Module`-configured logging.
+
+The remote access log service (or ALS) must implement the `AccessLogService` gRPC interface, defined in [Envoy's `als.proto`][als.proto].
+
+[als.proto]: https://github.com/emissary-ingress/emissary/blob/master/api/envoy/service/accesslog/v3/als.proto
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: LogService
+metadata:
+  name: example-log-service
+spec:
+  # Common to all $productName$ resources
+  ambassador_id: []string # optional; default is ["default"]
+
+  # LogService specific
+  service: "string" # required
+  driver: "enum-string:[tcp, http]" # required
+  driver_config: # required
+    additional_log_headers: # optional; default is [] (only for `driver: http`)
+    - header_name: string # required
+      during_request: boolean # optional; default is true
+      during_response: boolean # optional; default is true
+      during_trailer: boolean # optional; default is true
+  flush_interval_time: int-seconds # optional; default is 1
+  flush_interval_byte_size: integer # optional; default is 16384
+  grpc: boolean # optional; default is false
+  protocol_version: enum # optional; default is v2
+```
+
+ - `service` is where the access log gRPC requests will be routed.
+
+ - `driver` identifies which type of accesses to log: HTTP requests (`"http"`) or TLS connections (`"tcp"`).
+
+ - `driver_config` stores the configuration that is specific to the `driver`:
+
+   * `driver: tcp` has no additional configuration; the config must be set as `driver_config: {}`.
+
+   * `driver: http`
+
+     - `additional_log_headers` identifies HTTP headers to include in the access log, and when in the logged-request's lifecycle to include them.
+
+ - `flush_interval_time` is the maximum number of seconds to buffer accesses for before sending them to the ALS. The logs will be flushed to the ALS every time this duration is reached, or when the buffered data reaches `flush_interval_byte_size`, whichever comes first. See the [Envoy documentation on `buffer_flush_interval`][buffer_flush_interval] for more information.
+
+ - `flush_interval_byte_size` is a soft size limit for the access log buffer. The logs will be flushed to the ALS every time the buffered data reaches this size, or whenever `flush_interval_time` elapses, whichever comes first. See the [Envoy documentation on `buffer_size_bytes`][buffer_size_bytes] for more information.
+
+ - `grpc` must be `true`.
+
+[buffer_flush_interval]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/access_loggers/grpc/v3/als.proto.html#extensions-access-loggers-grpc-v3-commongrpcaccesslogconfig
+[buffer_size_bytes]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/access_loggers/grpc/v3/als.proto.html#extensions-access-loggers-grpc-v3-commongrpcaccesslogconfig
+
+ - `protocol_version` was used in previous versions of $productName$ to control the gRPC service name used to communicate with the `LogService`. $productName$ 3.x is running an updated version of Envoy that has dropped support for the `v2` protocol, so starting in 3.x, if `protocol_version` is not specified, the default value of `v2` will cause an error to be posted and a static response to be returned. Therefore, you must set it to `protocol_version: v3`. If upgrading from a previous version, you will want to set it to `v3` and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for `protocol_version` remains `v2` in the `getambassador.io/v3alpha1` CRD specifications to avoid making breaking changes outside of a CRD version change. Future versions of the CRDs will deprecate it.
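+
+As a concrete illustration, a minimal `v3` ALS is just a small gRPC server. The sketch below is not Emissary's own code: it assumes the upstream `go-control-plane` bindings for `als.proto` and a hypothetical port 3000, and your project may vendor Envoy's protos differently (Emissary itself vendors them under `github.com/datawire/ambassador/...`, as shown in the migration notes below):
+
+```go
+package main
+
+import (
+	"io"
+	"log"
+	"net"
+
+	alsv3 "github.com/envoyproxy/go-control-plane/envoy/service/accesslog/v3"
+	"google.golang.org/grpc"
+)
+
+// als implements Envoy's v3 AccessLogService: Envoy opens a stream and
+// pushes batches of access log entries to StreamAccessLogs.
+type als struct {
+	alsv3.UnimplementedAccessLogServiceServer
+}
+
+func (a *als) StreamAccessLogs(stream alsv3.AccessLogService_StreamAccessLogsServer) error {
+	for {
+		msg, err := stream.Recv()
+		if err == io.EOF {
+			return stream.SendAndClose(&alsv3.StreamAccessLogsResponse{})
+		}
+		if err != nil {
+			return err
+		}
+		// With `driver: http`, entries arrive as HTTP log entries; with
+		// `driver: tcp`, use msg.GetTcpLogs() instead.
+		for _, entry := range msg.GetHttpLogs().GetLogEntry() {
+			log.Printf("access log: %s", entry.GetRequest().GetPath())
+		}
+	}
+}
+
+func main() {
+	lis, err := net.Listen("tcp", ":3000") // must match the port in the LogService `service` field
+	if err != nil {
+		log.Fatal(err)
+	}
+	srv := grpc.NewServer()
+	alsv3.RegisterAccessLogServiceServer(srv, &als{})
+	log.Fatal(srv.Serve(lis))
+}
+```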
+
+## Example
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: LogService
+metadata:
+  name: als
+spec:
+  service: "als.default:3000"
+  driver: http
+  driver_config: {} # NB: driver_config must be set, even if it's empty
+  grpc: true # NB: grpc must be true
+  protocol_version: v3 # NB: required with $productName$ 3.x; see above
+```
+
+## Transport Protocol Migration
+
+> **Note:** The following information is only applicable to `LogServices` using the gRPC transport protocol.
+
+As of $productName$ version 2.3, the `v2` transport protocol is deprecated and any LogServices making use of it should migrate to `v3` before support for `v2` is removed in a future release.
+
+The following imports simply need to be updated to migrate a LogService:
+
+`v2` Imports:
+```
+ envoyCoreV2 "github.com/datawire/ambassador/pkg/api/envoy/api/v2/core"
+ envoyAccessLogV2 "github.com/datawire/ambassador/pkg/api/envoy/service/accesslog/v2"
+ envoyType "github.com/datawire/ambassador/pkg/api/envoy/type"
+```
+
+`v3` Imports:
+```
+ envoyCoreV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/config/core/v3"
+ envoyAccessLogV3 "github.com/datawire/ambassador/v2/pkg/api/envoy/service/accesslog/v3"
+ envoyType "github.com/datawire/ambassador/v2/pkg/api/envoy/type/v3"
+```
diff --git a/docs/emissary/latest/topics/running/services/rate-limit-service.md b/docs/emissary/latest/topics/running/services/rate-limit-service.md
new file mode 100644
index 000000000..39c2b0cef
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/rate-limit-service.md
@@ -0,0 +1,118 @@
+# Rate limit service
+
+Rate limiting is a powerful technique to improve the [availability and resilience of your services](https://blog.getambassador.io/rate-limiting-a-useful-tool-with-distributed-systems-6be2b1a4f5f4). In $productName$, each request can have one or more _labels_. These labels are exposed to a third-party service via a gRPC API. The third-party service can then rate limit requests based on the request labels.
+
+**Note that `RateLimitService` is only applicable to $OSSproductName$, and not $AESproductName$, as $AESproductName$ includes a built-in rate limit service.**
+
+## Request labels
+
+See [Attaching labels to requests](../../../using/rate-limits#attaching-labels-to-requests) for how to configure the labels that are attached to a request.
+
+## Domains
+
+In $productName$, each engineer (or team) can be assigned its own _domain_. A domain is a separate namespace for labels. By creating individual domains, each team can assign their own labels to a given request, and independently set the rate limits based on their own labels.
+
+See [Attaching labels to requests](../../../using/rate-limits/#attaching-labels-to-requests) for how to attach labels under different domains.
+
+## External rate limit service
+
+In order for $productName$ to rate limit, you need to implement a gRPC `RateLimitService`, as defined in [Envoy's `v3/rls.proto`]. If you do not have the time or resources to implement your own rate limit service, $AESproductName$ integrates a high-performance rate limiting service.
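+
+If you are writing your own, the sketch below shows the shape of a minimal gRPC `RateLimitService`. It assumes the upstream `go-control-plane` bindings for `rls.proto` (your project may vendor Envoy's protos differently), the "blocked" rule is a purely hypothetical stand-in for your business logic, and the `OK`/`OVER_LIMIT` semantics are described below:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+	"net"
+
+	pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v3"
+	"google.golang.org/grpc"
+)
+
+// server implements Envoy's v3 RateLimitService interface.
+type server struct {
+	pb.UnimplementedRateLimitServiceServer
+}
+
+func (s *server) ShouldRateLimit(ctx context.Context, req *pb.RateLimitRequest) (*pb.RateLimitResponse, error) {
+	// req.Domain and req.Descriptors carry the labels $productName$ attached
+	// to the request; apply your own business logic here.
+	code := pb.RateLimitResponse_OK
+	for _, d := range req.GetDescriptors() {
+		for _, entry := range d.GetEntries() {
+			// Hypothetical rule: reject anything labeled with a "blocked" generic_key.
+			if entry.GetKey() == "generic_key" && entry.GetValue() == "blocked" {
+				code = pb.RateLimitResponse_OVER_LIMIT
+			}
+		}
+	}
+	return &pb.RateLimitResponse{OverallCode: code}, nil
+}
+
+func main() {
+	lis, err := net.Listen("tcp", ":5000") // must match the port in the RateLimitService `service` field
+	if err != nil {
+		log.Fatal(err)
+	}
+	srv := grpc.NewServer()
+	pb.RegisterRateLimitServiceServer(srv, &server{})
+	log.Fatal(srv.Serve(lis))
+}
+```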
+
+[envoy's `v3/rls.proto`]: https://github.com/emissary-ingress/emissary/tree/master/api/envoy/service/ratelimit/v3/rls.proto
+
+$productName$ generates a gRPC request to the external rate limit service and provides a list of labels on which the rate limit service can base its decision to accept or reject the request:
+
+```
+[
+  {"source_cluster", ""},
+  {"destination_cluster", ""},
+  {"remote_address", ""},
+  {"generic_key", ""},
+  {"", ""}
+]
+```
+
+If $productName$ cannot contact the rate limit service, it will allow the request to be processed as if there were no rate limit service configuration.
+
+It is the external rate limit service's responsibility to determine whether rate limiting should take place, depending on custom business logic. The rate limit service must simply respond to the request with an `OK` or `OVER_LIMIT` code:
+
+- If Envoy receives an `OK` response from the rate limit service, then $productName$ allows the client request to resume being processed by the normal flow.
+- If Envoy receives an `OVER_LIMIT` response, then $productName$ will return an HTTP 429 response to the client and will end the transaction flow, preventing the request from reaching the backing service.
+
+The headers injected by the [AuthService](../auth-service) can also be passed to the rate limit service since the `AuthService` is invoked before the `RateLimitService`.
+
+## Configuring the rate limit service
+
+A `RateLimitService` manifest configures $productName$ to use an external service to check and enforce rate limits for incoming requests:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: RateLimitService
+metadata:
+  name: ratelimit
+spec:
+  service: 'example-rate-limit.default:5000'
+  protocol_version: v3 # default is v2; if upgrading from 2.x, you must set this to v3
+  failure_mode_deny: false # when set to true, Envoy will return a 500 error when unable to communicate with the RateLimitService
+```
+
+- `service` gives the URL of the rate limit service. If using a Kubernetes service, this should be the [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services) of that service.
+- `protocol_version` Allowed values are `v3` and `v2` (default). `protocol_version` was used in previous versions of $productName$ to control the protocol used by the gRPC service to communicate with the `RateLimitService`. $productName$ 3.x is running an updated version of Envoy that has dropped support for the `v2` protocol, so starting in 3.x, if `protocol_version` is not specified, the default value of `v2` will cause an error to be posted and a static response to be returned. Therefore, you must set it to `protocol_version: v3`. If upgrading from a previous version, you will want to set it to `v3` and ensure it is working before upgrading to Emissary-ingress 3.Y. The default value for `protocol_version` remains `v2` in the `getambassador.io/v3alpha1` CRD specifications to avoid making breaking changes outside of a CRD version change. Future versions of the CRDs will deprecate it.
+- `failure_mode_deny` By default, Envoy will fail open when unable to communicate with the service due to it becoming unavailable or due to timeouts. When this happens, the upstream service being protected by the rate limit may be overloaded. When set to `true`, Envoy will be configured to return a `500` status code when it is unable to communicate with the RateLimitService, failing closed by rejecting requests to the upstream service.
+
+You may only use a single `RateLimitService` manifest.
+
+## Rate limit service and TLS
+
+You can tell $productName$ to use TLS to talk to your service by giving the `service` in the `RateLimitService` an `https://` prefix. However, you may also provide a `tls` attribute: if `tls` is present and `true`, $productName$ will originate TLS even if the `service` does not have the `https://` prefix.
+
+If `tls` is present with a value that is not `true`, the value is assumed to be the name of a defined TLS context, which will determine the certificate presented to the upstream service.
+
+## Example
+
+The [$OSSproductName$ Rate Limiting Tutorial](../../../../howtos/rate-limiting-tutorial) has a simple rate limiting example. For a more advanced example, read the [advanced rate limiting tutorial](../../../../../2.0/howtos/advanced-rate-limiting), which uses the rate limit service that is integrated with $AESproductName$.
+
+## Further reading
+
+- [Rate limiting: a useful tool with distributed systems](https://blog.getambassador.io/rate-limiting-a-useful-tool-with-distributed-systems-6be2b1a4f5f4)
+- [Rate limiting for API Gateways](https://blog.getambassador.io/rate-limiting-for-api-gateways-892310a2da02)
+- [Implementing a Java Rate Limiting Service for $productName$](https://blog.getambassador.io/implementing-a-java-rate-limiting-service-for-the-ambassador-api-gateway-e09d542455da)
+- [Designing a Rate Limit Service for $productName$](https://blog.getambassador.io/designing-a-rate-limiting-service-for-ambassador-f460e9fabedb)
diff --git a/docs/emissary/latest/topics/running/services/tracing-service.md b/docs/emissary/latest/topics/running/services/tracing-service.md
new file mode 100644
index 000000000..b46e68708
--- /dev/null
+++ b/docs/emissary/latest/topics/running/services/tracing-service.md
@@ -0,0 +1,139 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Tracing Service
+
+Applications that consist of multiple services can be difficult to debug, as a single request can span multiple services. Distributed tracing tells the story of your request as it is processed through your system. Distributed tracing is a powerful tool to debug and analyze your system in addition to request logging and metrics.
+
+When enabled, the `TracingService` will instruct $productName$ to initiate a trace on requests by generating and populating an `x-request-id` HTTP header. Services can make use of this `x-request-id` header in logging and forward it in downstream requests for tracing. $productName$ also integrates with external trace visualization services, including Zipkin-compatible APIs such as [Zipkin](https://zipkin.io/) and [Jaeger](https://github.com/jaegertracing/) to allow you to store and visualize traces. You can read further on [Envoy's Tracing capabilities](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/tracing).
+
+A `TracingService` manifest configures $productName$ to use an external trace visualization service:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TracingService
+metadata:
+  name: tracing
+spec:
+  service: "example-zipkin:9411"
+  driver: zipkin
+  config: {}
+  custom_tags: # optional
+  - tag: host
+    request_header:
+      name: ":authority"
+      default_value: "unknown"
+  - tag: path
+    request_header:
+      name: ":path"
+      default_value: "unknown"
+  sampling:
+    overall: 100
+```
+
+| Field | Description | Values |
+| --------- | ----------- | ------------- |
+| `service` | gives the URL of the external HTTP trace service. | ex. `example-zipkin:9411` |
+| `driver` | provides the driver information that handles communicating with the service | enum:<br/>`zipkin`<br/>`datadog`<br/>`opentelemetry` |
+| `config` | provides additional configuration options for the selected `driver`. Supported configuration for each driver is found below. | |
+| `tag_headers` | **Deprecated** - it is recommended that you switch to using `custom_tags` | |
+| `custom_tags` | configures tags to attach to traces. See the section below for more details. | |
+| `propagation_modes` | (optional) if present, specifies a list of the propagation modes to be used | enum:<br/>`ENVOY`<br/>`B3`<br/>`TRACE_CONTEXT` |
+| `sampling` | (optional) if present, specifies target percentages of requests that will be traced. | |
+
+Please note that you must use the HTTP/2 pseudo-header names. For example:
+
+- the `host` header should be specified as the `:authority` header; and
+- the `method` header should be specified as the `:method` header.
+
+
+$productName$ supports a single Global TracingService which is configured during Envoy bootstrap. $productName$ must be restarted for changes to the TracingService manifest to take effect. If you have multiple instances of $productName$ in your cluster, ensure [ambassador_id](../../running#ambassador_id) is set correctly in the TracingService manifest.
+
+
+## Supported Tracing Drivers
+
+The `TracingService` currently supports the following drivers:
+
+- `zipkin`
+- `datadog`
+- `opentelemetry`
+
+
+In Envoy 1.24, support for the LightStep driver was removed. As of $productName$ 3.4.0, the TracingService no longer supports the lightstep tracing driver. If you are currently using the native Lightstep tracing driver, please refer to Distributed Tracing with Open Telemetry and LightStep.
+
+
+
+In $productName$ 3.5.0, support for Envoy's native OpenTelemetry driver was added to the TracingService. Envoy still considers this driver experimental.
+
+
+## Sampling
+
+Configuring `sampling` specifies target percentages of requests that will be traced. This is beneficial for high-volume services to control the amount of tracing data collected. Sampling can be configured with the following fields:
+
+- `client`: percentage of requests that will be force traced if the `x-client-trace-id` header is set. Defaults to 100.
+- `random`: percentage of requests that will be randomly traced. Defaults to 100.
+- `overall`: percentage of requests that will be traced after all other checks have been applied (force tracing, sampling, etc.). This field functions as an upper limit on the total configured sampling rate. For instance, setting `client` to `100%` but `overall` to `1%` will result in only `1%` of client requests with the appropriate headers being force traced. Defaults to 100.
+
+## Custom Tags and Tag Headers
+
+When collecting traces, $productName$ attaches tags to the generated spans, which are useful for observability. See the [Envoy Tracing Docs](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/tracing#what-data-each-trace-contains) for the default list of data collected.
+
+Previous versions of $productName$ only supported adding additional tags through the use of the `tag_headers` field. This field is now **deprecated**; it is recommended to use `custom_tags`, which supports a more powerful set of features for adding additional tags to a span.
+
+
+If both `tag_headers` and `custom_tags` are set, then `tag_headers` will be ignored.
+
+
+`custom_tags` provides support for configuring additional tags based on [Envoy Custom Tags](https://www.envoyproxy.io/docs/envoy/latest/api-v3/type/tracing/v3/custom_tag.proto%23custom-tag). The following custom tag kinds are supported:
+
+- `request_header` - set a tag from a header in the request
+- `environment` - set a tag from an environment variable
+- `literal` - set a tag based on a configured literal value
+
+Each `custom_tags` entry supports setting one of `request_header`, `literal`, or `environment`. Each tag should have its own entry in `custom_tags`.
+
+For example:
+
+```yaml
+custom_tags:
+  - tag: host
+    request_header:
+      name: ":authority"
+      default_value: "unknown host" # optional
+  - tag: path
+    request_header:
+      name: ":path"
+      default_value: "unknown path" # optional
+  - tag: cluster
+    literal:
+      value: "us-east-cluster"
+  - tag: nodeID
+    environment:
+      name: SERVER_ID
+      default_value: "unknown" # optional
+```
+
+## Zipkin Driver Configurations
+
+- `collector_endpoint` gives the API endpoint of the Zipkin service where the spans will be sent. The default value is `/api/v2/spans`.
+- `collector_endpoint_version` gives the transport version used when sending data to your Zipkin collector. The default value is `HTTP_JSON` and it must be one of `HTTP_JSON` or `HTTP_PROTO`.
+- `collector_endpoint_hostname` sets the hostname Envoy will use when sending data to your Zipkin collector. The default value is the name of the underlying Envoy cluster.
+- `trace_id_128bit` whether a 128-bit `trace id` will be used when creating a new trace instance. Defaults to `true`. Setting to `false` will result in a 64-bit trace id being used.
+- `shared_span_context` whether client and server spans will share the same `span id`. The default value is `true`.
+
+## Datadog Driver Configurations
+
+- `service_name` the name of the service which is attached to the traces. The default value is `ambassador`.
+
+## OpenTelemetry Driver Configurations
+
+- `service_name` the name of the service which is attached to traces. The default value is `ambassador`.
+
+## Example
+
+Check out the [DataDog](../../../../howtos/tracing-datadog) and [Zipkin](../../../../howtos/tracing-zipkin) HOWTOs.
diff --git a/docs/emissary/latest/topics/running/statistics/8877-metrics.md b/docs/emissary/latest/topics/running/statistics/8877-metrics.md
new file mode 100644
index 000000000..94bd20438
--- /dev/null
+++ b/docs/emissary/latest/topics/running/statistics/8877-metrics.md
@@ -0,0 +1,64 @@
+# The metrics endpoint
+
+> For an overview of other options for gathering statistics on
+> $productName$, see the [Statistics and Monitoring](../) overview.
+
+Each $productName$ pod exposes statistics and metrics for that pod at `http://{POD}:8877/metrics`. The response is in the text-based Prometheus [exposition format][].
+
+[exposition format]: https://prometheus.io/docs/instrumenting/exposition_formats/
+
+## Understanding the statistics
+
+The Prometheus exposition format includes special "HELP" lines that make the file self-documenting as to what specific statistics mean.
+
+
+- `envoy_*`: See the [Envoy documentation][`GET /stats/prometheus`].
+- `ambassador_*`:
+  - `ambassador_edge_stack_*` (not present in $OSSproductName$):
+    - `ambassador_edge_stack_go_*`: See [`prometheus.NewGoCollector`][].
+    - `ambassador_edge_stack_promhttp_*`: See [`promhttp.Handler()`][].
+    - `ambassador_edge_stack_process_*`: See [`prometheus.NewProcessCollector`][].
+  - `ambassador_*_time_seconds` (for `*` = one of `aconf`, `diagnostics`, `econf`, `fetcher`, `ir`, or `reconfiguration`): Gauges of how long the various core operations take in the diagd process.
+  - `ambassador_diagnostics_(errors|notices)`: The number of diagnostics errors and notices that would be shown in the diagnostics UI or the Edge Policy Console.
+  - `ambassador_diagnostics_info`: [Info][`prometheus_client.Info`] about the $productName$ install; all information is presented in labels; the value of the Gauge is always "1".
+  - `ambassador_process_*`: See [`prometheus_client.ProcessCollector`][].
+
+[`GET /stats/prometheus`]: https://www.envoyproxy.io/docs/envoy/v1.23.0/operations/admin.html#get--stats-prometheus
+[`prometheus.NewProcessCollector`]: https://godoc.org/github.com/prometheus/client_golang/prometheus#NewProcessCollector
+[`prometheus.NewGoCollector`]: https://godoc.org/github.com/prometheus/client_golang/prometheus#NewGoCollector
+[`promhttp.Handler()`]: https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp#Handler
+[`prometheus_client.Info`]: https://github.com/prometheus/client_python#info
+[`prometheus_client.ProcessCollector`]: https://github.com/prometheus/client_python#process-collector
+
+## Polling the `:8877/metrics` endpoint with Prometheus
+
+To scrape metrics directly, follow the instructions for [Monitoring with Prometheus and Grafana](../../../../howtos/prometheus).
+
+### Using Grafana to visualize statistics gathered by Prometheus
+
+#### Sample dashboard
+
+We provide a [sample Grafana dashboard](https://grafana.com/grafana/dashboards/4698-ambassador-edge-stack/) that displays information collected by Prometheus from the `:8877/metrics` endpoint.
+
+![Screenshot of a Grafana dashboard that displays just information from Envoy](../../../images/grafana.png)
diff --git a/docs/emissary/latest/topics/running/statistics/envoy-statsd.md b/docs/emissary/latest/topics/running/statistics/envoy-statsd.md
new file mode 100644
index 000000000..7cbcc2083
--- /dev/null
+++ b/docs/emissary/latest/topics/running/statistics/envoy-statsd.md
@@ -0,0 +1,109 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Envoy statistics with StatsD
+
+> For an overview of other options for gathering statistics on
+> $productName$, see the [Statistics and Monitoring](../) overview.
+
+At the core of $productName$ is [Envoy Proxy], which has built-in support for exporting a multitude of statistics about its own operations to StatsD (or to the modified DogStatsD used by Datadog).
+
+[Envoy Proxy]: https://www.envoyproxy.io
+
+When enabled, $productName$ has Envoy expose this information via the [StatsD](https://github.com/etsy/statsd) protocol. To enable this, simply set the environment variable `STATSD_ENABLED=true` in $productName$'s deployment YAML:
+
+```diff
+  spec:
+    containers:
+    - env:
++     - name: STATSD_ENABLED
++       value: "true"
+      - name: AMBASSADOR_NAMESPACE
+        valueFrom:
+          fieldRef:
+```
+
+When this variable is set, $productName$ by default sends statistics to a Kubernetes service named `statsd-sink` on UDP port 8125 (the usual port of the StatsD protocol). You may instead tell $productName$ to send the statistics to a different StatsD server by setting the `STATSD_HOST` environment variable. This can be useful if you have an existing StatsD sink available in your cluster.
+
+We have included a few example configurations in [the `statsd-sink/` directory](https://github.com/emissary-ingress/emissary/tree/master/deployments/statsd-sink) to help you get started. Clone or download the repository to get local, editable copies and open a terminal window in the `emissary/deployments/` folder.
+
+## Using Graphite as the StatsD sink
+
+[Graphite] is a web-based real-time graphing system. Spin up an example Graphite setup:
+
+[Graphite]: http://graphite.readthedocs.org/
+
+```
+kubectl apply -f statsd-sink/graphite/graphite-statsd-sink.yaml
+```
+
+This sets up the `statsd-sink` service and a deployment that contains Graphite and its related infrastructure.
+Graphite's web interface is available at `http://statsd-sink/` from within the cluster. Use port forwarding to access the interface from your local machine:
+
+```
+SINKPOD=$(kubectl get pod -l service=statsd-sink -o jsonpath="{.items[0].metadata.name}")
+kubectl port-forward $SINKPOD 8080:80
+```
+
+This sets up Graphite access at `http://localhost:8080/`.
+
+## Using Datadog DogStatsD as the StatsD sink
+
+If you are a user of the [Datadog] monitoring system, pulling in the Envoy statistics from $productName$ is very easy.
+
+[Datadog]: https://www.datadoghq.com/
+
+Because the DogStatsD protocol is slightly different from the normal StatsD protocol, in addition to setting $productName$'s `STATSD_ENABLED=true` environment variable, you also need to set the `DOGSTATSD=true` environment variable:
+
+```diff
+  spec:
+    containers:
+    - env:
++     - name: STATSD_ENABLED
++       value: "true"
++     - name: DOGSTATSD
++       value: "true"
+      - name: AMBASSADOR_NAMESPACE
+        valueFrom:
+          fieldRef:
+```
+
+Then, you will need to deploy the DogStatsD agent into your cluster to act as the StatsD sink. To do this, replace the sample API key in our [sample YAML file][`dd-statsd-sink.yaml`] with your own, then apply that YAML:
+
+[`dd-statsd-sink.yaml`]: https://github.com/emissary-ingress/emissary/blob/master/deployments/statsd-sink/datadog/dd-statsd-sink.yaml
+
+```
+kubectl apply -f statsd-sink/datadog/dd-statsd-sink.yaml
+```
+
+This sets up the `statsd-sink` service and a deployment of the DogStatsD agent that forwards the $productName$ statistics to your Datadog account.
+
+Additionally, $productName$ supports setting the `dd.internal.entity_id` statistics tag using the `DD_ENTITY_ID` environment variable. If this value is set, statistics will be tagged with the value of the environment variable. Otherwise, this statistics tag will be omitted (the default).
diff --git a/docs/emissary/latest/topics/running/statistics/index.md b/docs/emissary/latest/topics/running/statistics/index.md
new file mode 100644
index 000000000..ab44009f7
--- /dev/null
+++ b/docs/emissary/latest/topics/running/statistics/index.md
@@ -0,0 +1,84 @@
+# Statistics and monitoring
+
+$productName$ collects many statistics internally, and makes it easy to direct this information to a statistics and monitoring tool of your choice.
+
+As an example, here are some interesting statistics to investigate:
+
+- `upstream_rq_total` is the total number of requests that a particular service has received via $productName$. The rate of change of this value is one basic measure of service utilization, i.e. requests per second.
+- `upstream_rq_xx` is the total number of requests to which a service responded with a given status code. This value divided by the prior one, taken on a rolling window basis, represents the recent response rate of the service. There are corresponding classes for `2xx`, `3xx`, `4xx` and `5xx` counters that can help clarify the nature of responses.
+- `upstream_rq_time` is a Prometheus histogram or StatsD timer that tracks the latency in milliseconds of a given service from $productName$'s perspective.
+
+## Overriding Statistics Names
+
+The optional `stats_name` element of every CRD that references a service (`Mapping`, `TCPMapping`, `AuthService`, `LogService`, `RateLimitService`, and `TracingService`) can override the name under which cluster statistics are logged.
+If not set, the default is the `service` value, with non-alphanumeric characters replaced by underscores:
+
+- `service: foo` will just use `foo`
+- `service: foo:8080` will use `foo_8080`
+- `service: http://foo:8080` will use `http___foo_8080`
+- `service: foo.othernamespace` will use `foo_othernamespace`
+
+The last example is worth special mention: a resource in a different namespace than the one in which $productName$ is running will automatically be qualified with the namespace of the resource itself. So, for example, if $productName$ is running in the `ambassador` namespace, and this `Mapping` is present in the `default` namespace:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: default-mapping
+  namespace: default
+spec:
+  prefix: /default/
+  service: default-service
+```
+
+then the `service` will be qualified to `default-service.default`, so the `stats_name` will be `default_service_default` rather than simply `default_service`. To change this behavior, set `stats_name` explicitly.
+
+## Monitoring Statistics
+
+There are several ways to get different statistics out of $productName$:
+
+- [The `:8877/metrics` endpoint](./8877-metrics) can be polled for aggregated statistics (in a Prometheus-compatible format). This is our recommended method as both Envoy metrics and $productName$ control plane metrics are collected.
+- $productName$ can push [Envoy statistics](./envoy-statsd) over the StatsD or DogStatsD protocol.
+
+## The Four Golden Signals
+
+The [Four Golden Signals](https://sre.google/sre-book/monitoring-distributed-systems/) are four generally-accepted metrics that are important to monitor for good information about service health:
+
+### Latency
+
+The time it takes to service a request. Envoy provides a histogram of the time taken by individual requests, which makes an effective latency metric. In StatsD, this stat would be expressed as `cluster.$name.upstream_rq_time`. In Prometheus format, this metric would be expressed as `envoy_cluster_upstream_rq_time_bucket{envoy_cluster_name="$name"}`.
+
+### Traffic
+
+The amount of demand being placed on your system. Envoy provides a gauge of the number of active outstanding requests, which can be a good proxy for traffic. In StatsD, this stat would be expressed as `cluster.$name.upstream_rq_active`. In Prometheus format, this metric would be expressed as `envoy_cluster_upstream_rq_active{envoy_cluster_name="$name"}`.
+
+### Errors
+
+The number of failing requests. Some errors (e.g. a request succeeds, but gives the wrong answer) can only be detected by application-specific monitoring; however, many errors can be spotted simply by looking at the HTTP status code of requests, and monitoring status codes over time shows the error rate. In StatsD, `cluster.$name.upstream_rq_5xx` is a counter of HTTP `5xx` responses. In Prometheus, `envoy_cluster_upstream_rq_xx{envoy_response_code_class="5", envoy_cluster_name="$name"}` is a counter of HTTP `5xx` responses.
+
+### Saturation
+
+The hardest metric to measure, saturation describes how much of the total capability of the system to respond to requests is being used. Fully measuring saturation often requires application-specific monitoring, but looking at the 99th percentile of latency over a short window - perhaps a minute - can often give an early indication of saturation problems.
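+
+To make these signals concrete, here is a sketch of Prometheus query expressions built from the metric names above. `$name` stands in for your Envoy cluster name, and exact metric names can vary with the Envoy version, so treat these as starting points:
+
+```
+# Latency: 99th-percentile request time (ms) over a one-minute window
+histogram_quantile(0.99, sum(rate(envoy_cluster_upstream_rq_time_bucket{envoy_cluster_name="$name"}[1m])) by (le))
+
+# Traffic: requests per second
+sum(rate(envoy_cluster_upstream_rq_total{envoy_cluster_name="$name"}[1m]))
+
+# Errors: fraction of requests receiving a 5xx response
+sum(rate(envoy_cluster_upstream_rq_xx{envoy_response_code_class="5", envoy_cluster_name="$name"}[1m]))
+  / sum(rate(envoy_cluster_upstream_rq_total{envoy_cluster_name="$name"}[1m]))
+```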
diff --git a/docs/emissary/latest/topics/running/tls/cleartext-redirection.md b/docs/emissary/latest/topics/running/tls/cleartext-redirection.md
new file mode 100644
index 000000000..7144b1a38
--- /dev/null
+++ b/docs/emissary/latest/topics/running/tls/cleartext-redirection.md
@@ -0,0 +1,76 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Cleartext support
+
+While most modern web applications choose to encrypt all traffic, there remain cases where supporting cleartext communications is important. $productName$ supports both forcing [automatic redirection to HTTPS](#http-https-redirection) and [serving cleartext](#cleartext-routing) traffic on a `Host`.
+
+ The Listener and Host CRDs work together to manage HTTP and HTTPS routing. This document is meant as a quick reference to the Host resource: for a more complete treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
+
+## Cleartext Routing
+
+To allow cleartext to be routed, set the `requestPolicy.insecure.action` of a `Host` to `Route`:
+
+```yaml
+requestPolicy:
+  insecure:
+    action: Route
+```
+
+This allows routing for both HTTP and HTTPS, or for HTTP only, depending on the `tlsSecret` configuration:
+
+- If the `Host` does not specify a `tlsSecret`, it will only route HTTP, not terminating TLS at all.
+- If the `Host` does specify a `tlsSecret`, it will route both HTTP and HTTPS.
+
+ The Listener and Host CRDs work together to manage HTTP and HTTPS routing. This document is meant as a quick reference to the Host resource: for a more complete treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
+
+## HTTP->HTTPS redirection
+
+Most websites that force HTTPS will also automatically redirect any requests that come in over HTTP:
+
+```
+Client                        $productName$
+|                             |
+| http://<hostname>/api       |
+| --------------------------> |
+|                             |
+| 301: https://<hostname>/api |
+| <-------------------------- |
+|                             |
+| https://<hostname>/api      |
+| --------------------------> |
+|                             |
+```
+
+In $productName$, this is configured by setting the `insecure.action` in a `Host` to `Redirect`.
+
+```yaml
+requestPolicy:
+  insecure:
+    action: Redirect
+```
+
+$productName$ determines which requests are secure and which are insecure using the `securityModel` of the [`Listener`] that accepts the request.
+
+[`Listener`]: ../../listener
+
+ The Listener and Host CRDs work together to manage HTTP and HTTPS routing. This document is meant as a quick reference to the Host resource: for a more complete treatment of handling cleartext and HTTPS, see Configuring $productName$ Communications.
diff --git a/docs/emissary/latest/topics/running/tls/index.md b/docs/emissary/latest/topics/running/tls/index.md
new file mode 100644
index 000000000..850fb5c07
--- /dev/null
+++ b/docs/emissary/latest/topics/running/tls/index.md
@@ -0,0 +1,487 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Transport Layer Security (TLS)
+
+$productName$'s robust TLS support exposes configuration options for many different TLS use cases, using the [`Host`](#host) and [`TLSContext`](#host-and-tlscontext) resources in concert.
+
+## Certificates and Secrets
+
+Properly-functioning TLS requires the use of [TLS certificates] to prove that the various systems communicating are who they say they are. At minimum, $productName$ must have a server certificate that identifies it to clients; when [mTLS] or [client certificate authentication] are in use, additional certificates are needed.
+
+You supply certificates to $productName$ in Kubernetes [TLS Secrets]. These Secrets _must_ contain valid X.509 certificates with valid PKCS1, PKCS8, or Elliptic Curve private keys. If a Secret does not contain a valid certificate, an error message will be logged, for example:
+
+```
+tls-broken-cert.default.1 2 errors:; 1. K8sSecret secret tls-broken-cert.default tls.key cannot be parsed as PKCS1 or PKCS8: asn1: syntax error: data truncated; 2. K8sSecret secret tls-broken-cert.default tls.crt cannot be parsed as x.509: x509: malformed certificate
+```
+
+If you set the `AMBASSADOR_FORCE_SECRET_VALIDATION` environment variable, the invalid Secret will be rejected, and a `Host` or `TLSContext` resource attempting to use an invalid certificate will be disabled entirely. **Note** that in $productName$ $version$, this includes disabling cleartext communication for such a `Host`.
+
+[TLS Certificates]: https://protonmail.com/blog/tls-ssl-certificate/
+[mTLS]: mtls
+[client certificate authentication]: ../../../howtos/client-cert-validation/
+[TLS Secrets]: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets
+
+## `Host`
+
+A `Host` represents a domain in $productName$ and defines how the domain manages TLS. For more information on the Host resource, see [The Host CRD reference documentation](../host-crd).
+
+**If no `Host`s are present**, $productName$ synthesizes a `Host` that allows only cleartext routing. You will need to explicitly define `Host`s to enable TLS termination.
+
+ The examples below do not define a requestPolicy; however, most real-world usage of $productName$ will require defining the requestPolicy.
+
+ For more information, please refer to the Host documentation. +
+
+### Bring your own certificate
+
+The `Host` can read a certificate from a Kubernetes Secret and use that certificate to terminate TLS on a domain.
+
+The following example shows the certificate contained in the Kubernetes Secret named `host-secret` configured to have $productName$ terminate TLS on the `host.example.com` domain:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  tlsSecret:
+    name: host-secret
+```
+
+By default, `tlsSecret` will only look for the named secret in the same namespace as the `Host`. In the above example, the secret `host-secret` will need to exist within the `default` namespace since that is the namespace of the `Host`.
+
+To reference a secret that is in a different namespace from the `Host`, the `namespace` field is required. The example below configures the `Host` to use the `host-secret` secret from the `example` namespace.
+
+```yaml
+---
+apiVersion: getambassador.io/v2
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  acmeProvider:
+    authority: none
+  tlsSecret:
+    name: host-secret
+    namespace: example
+```
+
+
+ The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the Secret and completely disable the `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above.
+
+
+### Advanced TLS configuration with the `Host`
+
+You can specify TLS configuration directly in the `Host` via the `tls` field. This is the recommended method for doing more advanced TLS configuration on a single `Host`.
+
+For example, the configuration to enforce a minimum TLS version on the `Host` looks as follows:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  tlsSecret:
+    name: min-secret
+  tls:
+    min_tls_version: v1.2
+```
+
+
+ The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the Secret and completely disable the `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above.
+
+
+The following fields are accepted in the `tls` field:
+```yaml
+tls:
+  cert_chain_file:   # string
+  private_key_file:  # string
+  ca_secret:         # string
+  cacert_chain_file: # string
+  alpn_protocols:    # string
+  cert_required:     # bool
+  min_tls_version:   # string
+  max_tls_version:   # string
+  cipher_suites:     # array of strings
+  ecdh_curves:       # array of strings
+  sni:               # string
+  crl_secret:        # string
+```
+
+These fields have the same function as in the [`TLSContext`](#tlscontext) resource, as described below.
+
+### `Host` and `TLSContext`
+
+You can link a `Host` to a [`TLSContext`](#tlscontext) instead of defining `tls` settings in the `Host` itself. This is primarily useful for sharing settings between multiple `Host`s.
+
+#### Link a `TLSContext` to the `Host`
+
+ It is invalid to use both the tls setting and the tlsContext setting on the same Host. The recommended approach is to use the tls setting unless you have multiple Hosts that need to share TLS configuration.
+
+To link a [`TLSContext`](#tlscontext) with a `Host`, create a [`TLSContext`](#tlscontext) with the desired configuration and link it to the `Host` by setting the `tlsContext.name` field in the `Host`. For example, to enforce a minimum TLS version on the `Host` above, create a `TLSContext` with any name and the following configuration:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: min-tls-context
+spec:
+  hosts:
+  - host.example.com
+  secret: min-secret
+  min_tls_version: v1.2
+```
+
+Next, link it to the `Host` via the `tlsContext` field as shown:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Host
+metadata:
+  name: example-host
+spec:
+  hostname: host.example.com
+  tlsSecret:
+    name: min-secret
+  tlsContext:
+    name: min-tls-context
+```
+
+
+ The `Host` and the `TLSContext` must name the same Kubernetes Secret; if not, $productName$ will disable TLS for the `Host`.
+
+
+ The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the Secret and completely disable the `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above.
+
+
+ The `Host`'s `hostname` and the `TLSContext`'s `hosts` must have compatible settings. If they do not, requests may not be accepted.
+
+
+See [`TLSContext`](#tlscontext) below for a full description of these fields.
+
+#### Create a `TLSContext` with the name `{{AMBASSADORHOST}}-context` (DEPRECATED)
+
+
+ This implicit TLSContext linkage is deprecated and will be removed in a future version of $productName$; it is not recommended for new configurations. Any other TLS configuration in the Host will override this implicit TLSContext link.
+
+
+The `Host` will implicitly link to the `TLSContext` when a `TLSContext` exists with the following:
+
+- the name `{{AMBASSADORHOST}}-context`
+- `hosts` in the `TLSContext` set to the same value as `hostname` in the `Host`, and
+- `secret` in the `TLSContext` set to the same value as `tlsSecret` in the `Host`
+
+**As noted above, this implicit linking is deprecated.**
+
+For example, another way to enforce a minimum TLS version on the `Host` above would be to simply create the `TLSContext` with the name `example-host-context` and then not modify the `Host`:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: example-host-context
+spec:
+  hosts:
+  - host.example.com
+  secret: host-secret
+  min_tls_version: v1.2
+```
+
+
+ The `Host` and the `TLSContext` must name the same Kubernetes Secret; if not, $productName$ will disable TLS for the `Host`.
+
+
+ The Kubernetes Secret named by tlsSecret must contain a valid TLS certificate. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the Secret and completely disable the `Host`; see [**Certificates and Secrets**](#certificates-and-secrets) above.
+
+
+ The `Host`'s `hostname` and the `TLSContext`'s `hosts` must have compatible settings. If they do not, requests may not be accepted.
+
+
+Full reference for all options available to the `TLSContext` can be found [below](#tlscontext).
+
+## TLSContext
+
+The `TLSContext` is used to configure advanced TLS options in $productName$. Remember, a `TLSContext` must always be paired with a `Host`.
+ +A full schema of the `TLSContext` can be found below with descriptions of the +different configuration options. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: example-host-context +spec: + # 'hosts' defines the hosts for which this TLSContext is relevant. + # It ties into SNI. A TLSContext without "hosts" is useful only for + # originating TLS. + # type: array of strings + # + # hosts: [] + + # 'sni' defines the SNI string to use on originated connections. + # type: string + # + # sni: None + + # 'secret' defines a Kubernetes Secret that contains the TLS certificate we + # use for origination or termination. If not specified, $productName$ will look + # at the value of cert_chain_file and private_key_file. + # type: string + # + # secret: None + + # 'ca_secret' defines a Kubernetes Secret that contains the TLS certificate we + # use for verifying incoming TLS client certificates. + # type: string + # + # ca_secret: None + + # Tells $productName$ whether to interpret a "." in the secret name as a "." or + # a namespace identifier. + # type: boolean + # + # secret_namespacing: true + + # 'cert_required' can be set to true to _require_ TLS client certificate + # authentication. + # type: boolean + # + # cert_required: false + + # 'alpn_protocols' is used to enable the TLS ALPN protocol. It is required + # if you want to do GRPC over TLS; typically it will be set to "h2" for that + # case. + # type: string (comma-separated list) + # + # alpn_protocols: None + + # 'min_tls_version' sets the minimum acceptable TLS version: v1.0, v1.1, + # v1.2, or v1.3. It defaults to v1.0. + # min_tls_version: v1.0 + + # 'max_tls_version' sets the maximum acceptable TLS version: v1.0, v1.1, + # v1.2, or v1.3. It defaults to v1.3. + # max_tls_version: v1.3 + + # Tells $productName$ to load TLS certificates from a file in its container. + # type: string + # + # cert_chain_file: None + # private_key_file: None + # cacert_chain_file: None +``` + + + + `secret` and (if used) `ca_secret` must specify Kubernetes Secrets containing valid TLS + certificates. If `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and either Secret contains + an invalid certificate, $productName$ will reject the Secret, which will also completely + disable any `Host` using the `TLSContext`; see [**Certificates and Secrets**](#certificates-and-secrets) + above. + + + +### ALPN protocols + +The `alpn_protocols` setting configures the TLS ALPN protocol. To use gRPC over +TLS, set `alpn_protocols: h2`. If you need to support HTTP/2 upgrade from +HTTP/1, set `alpn_protocols: h2,http/1.1` in the configuration. + +#### HTTP/2 support + +The `alpn_protocols` setting is also required for HTTP/2 support. + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: tls +spec: + secret: ambassador-certs + hosts: ["*"] + alpn_protocols: h2[, http/1.1] +``` +Without setting alpn_protocols as shown above, HTTP2 will not be available via +negotiation and will have to be explicitly requested by the client. + +If you leave off http/1.1, only HTTP2 connections will be supported. + +### TLS parameters + +The `min_tls_version` setting configures the minimum TLS protocol version that +$productName$ will use to establish a secure connection. When a client +using a lower version attempts to connect to the server, the handshake will +result in the following error: `tls: protocol version not supported`. 
+
+The `max_tls_version` setting configures the maximum TLS protocol version that $productName$ will use to establish a secure connection. When a client using a higher version attempts to connect to the server, the handshake will result in the following error: `tls: server selected unsupported protocol version`.
+
+The `cipher_suites` setting configures the supported ciphers, listed below, using the [configuration parameters for BoringSSL](https://commondatastorage.googleapis.com/chromium-boringssl-docs/ssl.h.html#Cipher-suite-configuration) when negotiating a TLS 1.0-1.2 connection. This setting has no effect when negotiating a TLS 1.3 connection. When a client does not support a matching cipher, a handshake error will result.
+
+The `ecdh_curves` setting configures the supported ECDH curves when negotiating a TLS connection. When a client does not support a matching ECDH curve, a handshake error will result.
+
+```
+ - AES128-SHA
+ - AES256-SHA
+ - AES128-GCM-SHA256
+ - AES256-GCM-SHA384
+ - ECDHE-RSA-AES128-SHA
+ - ECDHE-RSA-AES256-SHA
+ - ECDHE-RSA-AES128-GCM-SHA256
+ - ECDHE-RSA-AES256-GCM-SHA384
+ - ECDHE-RSA-CHACHA20-POLY1305
+ - ECDHE-ECDSA-AES128-SHA
+ - ECDHE-ECDSA-AES256-SHA
+ - ECDHE-ECDSA-AES128-GCM-SHA256
+ - ECDHE-ECDSA-AES256-GCM-SHA384
+ - ECDHE-ECDSA-CHACHA20-POLY1305
+ - ECDHE-PSK-AES128-CBC-SHA
+ - ECDHE-PSK-AES256-CBC-SHA
+ - ECDHE-PSK-CHACHA20-POLY1305
+ - PSK-AES128-CBC-SHA
+ - PSK-AES256-CBC-SHA
+ - DES-CBC3-SHA
+```
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: tls
+spec:
+  hosts: ["*"]
+  secret: ambassador-certs
+  min_tls_version: v1.0
+  max_tls_version: v1.3
+  cipher_suites:
+  - "[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]"
+  - "[ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]"
+  ecdh_curves:
+  - X25519
+  - P-256
+```
+
+
+The `crl_secret` field allows you to reference a Kubernetes Secret that contains a certificate revocation list. If specified, $productName$ will verify that the presented peer certificate has not been revoked by this CRL even if it is otherwise valid. This provides a way to reject certificates before they expire or if they become compromised. The `crl_secret` field takes a PEM-formatted [Certificate Revocation List](https://en.wikipedia.org/wiki/Certificate_revocation_list) in a `crl.pem` entry.
+
+Note that if a CRL is provided for any certificate authority in a trust chain, a CRL must be provided for all certificate authorities in that chain. Failure to do so will result in verification failure for both revoked and unrevoked certificates from that chain.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: tls-crl
+spec:
+  hosts: ["*"]
+  secret: ambassador-certs
+  min_tls_version: v1.0
+  max_tls_version: v1.3
+  crl_secret: 'ambassador-crl'
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: ambassador-crl
+  namespace: ambassador
+type: Opaque
+data:
+  crl.pem: |
+    {BASE64 CRL CONTENTS}
+---
+```
diff --git a/docs/emissary/latest/topics/running/tls/mtls.md b/docs/emissary/latest/topics/running/tls/mtls.md
new file mode 100644
index 000000000..1b039cf85
--- /dev/null
+++ b/docs/emissary/latest/topics/running/tls/mtls.md
@@ -0,0 +1,88 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Mutual TLS (mTLS)
+
+Many organizations have security concerns that require that all network traffic throughout their cluster be encrypted.
+With traditional architectures, this was not an especially complicated requirement, since internal network traffic was fairly minimal. With microservices, we are making many more requests over the network that must all be authenticated and secured.
+
+In order for services to authenticate with each other, they will each need to provide a certificate and key that the other trusts before establishing a connection. This action of both the client and server providing and validating certificates is referred to as mutual TLS.
+
+## mTLS with $productName$
+
+Since $productName$ is a reverse proxy acting as the entry point to your cluster, it acts as the client when proxying requests to upstream services.
+
+It is trivial to configure $productName$ to simply originate TLS connections as the client to upstream services by setting `service: https://{{UPSTREAM_SERVICE}}` in the `Mapping` configuration. However, in order to do mTLS with services upstream, $productName$ must also have certificates to authenticate itself with the service.
+
+To do this, we can use the `TLSContext` object to get certificates from a Kubernetes `Secret` and use those to authenticate with the upstream service.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: upstream-context
+spec:
+  hosts: []
+  secret: upstream-certs
+```
+
+We use `hosts: []` for this `TLSContext` since we do not want to use it to terminate TLS connections from the client. We are just using this to load certificates for requests upstream.
+
+
+ The Kubernetes Secret must contain a valid TLS certificate. If the environment variable `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the Secret and completely disable the `Host`; see [**Certificates and Secrets**](../#certificates-and-secrets) in the TLS overview.
+
+
+After loading the certificates, we can tell $productName$ when to use them by setting the `tls` parameter in a `Mapping`:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: upstream-mapping
+spec:
+  hostname: "*"
+  prefix: /upstream/
+  service: upstream-service
+  tls: upstream-context
+```
+
+Now, when $productName$ proxies a request to `upstream-service`, it will provide the certificates in the `upstream-certs` secret for authentication when encrypting traffic.
+
+## Service mesh
+
+As you can imagine, when you have many services in your cluster all authenticating with each other, managing all of those certificates can become a very big challenge.
+
+For this reason, many organizations rely on a service mesh for their service-to-service authentication and encryption.
+
+$productName$ integrates with multiple service meshes and makes it easy to configure mTLS to upstream services for all of them. Click the links below to see how to configure $productName$ to do mTLS with any of these service meshes:
+
+- [Consul Connect](../../../../howtos/consul/)
+
+- [Istio](../../../../howtos/istio/)
diff --git a/docs/emissary/latest/topics/running/tls/origination.md b/docs/emissary/latest/topics/running/tls/origination.md
new file mode 100644
index 000000000..b15dd5f81
--- /dev/null
+++ b/docs/emissary/latest/topics/running/tls/origination.md
@@ -0,0 +1,82 @@
+import Alert from '@material-ui/lab/Alert';
+
+# TLS origination
+
+Sometimes you may want traffic from $productName$ to your services to be encrypted.
+For the cases where terminating TLS at the ingress is not enough, $productName$ can be configured to originate TLS connections to your upstream services.
+
+## Basic configuration
+
+To tell $productName$ to talk to your services over HTTPS, simply set an `https://` prefix on the `service` field in the `Mapping` definition.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: basic-tls
+spec:
+  hostname: "*"
+  prefix: /
+  service: https://example-service
+```
+
+## Advanced configuration using a `TLSContext`
+
+If your upstream services require more than basic HTTPS support (for example, providing a client certificate or setting a minimum TLS version), you must create a `TLSContext` for $productName$ to use when originating TLS. For example:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: tls-context
+spec:
+  secret: self-signed-cert
+  min_tls_version: v1.3
+  sni: some-sni-hostname
+```
+
+
+ The Kubernetes Secret named by `secret` must contain a valid TLS certificate. If the environment variable `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the `TLSContext` and prevent its use; see [**Certificates and Secrets**](../#certificates-and-secrets) in the TLS overview.
+
+
+Configure $productName$ to use this `TLSContext` for connections to upstream services by setting the `tls` attribute of a `Mapping`:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: mapping-with-tls-context
+spec:
+  hostname: "*"
+  prefix: /
+  service: https://example-service
+  tls: tls-context
+```
+
+The `example-service` service must now support TLS v1.3 for $productName$ to connect.
+
+
+ The Kubernetes Secret named by `secret` must contain a valid TLS certificate. If the environment variable `AMBASSADOR_FORCE_SECRET_VALIDATION` is set and the Secret contains an invalid certificate, $productName$ will reject the `TLSContext` and prevent its use; see [**Certificates and Secrets**](../#certificates-and-secrets) in the TLS overview.
+
+
+ A `TLSContext` requires a certificate be provided, even in cases where the upstream service does not require it (for origination) and the `TLSContext` is not being used to terminate TLS. In this case, simply generate and provide a self-signed certificate.
+
diff --git a/docs/emissary/latest/topics/running/tls/sni.md b/docs/emissary/latest/topics/running/tls/sni.md
new file mode 100644
index 000000000..92e4992f5
--- /dev/null
+++ b/docs/emissary/latest/topics/running/tls/sni.md
@@ -0,0 +1,103 @@
+# Server Name Indication (SNI)
+
+$productName$ supports serving multiple `Host`s behind a single IP address, each with their own certificate.
+
+This is as easy to do as creating a `Host` for each domain or subdomain you want $productName$ to serve, getting a certificate for each, and telling $productName$ which `Host` the route should be created for.
+
+The example below configures two `Host`s and assigns routes to them.
+
+## Configuring a `Host`
+
+The `Host` resource lets you separate configuration for each distinct domain and subdomain you plan on serving behind $productName$.
+
+Let's start by creating a simple `Host` and providing our own certificate in the `host-cert` secret.
```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: host.example.com
  tlsSecret:
    name: host-cert
```

Now let's create a second `Host` for a different domain we want to serve behind $productName$. This second `Host` uses $AESproductName$'s automatic TLS to get a certificate from Let's Encrypt.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: foo-host
spec:
  hostname: host.foo.com
  acmeProvider:
    email: julian@example.com
```

We now have two `Host`s with two different certificates.

  A minimum version of TLS 1.1 is required to properly use SNI. When setting up your TLS configuration, be sure you are not using TLS 1.0 or older.

## Configuring routes

Now that we have two domains behind $productName$, we can create routes for either or both of them.

We do this by setting the `hostname` attribute of a `Mapping` to the domain the `Mapping` should be created for.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org:80
  host_rewrite: httpbin.org
  hostname: host.example.com
```

The above creates a `/httpbin/` endpoint for `host.example.com`.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: mockbin
spec:
  prefix: /foo/
  service: foo-service
  hostname: host.foo.com
```

The above creates a `/foo/` endpoint for `host.foo.com`.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: frontend
spec:
  hostname: "*"
  prefix: /bar/
  service: bar-endpoint
```

The above creates a `/bar/` endpoint for all `Host`s.

diff --git a/docs/emissary/latest/topics/using/authservice.md b/docs/emissary/latest/topics/using/authservice.md new file mode 100644 index 000000000..cfe3598b5 --- /dev/null +++ b/docs/emissary/latest/topics/using/authservice.md @@ -0,0 +1,23 @@

# AuthService settings

A `Mapping` can pass the settings below along to an [AuthService](../../running/services/auth-service). This is helpful when these specific configurations should apply only to certain `Mapping`s rather than globally.

## Bypass authentication

An AuthService can be disabled for a specific `Mapping` with the `bypass_auth` attribute. This tells $productName$ to allow all requests for that `Mapping` through without interacting with the external auth service. This can be helpful, for example, for a public API.

```yaml
bypass_auth: true
```

## Context extensions

The `auth_context_extensions` attribute will pass the given values along to the AuthService when authentication happens. The values are arbitrary key/value pairs formatted as strings.

```yaml
auth_context_extensions:
  foo: bar
  baz: zing
```

More information is available in [the Envoy documentation on external authorization](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/ext_authz/v3/ext_authz.proto.html#extensions-filters-http-ext-authz-v3-checksettings).

diff --git a/docs/emissary/latest/topics/using/canary.md b/docs/emissary/latest/topics/using/canary.md new file mode 100644 index 000000000..f99de1a3e --- /dev/null +++ b/docs/emissary/latest/topics/using/canary.md @@ -0,0 +1,41 @@

# Canary releases

Canary releasing is a deployment pattern where a small percentage of traffic is diverted to an early ("canary") release of a particular service.
This technique lets you test a release on a small subset of users, mitigating the impact of any given bug. Canary releasing also allows you to quickly roll back to a known good version in the event of an unexpected error. Detailed monitoring of core service metrics is an essential part of canary releasing, as monitoring enables the rapid detection of problems in the canary release.

## Canary releases in Kubernetes

Kubernetes supports a basic canary release workflow using its core objects. In this workflow, a service owner can create a Kubernetes [service](https://kubernetes.io/docs/concepts/services-networking/service/). This service can then be pointed to multiple [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). Each deployment can be a different version. By specifying the number of `replicas` in a given deployment, you can control how much traffic goes between different versions. For example, you could set `replicas: 3` for `v1`, and `replicas: 1` for `v2`, to ensure that 25% of traffic goes to `v2`. This approach works but is fairly coarse-grained unless you have lots of replicas. Moreover, auto-scaling doesn't work well with this strategy.

## Canary releases in $productName$

$productName$ supports fine-grained canary releases. $productName$ uses a weighted round-robin scheme to route traffic between multiple services. Full metrics are collected for all services, making it easy to compare the relative performance of the canary and production.

### The weight attribute

The `weight` attribute specifies how much traffic for a given resource will be routed using a given mapping. Its value is an integer percentage between 0 and 100. $productName$ will balance weights to make sure that, for every resource, the mappings for that resource will have weights adding to 100%. (In the simplest case, a single mapping is guaranteed to receive 100% of the traffic no matter whether it's assigned a `weight` or not.)

Specifying a weight only makes sense if you have multiple mappings for the same resource, and typically you would _not_ assign a weight to the "default" mapping (the mapping expected to handle most traffic): letting $productName$ assign that mapping all the traffic not otherwise spoken for tends to make life easier when updating weights.

Here's an example, which might appear during a canary deployment:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend2
spec:
  prefix: /backend/
  service: quotev2
  weight: 10
```

In this case, `quote-backend2` will receive 10% of the requests for `/backend/`, and $productName$ will assign the remaining 90% to `quote-backend`.

diff --git a/docs/emissary/latest/topics/using/circuit-breakers.md b/docs/emissary/latest/topics/using/circuit-breakers.md new file mode 100644 index 000000000..97b40f570 --- /dev/null +++ b/docs/emissary/latest/topics/using/circuit-breakers.md @@ -0,0 +1,116 @@

# Circuit breakers

Circuit breakers are a powerful technique to improve resilience. By preventing additional connections or requests to an overloaded service, circuit breakers limit its ["blast radius"](https://www.ibm.com/garage/method/practices/manage/practice_limited_blast_radius/).
By design, $productName$ circuit breakers are distributed, i.e., different $productName$ instances do not coordinate circuit breaker information.

## Circuit breaker configuration

A default circuit breaking configuration can be set for all $productName$ resources in the [`ambassador Module`](../../running/ambassador), or set to a different value on a per-resource basis for [`Mappings`](../mappings), [`TCPMappings`](../tcpmappings/), and [`AuthServices`](../../running/services/auth-service/).

The `circuit_breakers` attribute configures circuit breaking. The following fields are supported:

```yaml
circuit_breakers:
- priority:
  max_connections:
  max_pending_requests:
  max_requests:
  max_retries:
```

|Key|Default value|Description|
|---|---|---|
|`priority`|`default`|Specifies the priority to which the circuit breaker settings apply; it can be set to either `default` or `high`.|
|`max_connections`|`1024`|Specifies the maximum number of connections that $productName$ will make to the services. In practice, this is more applicable to HTTP/1.1 than HTTP/2.|
|`max_pending_requests`|`1024`|Specifies the maximum number of requests that will be queued while waiting for a connection. In practice, this is more applicable to HTTP/1.1 than HTTP/2.|
|`max_requests`|`1024`|Specifies the maximum number of parallel outstanding requests to an upstream service. In practice, this is more applicable to HTTP/2 than HTTP/1.1.|
|`max_retries`|`3`|Specifies the maximum number of parallel retries allowed to an upstream service.|

The `max_*` fields can be reduced to shrink the "blast radius," or increased to enable $productName$ to handle a larger number of concurrent requests.

## Examples

Circuit breakers defined on a single `Mapping`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
  circuit_breakers:
  - max_connections: 2048
    max_pending_requests: 2048
```

Circuit breakers defined on a single `AuthService`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: dancing-walrus
spec:
  auth_service: http://dancing-walrus:8500
  proto: grpc
  circuit_breakers:
  - max_requests: 4096
```

A global circuit breaker:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    circuit_breakers:
    - max_connections: 2048
      max_pending_requests: 2048
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
```

## Circuit breakers and automatic retries

Circuit breakers are best used in conjunction with [automatic retries](../retries). Here are some examples:

* You've configured automatic retries for failed requests to a service. Your service is under heavy load, and starting to time out on servicing requests. In this case, automatic retries can exacerbate your problem, increasing the total request volume by 2x or more. By aggressively circuit breaking, you can mitigate failure in this scenario.
* To circuit break when services are slow, you can combine circuit breakers with retries. Reduce the timeout for retries, and then set a circuit breaker that detects many retries. In this setup, if your service doesn't respond quickly, a flood of retries will occur, which can then trip the circuit breaker; a sketch of such a configuration follows below.
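As a rough sketch of that combination, here is one way a `Mapping` might pair a short timeout and limited retries with a circuit breaker (the service name and all numbers are illustrative assumptions, not recommendations):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend-guarded
spec:
  prefix: /backend/
  service: quote               # assumed example service
  timeout_ms: 3000             # keep the overall request timeout short
  retry_policy:
    retry_on: "5xx"            # retry failed requests...
    num_retries: 2             # ...but only a couple of times
  circuit_breakers:
  - max_pending_requests: 256  # bound the queue of waiting requests
    max_retries: 3             # trip when too many retries are in flight at once
```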
Note that setting circuit breaker thresholds requires careful monitoring and experimentation. We recommend you start with conservative values for circuit breakers and adjust them over time.

## More about circuit breakers

Responses from a broken circuit contain the `x-envoy-overloaded` header.

The following are the default values for circuit breaking if nothing is specified:

```yaml
circuit_breakers:
- priority: default
  max_connections: 1024
  max_pending_requests: 1024
  max_requests: 1024
  max_retries: 3
```

Circuit breaker metrics are exposed in StatsD. For more information about the specific statistics, see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking.html).

diff --git a/docs/emissary/latest/topics/using/cors.md b/docs/emissary/latest/topics/using/cors.md new file mode 100644 index 000000000..315e46942 --- /dev/null +++ b/docs/emissary/latest/topics/using/cors.md @@ -0,0 +1,155 @@

# Cross-Origin Resource Sharing (CORS)

Cross-Origin Resource Sharing lets users request resources (e.g., images, fonts, videos) from domains outside the original domain.

CORS configuration can be set for all $productName$ mappings in the [`ambassador Module`](../../running/ambassador), or set per [`Mapping`](../mappings).

When the CORS attribute is set at either the `Mapping` or `Module` level, $productName$ will intercept the pre-flight `OPTIONS` request and respond with the appropriate CORS headers. This means you will not need to implement any logic in your upstreams to handle these CORS `OPTIONS` requests.

The flow of the request will look similar to the following:

```
Client                    $productName$         Upstream
  |      OPTIONS            |                      |
  | —————————————————>      |                      |
  |      CORS_RESP          |                      |
  | <—————————————————      |                      |
  |      GET /foo/          |                      |
  | —————————————————>      | ————————————>        |
  |                         |        RESP          |
  | <—————————————————————————————————             |
```

## The `cors` attribute

The `cors` attribute enables the CORS filter. The following settings are supported:

- `origins`: Specifies a list of allowed domains for the `Access-Control-Allow-Origin` header. To allow all origins, use the wildcard `"*"` value. Format can be either of:
  - comma-separated list, e.g.
    ```yaml
    origins: http://foo.example,http://bar.example
    ```
  - YAML array, e.g.
    ```yaml
    origins:
    - http://foo.example
    - http://bar.example
    ```

- `methods`: if present, specifies a list of allowed methods for the `Access-Control-Allow-Methods` header. Format can be either of:
  - comma-separated list, e.g.
    ```yaml
    methods: POST, GET, OPTIONS
    ```
  - YAML array, e.g.
    ```yaml
    methods:
    - GET
    - POST
    - OPTIONS
    ```

- `headers`: if present, specifies a list of allowed headers for the `Access-Control-Allow-Headers` header. Format can be either of:
  - comma-separated list, e.g.
    ```yaml
    headers: Content-Type
    ```
  - YAML array, e.g.
    ```yaml
    headers:
    - Content-Type
    ```

- `credentials`: if present with a true value (boolean), will send a `true` value for the `Access-Control-Allow-Credentials` header.

- `exposed_headers`: if present, specifies a list of allowed headers for the `Access-Control-Expose-Headers` header. Format can be either of:
  - comma-separated list, e.g.
    ```yaml
    exposed_headers: X-Custom-Header
    ```
  - YAML array, e.g.
    ```yaml
    exposed_headers:
    - X-Custom-Header
    ```

- `max_age`: if present, indicates how long the results of the preflight request can be cached, in seconds. This value must be a string.
+ +## Example + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: cors +spec: + prefix: /cors/ + service: cors-example + cors: + origins: http://foo.example,http://bar.example + methods: POST, GET, OPTIONS + headers: Content-Type + credentials: true + exposed_headers: X-Custom-Header + max_age: "86400" +``` + +## AuthService and Cross-Origin Resource Sharing + +When you use external authorization, each incoming request is authenticated before routing to its destination, including pre-flight `OPTIONS` requests. + +By default, many [`AuthService`](../../running/services/auth-service) implementations will deny these requests. If this is the case, you will need to add some logic to your `AuthService` to accept all CORS headers. + +For example, a possible configuration for Spring Boot 2.0.1: + +```java +@EnableWebSecurity +class SecurityConfig extends WebSecurityConfigurerAdapter { + + public void configure(final HttpSecurity http) throws Exception { + http + .cors().configurationSource(new PermissiveCorsConfigurationSource()).and() + .csrf().disable() + .authorizeRequests() + .antMatchers("**").permitAll(); + } + + private static class PermissiveCorsConfigurationSource implements CorsConfigurationSource { + /** + * Return a {@link CorsConfiguration} based on the incoming request. + * + * @param request + * @return the associated {@link CorsConfiguration}, or {@code null} if none + */ + @Override + public CorsConfiguration getCorsConfiguration(final HttpServletRequest request) { + final CorsConfiguration configuration = new CorsConfiguration(); + configuration.setAllowCredentials(true); + configuration.setAllowedHeaders(Collections.singletonList("*")); + configuration.setAllowedMethods(Collections.singletonList("*")); + configuration.setAllowedOrigins(Collections.singletonList("*")); + return configuration; + } + } +} +``` + +This is okay since CORS is being handled by $productName$ after authentication. + +The flow of this request will look similar to the following: + +``` +Client $productName$ Auth Upstream + | OPTIONS | | | + | —————————————————> | ————————————> | | + | | CORS_ACCEPT_* | | + | CORS_RESP |<——————————————| | + | <——————————————————| | | + | GET /foo/ | | | + | —————————————————> | ————————————> | | + | | AUTH_RESP | | + | | <———————————— | | + | | AUTH_ALLOW | | + | | ————————————————————————————> | + | | | RESP | + | <————————————————————————————————————————————————— | + ``` diff --git a/docs/emissary/latest/topics/using/defaults.md b/docs/emissary/latest/topics/using/defaults.md new file mode 100644 index 000000000..d08a84d81 --- /dev/null +++ b/docs/emissary/latest/topics/using/defaults.md @@ -0,0 +1,32 @@ +# Using `ambassador` `Module` defaults + +## The defaults element + +If present, the `ambassador Module` can define a set of defaults that will automatically be applied to certain resources: + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: Module +metadata: + name: ambassador +spec: + config: + defaults: + class1: # This is a class. Different resource types will look in different classes. + key1: value1 # Within a class, you can set different keys. + key2: value2 + ... + class2: + ... + top_key1: value1 # If a key isn't found in a resource's class, the system will look in the + top_key2: value2 # toplevel defaults dictionary for it. +``` + +### Mapping + +Currently, only the `Mapping` resource uses the `defaults` mechanism. 
`Mapping` looks first for defaultable resources in the `httpmapping` class, including:

- [`add_request_headers`](../../using/headers/add-request-headers)
- [`add_response_headers`](../../using/headers/add-response-headers)
- [`remove_request_headers`](../../using/headers/remove-request-headers)
- [`remove_response_headers`](../../using/headers/remove-response-headers)

diff --git a/docs/emissary/latest/topics/using/gateway-api.md b/docs/emissary/latest/topics/using/gateway-api.md new file mode 100644 index 000000000..5e92cd0dd --- /dev/null +++ b/docs/emissary/latest/topics/using/gateway-api.md @@ -0,0 +1,19 @@

# Gateway API

## Using the Gateway API

$productName$ now supports a limited subset of the new `v1alpha1` [Gateway API](https://gateway-api.sigs.k8s.io/). Note that the Gateway API is not supported when `AMBASSADOR_LEGACY_MODE` is set.

Support is currently limited to the following fields; however, this will expand in future releases:

 - `Gateway.spec.listeners.port`
 - `HTTPRoute.spec.rules.matches.path.type` (`Exact`, `Prefix`, and `RegularExpression`)
 - `HTTPRoute.spec.rules.matches.path.value`
 - `HTTPRoute.spec.rules.matches.header.type` (`Exact` and `RegularExpression`)
 - `HTTPRoute.spec.rules.matches.header.values`
 - `HTTPRoute.spec.rules.forwardTo.serviceName`
 - `HTTPRoute.spec.rules.forwardTo.port`
 - `HTTPRoute.spec.rules.forwardTo.weight`

Please see the [specification](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/) for more details.

diff --git a/docs/emissary/latest/topics/using/headers/add-request-headers.md b/docs/emissary/latest/topics/using/headers/add-request-headers.md new file mode 100644 index 000000000..c6ad4956d --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/add-request-headers.md @@ -0,0 +1,77 @@

# Add request headers

$productName$ can add a dictionary of HTTP headers to each request that is passed to a service.

## The `add_request_headers` attribute

The `add_request_headers` attribute is a dictionary of `header`: `value` pairs. The `value` can be a `string`, `bool` or `object`. When it is an `object`, the object should have a `value` property, which is the actual header value, and the remaining attributes are additional Envoy properties.

Envoy dynamic values `%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%` and `%PROTOCOL%` are supported, in addition to static values.

`add_request_headers` can be set either in a `Mapping` or using [`ambassador Module defaults`](../../defaults).

### Mapping example

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  add_request_headers:
    x-test-proto: "%PROTOCOL%"
    x-test-ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
    x-test-static: This is a test header
    x-test-static-2:
      value: This is a test header # same as the "x-test-static" header above
    x-test-object:
      value: This is the value
      append: False # True by default
  service: quote
```

will add the protocol, client IP, and a static header to `/backend/`.
### Defaults example

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    defaults:
      httpmapping:
        add_request_headers:
          x-test-proto: "%PROTOCOL%"
          x-test-ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
          x-test-static: This is a test header
          x-test-static-2:
            value: This is a test header # same as the "x-test-static" header above
          x-test-object:
            value: This is the value
            append: False # True by default
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend1
spec:
  hostname: "*"
  prefix: /backend1/
  service: quote
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend2
spec:
  hostname: "*"
  prefix: /backend2/
  service: quote
```

This example will add the same headers for both mappings.

diff --git a/docs/emissary/latest/topics/using/headers/add-response-headers.md b/docs/emissary/latest/topics/using/headers/add-response-headers.md new file mode 100644 index 000000000..236ace610 --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/add-response-headers.md @@ -0,0 +1,73 @@

# Add response headers

$productName$ can add a dictionary of HTTP headers to each response that is returned to the client.

## The `add_response_headers` attribute

The `add_response_headers` attribute is a dictionary of `header`: `value` pairs. The `value` can be a `string`, `bool` or `object`. When it is an `object`, the object should have a `value` property, which is the actual header value, and the remaining attributes are additional Envoy properties.

Envoy dynamic values `%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%` and `%PROTOCOL%` are supported, in addition to static values.

`add_response_headers` can be set either in a `Mapping` or using [`ambassador Module defaults`](../../defaults).

### Mapping example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  add_response_headers:
    x-test-proto: "%PROTOCOL%"
    x-test-ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
    x-test-static: This is a test header
    x-test-object:
      append: False
      value: this is from object header config
  service: quote
```

will add the protocol, client IP, and a static header to the response returned to the client.

### Defaults example

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    defaults:
      httpmapping:
        add_response_headers:
          x-test-proto: "%PROTOCOL%"
          x-test-ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
          x-test-static: This is a test header
          x-test-object:
            append: False
            value: this is from object header config
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend1
spec:
  hostname: "*"
  prefix: /backend1/
  service: quote
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend2
spec:
  hostname: "*"
  prefix: /backend2/
  service: quote
```

This example will add the same headers for both mappings.
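To make the `append` behavior used above concrete, here is a minimal sketch (assuming the same `quote` service, with hypothetical header names) of using the object form to either append to or replace a header the upstream may already have set on the response:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend-append
spec:
  hostname: "*"
  prefix: /backend/
  add_response_headers:
    x-appended:
      value: extra-value
      append: True    # the default: add this value alongside any existing one
    x-replaced:
      value: only-value
      append: False   # replace any value the upstream set for this header
  service: quote
```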
diff --git a/docs/emissary/latest/topics/using/headers/headers.md b/docs/emissary/latest/topics/using/headers/headers.md new file mode 100644 index 000000000..126653b0d --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/headers.md @@ -0,0 +1,76 @@ +import Alert from '@material-ui/lab/Alert'; + +# Header-based routing + +$productName$ can route to target services based on HTTP headers with the `headers` and `regex_headers` specifications. Multiple mappings with different annotations can be applied to construct more complex routing rules. + +## The `headers` annotation + +The `headers` attribute is a dictionary of `header`: `value` pairs. $productName$ will only allow requests that match the specified `header`: `value` pairs to reach the target service. + +### Example + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-backend +spec: + prefix: /backend/ + service: quote + headers: + x-quote-mode: backend + x-random-header: datawire +``` + +will allow requests to /backend/ to succeed only if the x-quote-mode header has the value backend and the x-random-header has the value `datawire`. + + + 1.x versions of the Ambassador Edge Stack could test for the existence of a header using x-sample-header:true. Since 2.0, the same functionality is achieved by using regex_headers. + + +## Regex headers + +You can also set the `value` of a regex header to `".*"` to test for the existence of a header. + +### Conditional example + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-mode +spec: + prefix: /backend/ + service: quote-mode + regex_headers: + x-quote-mode: ".*" + +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-regular +spec: + prefix: /backend/ + service: quote-regular +``` + +will send requests that contain the x-quote-mode header to the quote-mode target, while routing all other requests to the quote-regular target. + +The following mapping will route mobile requests from Android and iPhones to a mobile service: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-backend +spec: + regex_headers: + user-agent: ".*(iPhone|(a|A)ndroid).*" + prefix: /backend/ + service: quote +``` diff --git a/docs/emissary/latest/topics/using/headers/host.md b/docs/emissary/latest/topics/using/headers/host.md new file mode 100644 index 000000000..5a8dd02c1 --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/host.md @@ -0,0 +1,76 @@ +# Host headers + +$productName$ supports several different methods for managing the HTTP `Host` header. + +## Using `host` and `host_regex` + +A mapping that specifies the `host` attribute will take effect _only_ if the HTTP `Host` header matches the value in the `host` attribute. If `host_regex` is `true`, the `host` value is taken to be a regular expression. Otherwise, an exact string match is required. + +You may have multiple mappings listing the same resource but different `host` attributes to effect `Host`-based routing. 
An example:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote1
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend-2
spec:
  prefix: /backend/
  host: quote.datawire.io
  service: quote2
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend-3
spec:
  prefix: /backend/
  host: "^quote[2-9]\\.datawire\\.io$"
  host_regex: true
  service: quote3
```

will map requests for `/backend/` to

- the `quote2` service if the `Host` header is `quote.datawire.io`;
- the `quote3` service if the `Host` header matches `^quote[2-9]\.datawire\.io$`; and to
- the `quote1` service otherwise.

Note that enclosing regular expressions in quotes can be important to prevent backslashes from being doubled.

## Using `host_rewrite`

By default, the `Host` header is not altered when talking to the service -- whatever `Host` header the client gave to $productName$ will be presented to the service. For many microservices, this will be fine, but if you use $productName$ to route to services that use the `Host` header for routing, it's likely to fail (legacy monoliths are particularly susceptible to this, as are external services). You can use `host_rewrite` to force the `Host` header to whatever value such target services need.

An example: the default $productName$ configuration includes the following mapping for `httpbin.org`:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org:80
  host_rewrite: httpbin.org
```

As it happens, `httpbin.org` is virtually hosted, and it simply _will not_ function without a `Host` header of `httpbin.org`, which means that the `host_rewrite` attribute is necessary here.

## `host` and `method`

Internally:

- the `host` attribute becomes a `header` match on the `:authority` header; and
- the `method` attribute becomes a `header` match on the `:method` header.

You will see these headers in the diagnostic service if you use the `method` or `host` attributes.

diff --git a/docs/emissary/latest/topics/using/headers/remove-request-headers.md b/docs/emissary/latest/topics/using/headers/remove-request-headers.md new file mode 100644 index 000000000..626037562 --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/remove-request-headers.md @@ -0,0 +1,57 @@

# Remove request headers

$productName$ can remove a list of HTTP headers from a request before it is sent to the upstream.

## The `remove_request_headers` attribute

The `remove_request_headers` attribute takes a list of header keys to match.

`remove_request_headers` can be set either in a `Mapping` or using [`ambassador Module defaults`](../../defaults).

## Mapping example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  remove_request_headers:
  - authorization
  service: quote
```

will drop the header with key `authorization`.
## Defaults example

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    defaults:
      httpmapping:
        remove_request_headers:
        - authorization
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend1
spec:
  prefix: /backend1/
  service: quote
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend2
spec:
  prefix: /backend2/
  service: quote
```

This is the same as the mapping example, but the headers will be removed for both mappings.

diff --git a/docs/emissary/latest/topics/using/headers/remove-response-headers.md b/docs/emissary/latest/topics/using/headers/remove-response-headers.md new file mode 100644 index 000000000..16b18569f --- /dev/null +++ b/docs/emissary/latest/topics/using/headers/remove-response-headers.md @@ -0,0 +1,57 @@

# Remove response headers

$productName$ can remove a list of HTTP headers from the response before it is sent to the client (e.g., the default `x-envoy-upstream-service-time`).

## The `remove_response_headers` attribute

The `remove_response_headers` attribute takes a list of header keys to match.

`remove_response_headers` can be set either in a `Mapping` or using [`ambassador Module defaults`](../../defaults).

## Mapping example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  remove_response_headers:
  - x-envoy-upstream-service-time
  service: quote
```

will drop the header with key `x-envoy-upstream-service-time`.

## Defaults example

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    defaults:
      httpmapping:
        remove_response_headers:
        - x-envoy-upstream-service-time
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend1
spec:
  prefix: /backend1/
  service: quote
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend2
spec:
  prefix: /backend2/
  service: quote
```

This is the same as the mapping example, but the headers will be removed for both mappings.

diff --git a/docs/emissary/latest/topics/using/index.md b/docs/emissary/latest/topics/using/index.md new file mode 100644 index 000000000..d4f09a833 --- /dev/null +++ b/docs/emissary/latest/topics/using/index.md @@ -0,0 +1,32 @@

# Using $productName$

Application development teams use $productName$ to manage edge policies associated with a specific service. This section of the documentation covers core $productName$ elements that are typically used by the application development team.

* [Introduction to Mappings](intro-mappings) The `Mapping` resource is the core resource used by every application development team.
+* Mapping Configuration: + * [Automatic Retries](retries) + * [Canary Releases](canary) + * [Circuit Breakers](circuit-breakers) + * [Cross Origin Resource Sharing](cors) + * HTTP Headers + * [Header-based Routing](headers/headers) + * [Host Header](headers/host) + * [Adding Request Headers](headers/add-request-headers) + * [Adding Response Headers](headers/add-response-headers) + * [Removing Request Headers](headers/remove-request-headers) + * [Remove Response Headers](headers/remove-response-headers) + * [Query Parameter Based Routing](query-parameters) + * [Keepalive](keepalive) + * Protocols + * [TCP](tcpmappings) + * gRPC, HTTP/1.0, gRPC-Web, WebSockets + * [RegEx-based Routing](prefix-regex) + * [Redirects](redirects) + * [Rewrites](rewrites) + * [Timeouts](timeouts) + * [Traffic Shadowing](shadowing) +* [Advanced Mapping Configuration](mappings) +* Rate Limiting + * [Introduction to Rate Limits](rate-limits/) +* [Developer Portal](../../tutorials/dev-portal-tutorial) +* [Gateway API](gateway-api) diff --git a/docs/emissary/latest/topics/using/intro-mappings.md b/docs/emissary/latest/topics/using/intro-mappings.md new file mode 100644 index 000000000..516560524 --- /dev/null +++ b/docs/emissary/latest/topics/using/intro-mappings.md @@ -0,0 +1,148 @@ +import Alert from '@material-ui/lab/Alert'; + +# Introduction to the Mapping resource + +$productName$ is designed around a [declarative, self-service management model](../../concepts/gitops-continuous-delivery). The core resource used to support application development teams who need to manage the edge with $productName$ is the `Mapping` resource. + + + Remember that Listener and Host resources are +  required for a functioning $productName$ installation that can route traffic!
+ Learn more about Listener.
+ Learn more about Host. +
## Quick example

At its core, a `Mapping` resource maps a `resource` to a `service`:

| Required attribute | Description |
| :------------------------ | :------------------------ |
| `name` | is a string identifying the `Mapping` (e.g. in diagnostics) |
| [`prefix`](#resources) | is the URL prefix identifying your [resource](#resources) |
| [`service`](#services) | is the name of the [service](#services) handling the resource; must include the namespace (e.g. `myservice.othernamespace`) if the service is in a different namespace than $productName$ |

These resources are defined as Kubernetes Custom Resource Definitions. Here's a simple example that maps all requests to `/httpbin/` to the `httpbin.org` web service:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: httpbin-mapping
spec:
  prefix: /httpbin/
  service: http://httpbin.org
```

## Applying a Mapping resource

A `Mapping` resource can be managed using the same workflow as any other Kubernetes resource (e.g., `service`, `deployment`). For example, if the above `Mapping` is saved into a file called `httpbin-mapping.yaml`, the following command will apply the configuration directly to $productName$:

```
kubectl apply -f httpbin-mapping.yaml
```

For production use, the general recommended best practice is to store the file in a version control system and apply the changes with a continuous deployment pipeline. For more detail, see [the Ambassador Operating Model](../../concepts/gitops-continuous-delivery).

## Extending Mappings

`Mapping` resources support a rich set of annotations to customize the specific routing behavior. Here's an example service for implementing the CQRS pattern (using HTTP):

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: cqrs-get
spec:
  prefix: /cqrs/
  method: GET
  service: getcqrs
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: cqrs-put
spec:
  prefix: /cqrs/
  method: PUT
  service: putcqrs
```

More detail on each of the available annotations is discussed in subsequent sections.

## Resources

To $productName$, a `resource` is a group of one or more URLs that all share a common prefix in the URL path. For example:

```
https://ambassador.example.com/resource1/foo
https://ambassador.example.com/resource1/bar
https://ambassador.example.com/resource1/baz/zing
https://ambassador.example.com/resource1/baz/zung
```

all share the `/resource1/` path prefix, so they can be considered a single resource. On the other hand:

```
https://ambassador.example.com/resource1/foo
https://ambassador.example.com/resource2/bar
https://ambassador.example.com/resource3/baz/zing
https://ambassador.example.com/resource4/baz/zung
```

share only the prefix `/` -- you _could_ tell $productName$ to treat them as a single resource, but it's probably not terribly useful.

Note that the length of the prefix doesn't matter: if you want to use prefixes like `/v1/this/is/my/very/long/resource/name/`, go right ahead, $productName$ can handle it.

Also note that $productName$ does not actually require the prefix to start and end with `/` -- however, in practice, it's a good idea.
Specifying a prefix of

```
/man
```

would match all of the following:

```
https://ambassador.example.com/man/foo
https://ambassador.example.com/mankind
https://ambassador.example.com/man-it-is/really-hot-today
https://ambassador.example.com/manohmanohman
```

which is probably not what was intended.

## Services

$productName$ routes traffic to a `service`. A `service` is defined as:

```
[scheme://]service[.namespace][:port]
```

Everything except the `service` name itself is optional.

- `scheme` can be either `http` or `https`; if not present, the default is `http`.
- `service` is the name of a service (typically the service name in Kubernetes or Consul); it is not allowed to contain the `.` character.
- `namespace` is the namespace in which the service is running. Starting with $productName$ 1.0.0, if not supplied, it defaults to the namespace in which the Mapping resource is defined. The default behavior can be configured using the [`ambassador` Module](../../running/ambassador). When using a Consul resolver, `namespace` is not allowed.
- `port` is the port to which a request should be sent. If not specified, it defaults to `80` when the scheme is `http` or `443` when the scheme is `https`. Note that the [resolver](../../running/resolvers) may return a port, in which case the `port` setting is ignored.

Note that while using `service.namespace.svc.cluster.local` may work for Kubernetes resolvers, the preferred syntax is `service.namespace`.
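As a quick sketch putting the optional parts together (the names here are illustrative assumptions), the `Mapping` below routes over HTTPS to port 8443 of a service named `audit-service` in the `monitoring` namespace:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: audit-mapping
spec:
  prefix: /audit/
  # scheme, service name, namespace, and port all spelled out:
  service: https://audit-service.monitoring:8443
```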
## Best practices for configuration

$productName$'s configuration is assembled from multiple YAML blocks which are managed by independent application teams. This implies:

- $productName$'s configuration should be under version control.

  While you can always read back $productName$'s configuration from Kubernetes or its diagnostic service, $productName$ will not do versioning for you.

- Be aware that $productName$ tries not to start with a broken configuration, but it's not perfect.

  Gross errors will result in $productName$ refusing to start, in which case `kubectl logs` will be helpful. However, it's always possible to e.g. map a resource to the wrong service, or use the wrong `rewrite` rules. $productName$ can't detect that on its own, although its diagnostic pages can help you figure it out.

- Be careful of mapping collisions.

  If two different developers try to map `/user/` to something, this can lead to unexpected behavior. $productName$'s canary-deployment logic means that it's more likely that traffic will be split between them than that it will throw an error -- again, the diagnostic service can help you here.

**Note:** Unless specified, mapping attributes cannot be applied to any other resource type.

diff --git a/docs/emissary/latest/topics/using/keepalive.md b/docs/emissary/latest/topics/using/keepalive.md new file mode 100644 index 000000000..d75e96baa --- /dev/null +++ b/docs/emissary/latest/topics/using/keepalive.md @@ -0,0 +1,70 @@

# Keepalive

The `keepalive` option indicates whether `SO_KEEPALIVE` should be enabled on the socket.

## Keepalive configuration

Keepalive configuration can be set for all $productName$ mappings in the [`ambassador Module`](../../running/ambassador) or set per [`Mapping`](../mappings).

The `keepalive` attribute configures keepalive. The following fields are supported:

```yaml
keepalive:
  idle_time:
  interval:
  probes:
```

### `idle_time`

(Default: `7200`) The number of seconds a connection needs to be idle before keep-alive probes start being sent.

### `interval`

(Default: `75`) The number of seconds between keep-alive probes.

### `probes`

(Default: `9`) The maximum number of keepalive probes to send without response before deciding the connection is dead.

## Examples

Keepalive probes defined on a single mapping:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
  keepalive:
    idle_time: 100
    interval: 10
    probes: 9
```

A global keepalive configuration:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    keepalive:
      idle_time: 100
      interval: 10
      probes: 9
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
```

diff --git a/docs/emissary/latest/topics/using/mappings.md b/docs/emissary/latest/topics/using/mappings.md new file mode 100644 index 000000000..f930fc62a --- /dev/null +++ b/docs/emissary/latest/topics/using/mappings.md @@ -0,0 +1,189 @@

# Advanced Mapping configuration

$productName$ is designed so that the author of a given Kubernetes service can easily and flexibly configure how traffic gets routed to the service. The core abstraction used to support service authors is a mapping, which maps a target backend service to a given host or prefix. For Layer 7 protocols such as HTTP, gRPC, or WebSockets, the `Mapping` resource is used. For TCP, the `TCPMapping` resource is used.

$productName$ _must_ have one or more mappings defined to provide access to any services at all. The name of each mapping must be unique.

## System-wide defaults for Mappings

Certain aspects of mappings can be set system-wide using the `defaults` element of the `ambassador Module`: see [using defaults](../../using/defaults) for more information. The `Mapping` element will look first in the `httpmapping` default class.

## Mapping evaluation order

$productName$ sorts mappings such that those that are more highly constrained are evaluated before those that are less highly constrained. The prefix length, the request method, constraint headers, and query parameters are all taken into account.

If absolutely necessary, you can manually set a `precedence` on the mapping (see below). In general, you should not need to use this feature unless you're using the `regex_headers` or `host_regex` matching features. If there's any question about how $productName$ is ordering rules, the diagnostic service is a good first place to look: the order in which mappings appear in the diagnostic service is the order in which they are evaluated.

## Optional fallback Mapping

$productName$ will respond with a `404 Not Found` to any request for which no mapping exists. If desired, you can define a fallback "catch-all" mapping so all unmatched requests will be sent to an upstream service.
+ +For example, defining a mapping with only a `/` prefix will catch all requests previously unhandled and forward them to an external service: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: catch-all +spec: + prefix: / + service: https://www.getambassador.io +``` + +### Using `precedence` + +$productName$ sorts mappings such that those that are more highly constrained are evaluated before those less highly constrained. The prefix length, the request method, and the constraint headers are all taken into account. These mechanisms, however, may not be sufficient to guarantee the correct ordering when regular expressions or highly complex constraints are in play. + +For those situations, a `Mapping` can explicitly specify the `precedence`. A `Mapping` with no `precedence` is assumed to have a `precedence` of 0; the higher the `precedence` value, the earlier the `Mapping` is attempted. + +If multiple `Mapping`s have the same `precedence`, $productName$'s normal sorting determines the ordering within the `precedence`; however, there is no way that $productName$ can ever sort a `Mapping` with a lower `precedence` ahead of one at a higher `precedence`. + +### Using `tls` + +To originate TLS, use a `service` with an `https://` prefix. By itself, this will cause $productName$ to originate TLS without presenting a client certificate to the upstream service: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: mapping-no-cert +spec: + prefix: /prefix/ + service: https://upstream/ +``` + +If you do need to supply a client certificate, you will also need to set `tls` to the name of a defined TLS context. The client certificate given in that context will be presented to the upstream service. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: mapping-with-cert +spec: + prefix: /prefix/ + service: https://upstream/ + tls: upstream-cert-context +``` + +(If `tls` is present, $productName$ will originate TLS even if the `service` does not have an `https://` prefix.) + +### Using `cluster_tag` + +If the `cluster_tag` attribute is present, its value will be prepended to cluster names generated from +the `Mapping`. This provides a simple mechanism for customizing the `cluster` name when working with metrics. + +## Using `dns_type` + +If the `dns_type` attribute is present, its value will determine how the DNS is used when locating the upstream service. Valid values are: + +- `strict_dns` (the default): The DNS result is assumed to define the exact membership of the cluster. For example, if DNS returns three IP addresses, the cluster is assumed to have three distinct upstream hosts. If a successful DNS query returns no hosts, the cluster is assumed to be empty. `strict_dns` makes sense for situations like a Kubernetes service, where DNS resolution is fast and returns a relatively small number of IP addresses. + +- `logical_dns`: Instead of assuming that the DNS result defines the full membership of the cluster, every new connection simply uses the first IP address returned by DNS. `logical_dns` makes sense for a service with a large number of IP addresses using round-robin DNS for upstream load balancing: typically each DNS query returns a different first result, and it is not necessarily possible to know the full membership of the cluster. With `logical_dns`, no attempt is made to garbage-collect connections: they will stay open until the upstream closes them. 
If `dns_type` is not given, `strict_dns` is the default, as this is the most conservative choice. When interacting with large web services with many IP addresses, switching to `logical_dns` may be a better choice. For more on the different types of DNS, see the [`strict_dns` Envoy documentation] or the [`logical_dns` Envoy documentation].

[`strict_dns` Envoy documentation]: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#strict-dns
[`logical_dns` Envoy documentation]: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#logical-dns

## Namespaces and Mappings

If `AMBASSADOR_NAMESPACE` is correctly set, $productName$ can map to services in other namespaces by taking advantage of Kubernetes DNS:

- `service: servicename` will route to a service in the same namespace as $productName$, and
- `service: servicename.namespace` will route to a service in a different namespace.

### Linkerd interoperability (`add_linkerd_headers`)

When using Linkerd, requests going to an upstream service need to include the `l5d-dst-override` header to ensure that Linkerd will route them correctly. Setting `add_linkerd_headers` does this automatically, based on the `service` attribute in the `Mapping`.

If `add_linkerd_headers` is not specified for a given `Mapping`, the default is taken from the `ambassador` [Module](../../running/ambassador). The overall default is `false`: you must explicitly enable `add_linkerd_headers` for $productName$ to add the header for you (although you can always add it yourself with `add_request_headers`, of course).

### "Upgrading" to non-HTTP protocols (`allow_upgrade`)

HTTP has [a mechanism][upgrade-mechanism] where the client can say `Connection: upgrade` / `Upgrade: PROTOCOL` to switch ("upgrade") from the current HTTP version to a different one, or even to a different protocol entirely. $productName$ itself understands and can handle the different HTTP versions, but for other protocols you need to tell $productName$ to get out of the way, and let the client speak that protocol directly with your upstream service. You can do this by setting the `allow_upgrade` field to a case-insensitive list of protocol names $productName$ will allow switching to from HTTP. After the upgrade, $productName$ does not interpret the traffic, and behaves similarly to how it does for `TCPMapping`s.

[upgrade-mechanism]: https://tools.ietf.org/html/rfc7230#section-6.7

This "upgrade" mechanism is a useful way of adding HTTP-based authentication and access control to another protocol that might not support authentication; for this reason the designers of the WebSocket protocol made this "upgrade" mechanism the *only* way of initiating a WebSocket connection.
In a `Mapping` for an upstream service that supports WebSockets, you would write

```yaml
allow_upgrade:
- websocket
```

The Kubernetes apiserver itself uses this "upgrade" mechanism to perform HTTP authentication before switching to SPDY for the endpoints used by `kubectl exec`; if you wanted to use $productName$ to expose the Kubernetes apiserver such that `kubectl exec` functions, you would write

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: apiserver
spec:
  hostname: "*"
  service: https://kubernetes.default
  prefix: /
  allow_upgrade:
  - spdy/3.1
```

There is a deprecated setting `use_websocket`; setting `use_websocket: true` is equivalent to setting `allow_upgrade: ["websocket"]`.

## DNS configuration for Mappings

`respect_dns_ttl` can be set to `true` to force the DNS refresh rate for this `Mapping` to be set to the record's TTL obtained from DNS resolution.
- Allowed values: `true` or `false`
- Default: `false`

`dns_type` can be used to configure the service discovery type, choosing between Strict DNS and Logical DNS.
- Allowed values:
  - `strict_dns`: Ambassador will continuously and asynchronously resolve the specified DNS targets.
  - `logical_dns`: Similar to `strict_dns`, but only uses the first IP address returned when a new connection needs to be initiated, and connections are never drained. This is better suited to large-scale web services that must be accessed via DNS.
- Default: `strict_dns`

For more information on the differences between DNS types, see [the Envoy documentation for service discovery](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery.html).

> **Note:** When the [endpoint resolver](../../running/resolvers/#the-kubernetes-endpoint-resolver) is used in a `Mapping`, `dns_type` will be ignored in favor of the endpoint resolver's service discovery.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: dnsoverwrite
spec:
  service: quote
  prefix: /backend/
  dns_type: logical_dns
  respect_dns_ttl: true
```

diff --git a/docs/emissary/latest/topics/using/method.md b/docs/emissary/latest/topics/using/method.md new file mode 100644 index 000000000..94185dcd0 --- /dev/null +++ b/docs/emissary/latest/topics/using/method.md @@ -0,0 +1,26 @@

# Method-based routing

$productName$ supports routing based on the HTTP method, matched either exactly or by regular expression.

## Using `method`

The `method` annotation specifies the HTTP method to match for a mapping. The value of the `method` annotation must be in all upper case.

For example:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: get
spec:
  hostname: "*"
  prefix: /backend/get_only/
  method: GET
  service: quote
```

## Using `method_regex`

When `method_regex` is set to `true`, the value of the `method` annotation will be interpreted as a regular expression.
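As a minimal sketch (the service name is an assumption for illustration), the following `Mapping` uses `method_regex` to match both `GET` and `HEAD` requests with a single rule:

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: get-or-head
spec:
  hostname: "*"
  prefix: /backend/
  method: "GET|HEAD"  # interpreted as a regex because method_regex is true
  method_regex: true
  service: quote
```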
diff --git a/docs/emissary/latest/topics/using/prefix-regex.md b/docs/emissary/latest/topics/using/prefix-regex.md new file mode 100644 index 000000000..04a6e4b89 --- /dev/null +++ b/docs/emissary/latest/topics/using/prefix-regex.md @@ -0,0 +1,25 @@

# Prefix regex

## Using `prefix` and `prefix_regex`

When the `prefix_regex` attribute is set to `true`, $productName$ configures a [regex route](https://www.envoyproxy.io/docs/envoy/v1.5.0/api-v1/route_config/route#route) instead of a prefix route in Envoy. **This means the entire path must match the regex specified, not only the prefix.**

## Example with a version in the URL

If the version is a path parameter and the resources are served by different services, then

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: qotm
spec:
  prefix: "/(v1|v2)/qotm/.*"
  prefix_regex: true
  service: qotm
```

will map requests under both `/v1/qotm/` and `/v2/qotm/` to the `qotm` service.

Note that enclosing regular expressions in quotes can be important to prevent backslashes from being doubled.

diff --git a/docs/emissary/latest/topics/using/query-parameters.md b/docs/emissary/latest/topics/using/query-parameters.md new file mode 100644 index 000000000..0bd5eb136 --- /dev/null +++ b/docs/emissary/latest/topics/using/query-parameters.md @@ -0,0 +1,70 @@

# Query parameter-based routing

$productName$ can route to target services based on HTTP query parameters with the `query_parameters` and `regex_query_parameters` specifications. Multiple mappings with different annotations can be applied to construct more complex routing rules.

## The `query_parameters` annotation

The `query_parameters` attribute is a dictionary of `query_parameter`: `value` pairs. $productName$ will only allow requests that match the specified `query_parameter`: `value` pairs to reach the target service.

You can also set the `value` of a query parameter to `true` to test for the existence of a query parameter.

### A basic example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  prefix: /backend/
  service: quote
  query_parameters:
    quote-mode: backend
    random-query-parameter: datawire
```

This will allow requests to `/backend/` to succeed only if the `quote-mode` query parameter has the value `backend` and the `random-query-parameter` has the value `datawire`.

### A conditional example

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-mode
spec:
  prefix: /backend/
  service: quote-mode
  query_parameters:
    quote-mode: true

---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-regular
spec:
  prefix: /backend/
  service: quote-regular
```

This will send requests that contain the `quote-mode` query parameter to the `quote-mode` target, while routing all other requests to the `quote-regular` target.

## `regex_query_parameters`

The following mapping will route requests whose `quote-mode` query parameter contains a value that matches the regex.

```yaml
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  regex_query_parameters:
    quote-mode: "^[a-z].*"
  prefix: /backend/
  service: quote
```

diff --git a/docs/emissary/latest/topics/using/rate-limits/index.md b/docs/emissary/latest/topics/using/rate-limits/index.md new file mode 100644 index 000000000..b65f8c5dd --- /dev/null +++ b/docs/emissary/latest/topics/using/rate-limits/index.md @@ -0,0 +1,199 @@

import Alert from '@material-ui/lab/Alert';

# Basic rate limiting

Rate limiting in $productName$ is composed of two parts:

* The [`RateLimitService`](../../running/services/rate-limit-service) resource tells $productName$ what external service to use for rate limiting.

  If $productName$ cannot contact the rate limit service, it will allow the request to be processed as if there were no rate limit service configuration.

* _Labels_ that get attached to requests.
A label is basic metadata that + is used by the `RateLimitService` to decide which limits to apply to + the request. + + + These labels require Mapping resources with apiVersion + getambassador.io/v2 or newer — if you're updating an old installation, check the + apiVersion! + + +Labels are grouped according to _domain_ and _group_: + +```yaml +labels: + "domain1": + - "group1": + - "my_label_specifier_1" + - "my_label_specifier_2" + - "group2": + - "my_label_specifier_3" + - "my_label_specifier_4" + "domain2": + - ... +``` + +The names of domains and groups are not interpreted by $productName$ in any way: +they are solely there to help configuration authors remember the different groupings. +Note that **at present, rate limiting supports just one domain**: the name of the +domain must be configured in the [`RateLimitService`](../../running/services/rate-limit-service). + + + +## Attaching labels to requests + +There are two ways of setting labels on a request: + +1. You can set labels on an individual [`Mapping`](../mappings). These labels + will only apply to requests that use that `Mapping`. + + ```yaml + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + metadata: + name: foo-mapping + spec: + hostname: "*" + prefix: /foo/ + service: foo + labels: + "domain1": + - "group1": + - "my_label_specifier_1" + - "my_label_specifier_2" + - "group2": + - "my_label_specifier_3" + - "my_label_specifier_4" + "domain2": + - ... + ``` + +2. You can set global labels in the [`ambassador` `Module`](../../running/ambassador). + These labels will apply to _every_ request that goes through $productName$. + + ```yaml + --- + apiVersion: getambassador.io/v3alpha1 + kind: Module + metadata: + name: ambassador + spec: + config: + default_labels: + "domain1": + defaults: + - "my_label_specifier_a" + - "my_label_specifier_b" + "domain2": + defaults: + - "my_label_specifier_c" + - "my_label_specifier_d" + ``` + + If a `Mapping` and the defaults both give label groups for the same domain, the + default labels are prepended to each label group from the `Mapping`. If the `Mapping` + does not give any labels for that domain, the global labels are placed into a new + label group named "default" for that domain. + +Each label group is a list of labels; each label is a key/value pair. Since the label +group is a list rather than a map: +- it is possible to have multiple labels with the same key, and +- the order of labels matters. + +> Note: The terminology used by the Envoy documentation differs from +> the terminology used by $productName$: +> +> | $productName$ | Envoy | +> |-----------------|-------------------| +> | label group | descriptor | +> | label | descriptor entry | +> | label specifier | rate limit action | + +The `Mapping`s' listing of the groups of specifiers have names for the +groups; the group names are useful for humans dealing with the YAML, +but are ignored by $productName$, all $productName$ cares about are the +*contents* of the groupings of label specifiers. + +There are 5 types of label specifiers in $productName$: + + + +1. `source_cluster` + + ```yaml + source_cluster: + key: source_cluster + ``` + + Sets the label `source_cluster=«Envoy source cluster name»"`. The Envoy + source cluster name is the name of the Envoy cluster that the request came + in on. + + The syntax of this label currently _requires_ `source_cluster: {}`. + +2. 
`destination_cluster` + + ```yaml + destination_cluster: + key: destination_cluster + ``` + + Sets the label `destination_cluster=«Envoy destination cluster name»"`. The Envoy + destination cluster name is the name of the Envoy cluster to which the `Mapping` + routes the request. You can get the name for a cluster from the + [diagnostics service](../../running/diagnostics). + + The syntax of this label currently _requires_ `destination_cluster: {}`. + +3. `remote_address` + + ```yaml + remote_address: + key: remote_address + ``` + + Sets the label `remote_address=«IP address of the client»"`. The IP address of + the client will be taken from the `X-Forwarded-For` header, to correctly manage + situations with L7 proxies. This requires that $productName$ be correctly + [configured to communicate](../../../howtos/configure-communications). + + The syntax of this label currently _requires_ `remote_address: {}`. + +4. `request_headers` + + ```yaml + request_headers: + header_name: "header-name" + key: mykey + ``` + + If a header named `header-name` is present, set the label `mykey=«value of the header»`. + If no header named `header-name` is present, **the entire label group is dropped**. + +5. `generic_key` + + ```yaml + generic_key: + key: mykey + value: myvalue + ``` + + Sets the label `«mykey»=«myval»`. Note that supplying a `key` is supported only + with the Envoy V3 API. + +## Rate limiting requests based on their labels + +This is determined by your `RateLimitService` implementation. See the +[Basic Rate Limiting tutorial](../../../howtos/rate-limiting-tutorial) for an +example `RateLimitService` implementation for $productName$. + +If you'd rather not write your own `RateLimitService` implementation, +$AESproductName$ provides a `RateLimitService` implementation that is +configured by a `RateLimit` custom resource. See the +[$AESproductName$ RateLimit Reference](/docs/edge-stack/latest/topics/using/rate-limits/rate-limits/) +for more information. diff --git a/docs/emissary/latest/topics/using/redirects.md b/docs/emissary/latest/topics/using/redirects.md new file mode 100644 index 000000000..f55c467d0 --- /dev/null +++ b/docs/emissary/latest/topics/using/redirects.md @@ -0,0 +1,142 @@ +# Redirects + +$productName$ can perform 3xx redirects on `Mapping`s to a different host, with various options to redirect the path and to return a different 3xx response code instead of the default 301. + +## Schema + +| Name | Type | Description | +| --- | --- | --- | +| `spec.host_redirect` | Boolean | This is *required* to be set to `true` to use any type of redirect, otherwise the request will be proxied instead of redirected. | +| `spec.path_redirect` | String | Optional, changes the path for the redirect. | +| `spec.prefix_redirect` | String | Optional, matches the `prefix` value and replaces it with the `prefix_redirect` value. | +| `spec.regex_redirect` | String | Optional, uses regex to match and replace the `prefix` value. | +| `spec.redirect_response_code` | Integer | Optional, changes the response code from the default 301, valid values are 301, 302, 303, 307, and 308. | +| `spec.config. x_forwarded_proto_redirect` | Boolean | Redirect only the originating HTTP requests to HTTPS, allowing the originating HTTPS requests to pass through. | +| `spec.config. use_remote_address` | Boolean | Required to be set to `false` to use the `x_forwarded_proto_redirect` feature. 
+
+## Examples
+
+### Basic redirect
+
+To effect any type of HTTP redirect, the `Mapping` *must* set `host_redirect` to `true`, with `service` set to the host to which the client should be redirected:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: redirect
+spec:
+  prefix: /redirect/
+  service: httpbin.org
+  host_redirect: true
+  hostname: '*'
+```
+
+Using this `Mapping`, a request to `http://$AMBASSADOR_URL/redirect/` will be redirected to `http://httpbin.org/redirect/`.
+
+> As always with $productName$, the trailing `/` on any URL used with a
+`Mapping` is required!
+
+### Path redirect
+
+The `Mapping` may optionally also set additional properties to customize the behavior of the HTTP redirect response.
+
+To also change the path portion of the URL during the redirect, set `path_redirect`:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: redirect
+spec:
+  hostname: '*'
+  prefix: /redirect/
+  service: httpbin.org
+  host_redirect: true
+  path_redirect: /ip
+```
+
+Here, a request to `http://$AMBASSADOR_URL/redirect/` will be redirected to `http://httpbin.org/ip/`.
+
+### Prefix redirect
+
+To change only a prefix of the path portion of the URL, set `prefix_redirect`:
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: redirect
+spec:
+  hostname: '*'
+  prefix: /redirect/path/
+  service: httpbin.org
+  host_redirect: true
+  prefix_redirect: /ip
+```
+
+Now, a request to `http://$AMBASSADOR_URL/redirect/path/` will be redirected to `http://httpbin.org/ip/`. The prefix `/redirect/path/` was matched and replaced with `/ip/`.
+
+### Regex redirect
+
+`regex_redirect` matches a regular expression to replace instead of a fixed prefix.
+[See more information about using regex with $productName$](../rewrites/#regex_rewrite).
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: redirect
+spec:
+  prefix: /foo/
+  service: httpbin.org
+  host_redirect: true
+  regex_redirect:
+    pattern: '/foo/([0-9]*)/list'
+    substitution: '/bar/\1'
+```
+
+A request to `http://$AMBASSADOR_URL/foo/12345/list` will be redirected to
+`http://$AMBASSADOR_URL/bar/12345`.
+
+### Redirect response code
+
+To change the HTTP response code returned by $productName$, set `redirect_response_code`. If this is not set, 301 is returned by default. Valid values include 301, 302, 303, 307, and 308. This
+can be used with any type of redirect.
+
+```yaml
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: redirect
+spec:
+  prefix: /ip/
+  service: httpbin.org
+  host_redirect: true
+  redirect_response_code: 302
+```
+
+A request to `http://$AMBASSADOR_URL/ip/` will result in an HTTP 302 redirect to `http://httpbin.org/ip`.
+
+### X-FORWARDED-PROTO redirect
+
+When TLS is terminated at an external layer 7 load balancer, you typically want to redirect only the originating HTTP requests to HTTPS, while letting the originating HTTPS requests pass through.
+
+This distinction between an originating HTTP request and an originating HTTPS request is made based on the `X-FORWARDED-PROTO` header that the external layer 7 load balancer adds to every request it forwards after TLS termination.
+
+To enable this `X-FORWARDED-PROTO`-based HTTP-to-HTTPS redirection, add an `x_forwarded_proto_redirect: true` field to the `ambassador Module`'s configuration. Note that when this feature is enabled, `use_remote_address` MUST be set to false.
+
+An example configuration follows:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Module
+metadata:
+  name: ambassador
+spec:
+  config:
+    x_forwarded_proto_redirect: true
+    use_remote_address: false
+```
+
+Note: Setting `x_forwarded_proto_redirect: true` will impact all your $productName$ mappings. Every HTTP request to $productName$ will only be allowed to pass if it has an `X-FORWARDED-PROTO: https` header.
 diff --git a/docs/emissary/latest/topics/using/retries.md b/docs/emissary/latest/topics/using/retries.md
new file mode 100644
index 000000000..d018ab594
--- /dev/null
+++ b/docs/emissary/latest/topics/using/retries.md
@@ -0,0 +1,84 @@
+# Automatic retries
+
+Sometimes requests fail. When these failures are caused by transient issues, $productName$ can automatically retry the request.
+
+Retry policy can be set for all $productName$ mappings in the [`ambassador Module`](../../running/ambassador), or set per [`Mapping`](../mappings). Generally speaking, you should set `retry_policy` on a per-`Mapping` basis. Global retries can easily result in unexpected cascading failures.
+
+> Note that when setting `retry_policy`, you should also consider adjusting `max_retries` in the [circuit breaker](https://www.getambassador.io/docs/edge-stack/pre-release/topics/using/circuit-breakers/) configuration to account for all desired retries.
+
+## Configuring retries
+
+The `retry_policy` attribute configures automatic retries. The following fields are supported:
+
+```yaml
+retry_policy:
+  retry_on:
+  num_retries:
+  per_try_timeout:
+```
+
+### `retry_on`
+
+(Required) Specifies the condition under which $productName$ retries a failed request.
+
+| Value | Description |
+| --- | --- |
+|`5xx`| Retries if the upstream service responds with any 5xx code or does not respond at all
+|`gateway-error`| Similar to `5xx`, but applies only to 502, 503, and 504 responses
+|`connect-failure`| Retries on a connection failure to the upstream service (included in `5xx`)
+|`retriable-4xx`| Retries on a retriable 4xx response (currently only 409)
+|`refused-stream`| Retries if the upstream service sends a REFUSED_STREAM error (included in `5xx`)
+|`retriable-status-codes`| Retries based on the status codes set in the `x-envoy-retriable-status-codes` header. If that header isn't set, $productName$ will not retry the request.
+
+For more details on each of these values, see the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/v1.9.0/configuration/http_filters/router_filter#x-envoy-retry-on).
+
+### `num_retries`
+
+(Default: 1) Specifies the number of retries to execute for a failed request.
+
+### `per_try_timeout`
+
+(Default: the global request timeout) Specifies the timeout for each retry. Must be expressed in seconds or nanoseconds, e.g., `1s`, `1500ns`.
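+
+Putting these fields together: the sketch below (hypothetical `Mapping` name; it assumes the same `quote` service used elsewhere in these docs) relies on `retriable-status-codes`, so the retried status codes are chosen by the client rather than fixed in the `Mapping`:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-retriable
+spec:
+  hostname: '*'
+  prefix: /backend/
+  service: quote
+  retry_policy:
+    retry_on: "retriable-status-codes"
+    num_retries: 3
+    per_try_timeout: "1s"
+```
+
+A client can then opt into retries for specific codes on a per-request basis, e.g. `curl -H "x-envoy-retriable-status-codes: 409,503" https://$AMBASSADOR_URL/backend/`.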
+ +## Examples + +A per mapping retry policy: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-backend +spec: + hostname: '*' + prefix: /backend/ + service: quote + retry_policy: + retry_on: "5xx" + num_retries: 10 +``` + +A global retry policy (not recommended): + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Module +metadata: + name: ambassador +spec: + config: + retry_policy: + retry_on: "retriable-4xx" + num_retries: 4 +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: quote-backend +spec: + prefix: /backend/ + service: quote + hostname: '*' +``` diff --git a/docs/emissary/latest/topics/using/rewrites.md b/docs/emissary/latest/topics/using/rewrites.md new file mode 100644 index 000000000..44d0a961c --- /dev/null +++ b/docs/emissary/latest/topics/using/rewrites.md @@ -0,0 +1,100 @@ +# Rewrites + +Once $productName$ uses a prefix to identify the service to which a given request should be passed, it can rewrite the URL before handing it off to the service. + +There are two approaches for rewriting: `rewrite` for simpler scenarios and `regex_rewrite` for more advanced rewriting. + +**Please note that** only one of these two can be configured for a mapping **at the same time**. As a result $productName$ ignores `rewrite` when `regex_rewrite` is provided. + +## `rewrite` + +By default, the `prefix` is rewritten to `/`, so e.g., if we map `/backend-api/` to the service `service1`, then + + +http://ambassador.example.com/backend-api/foo/bar + + +* ```prefix```: /backend-api/ which rewrites to / by default. +* ```rewrite```: / +* ```remainder```: foo/bar + + +would effectively be written to + + +http://service1/foo/bar + + +* ```prefix```: was /backend-api/ +* ```rewrite```: / (by default) + +You can change the rewriting: for example, if you choose to rewrite the prefix as /v1/ in this example, the final target would be: + + + +http://service1/v1/foo/bar + + +* ```prefix```: was /backend-api/ +* ```rewrite```: /v1/ + +And, of course, you can choose to rewrite the prefix to the prefix itself, so that + + +http://ambassador.example.com/backend-api/foo/bar + + +* ```prefix```: /backend-api/ +* ```rewrite```: /backend-api/ + +would be "rewritten" as: + + +http://service1/backend-api/foo/bar + + +To prevent $productName$ rewrite the matched prefix to `/` by default, it can be configured to not change the prefix as it forwards a request to the upstream service. To do that, specify an empty `rewrite` directive: + +- `rewrite: ""` + +In this case requests that match the prefix /backend-api/ will be forwarded to the service without any rewriting: + + +http://ambassador.example.com/backend-api/foo/bar + + +would be forwarded to: + + +http://service1/backend-api/foo/bar + + +## `regex_rewrite` + +In some cases, a portion of URL needs to be extracted before making the upstream service URL. For example, suppose that when a request is made to `foo/12345/list`, the target URL must be rewritten as `/bar/12345`. We can do this as follows: + +``` +prefix: /foo/ +regex_rewrite: + pattern: '/foo/([0-9]*)/list' + substitution: '/bar/\1' +``` +`([0-9]*)` can be replaced with `(\d)` for simplicity. 
+
+
+http://ambassador.example.com/foo/12345/list
+
+
+* ```prefix```: /foo/
+* ```pattern```: /foo/12345/list where `12345` is captured by `([0-9]*)`
+* ```substitution```: /bar/12345 where `12345` is substituted by `\1`
+
+would be forwarded to:
+
+
+http://service1/bar/12345
+
+
+More than one group can be captured in the `pattern`, to be referenced by `\2`, `\3`, and so on, in the `substitution` section.
+
+For more information on how a `Mapping` can be configured, see [Mappings](../mappings).
 diff --git a/docs/emissary/latest/topics/using/shadowing.md b/docs/emissary/latest/topics/using/shadowing.md
new file mode 100644
index 000000000..dd95fbbaf
--- /dev/null
+++ b/docs/emissary/latest/topics/using/shadowing.md
@@ -0,0 +1,78 @@
+# Traffic shadowing
+
+Traffic shadowing is a deployment pattern where production traffic is asynchronously copied to a non-production service for testing. Shadowing is a close cousin of two other commonly known deployment patterns, [canary releases](../canary) and blue/green deployments. Shadowing traffic has several important benefits over blue/green and canary testing:
+
+* Zero production impact. Since traffic is duplicated, any bugs in services that are processing shadow data have no impact on production.
+
+* Test persistent services. Since there is no production impact, shadowing provides a powerful technique to test persistent services. You can configure your test service to store data in a test database, and shadow traffic to your test service for testing. Both blue/green deployments and canary deployments require more machinery for this kind of testing.
+
+* Test the actual behavior of a service. When used in conjunction with tools such as [Twitter's Diffy](https://github.com/twitter/diffy), shadowing lets you measure the behavior of your service and compare it with an expected output. A typical canary rollout catches exceptions (e.g., HTTP 500 errors), but what happens when your service has a logic error and is not returning an exception?
+
+## Shadowing and $productName$
+
+$productName$ lets you easily shadow traffic to a given endpoint. In $productName$, only requests are shadowed; responses from the shadow service are dropped. All normal metrics are collected for the shadow services. This makes it easy to compare the performance of the shadow service versus the production service on the same data set. $productName$ also prioritizes the production path, i.e., it will return responses from the production service without waiting for any responses from the shadow service.
+
+![Shadowing](../../images/shadowing.png)
+
+## The `shadow` Mapping
+
+In $productName$, you can enable shadowing for a given mapping by setting `shadow: true` in your `Mapping`. One copy of the request proceeds as if the shadowing `Mapping` were not present: the request is handed onward per the `service`(s) defined by the non-shadow `Mapping`s, and the reply from whichever `service` is picked is handed back to the client.
+
+The second copy is handed to the `service` defined by the `Mapping` with `shadow` set. Any reply from this `service` is ignored, rather than being handed back to the client. Only a single `shadow` per resource can be specified (i.e., you can't shadow the same resource to more than one additional destination); if you try, $productName$ will indicate an error in the diagnostic service, and only one `shadow` will be used. If you need to implement this type of use case, you should shadow traffic to a multicast proxy (or equivalent) instead.
+
+You can, however, shadow multiple different services, as in the sketch below.
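+
+For example, a minimal sketch (hypothetical service and `Mapping` names) that shadows two different paths to two different test services:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: users-shadow
+spec:
+  hostname: '*'
+  prefix: /users/
+  service: users-test.default
+  shadow: true
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: orders-shadow
+spec:
+  hostname: '*'
+  prefix: /orders/
+  service: orders-test.default
+  shadow: true
+```
+
+Each shadow `Mapping` duplicates the traffic matching its own `prefix`; each production route still needs its own non-shadow `Mapping`.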
+ +During shadowing, the host header is modified such that `-shadow` is appended. + +## Example + +The following example may help illustrate how shadowing can be used. This first attribute sets up a basic mapping between the `myservice` Kubernetes service and the `/myservice/` prefix, as expected. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: myservice +spec: + hostname: '*' + prefix: /myservice/ + service: myservice.default +``` + +What if we want to shadow the traffic to `myservice`, and send that exact same traffic to `myservice-shadow`? We can create a new mapping that does this: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: myservice-shadow +spec: + hostname: '*' + prefix: /myservice/ + service: myservice-shadow.default + shadow: true +``` + +The `prefix` is set to be the same as the first mapping, which tells $productName$ which production traffic to shadow. The destination service, where the shadow traffic is routed, is a *different* Kubernetes service, `myservice-shadow`. Finally, the `shadow: true` attribute actually enables shadowing. + +### Shadow traffic weighting + +It is possible to shadow a portion of the traffic by specifying the `weight` in the mapping. For example: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: Mapping +metadata: + name: myservice-shaddow +spec: + hostname: '*' + prefix: /myservice/ + service: myservice-shadow.default + shadow: true + weight: 10 +``` + +In the example above, only 10% of the traffic will be forwarded to the shadowing service. diff --git a/docs/emissary/latest/topics/using/tcpmappings.md b/docs/emissary/latest/topics/using/tcpmappings.md new file mode 100644 index 000000000..f246e799a --- /dev/null +++ b/docs/emissary/latest/topics/using/tcpmappings.md @@ -0,0 +1,300 @@ +# `TCPMapping` resources + +In addition to managing HTTP, gRPC, and WebSockets at layer 7, $productName$ can also manage TCP connections at layer 4. The core abstraction used to support TCP connections is the `TCPMapping`. + +An $productName$ `TCPMapping` associates TCP connections with upstream _services_. +Cleartext TCP connections are defined by destination IP address and/or destination TCP port; +TLS-encrypted TCP connections can also be defined by the hostname presented using SNI. +A service is exactly the same as in HTTP [`Mappings`](../mappings/) and other $productName$ resources. + +## TCPMapping configuration + +Like all native $productName$ resources, `TCPMappings` have an +`ambassador_id` field to select which $productName$ installations take +notice of it: + +| Attribute | Description | Type | Default value | +|:----------------|:--------------------------------------------------------------------------------------------------------------|:-----------------|----------------------------------| +| `ambassador_id` | [A list of `ambassador_id`s which should pay attention to this resource](../../running/running#ambassador_id) | array of strings | optional; default is ["default"] | + +### Downstream configuration + +The downstream configuration refers to the connection between the end-client and $productName$. 
+
+| Attribute | Description | Type | Default value |
+|:----------|:------------|:-----|:--------------|
+| `port`    | The TCP port number on which $productName$ should listen for this `TCPMapping`; it may or may not correspond to a [`Listener` resource](../../running/listener) | string | required; no default |
+| `host`    | If non-empty, [terminate TLS](#tls-termination) on this port, using this host glob for SNI-based routing | string | optional; if not present, do not terminate TLS on this port |
+| `address` | Which IP address $productName$ should listen on | string | optional; if not present, accept connections on all IP addresses |
+| `weight`  | The (integer) percentage of traffic for this resource when [canarying](../canary) between multiple `TCPMappings` | integer | optional; default is to not canary |
+
+If the `port` does not pair with an actual existing
+[`Listener`](../../running/listener), then an appropriate internal
+`Listener` is automatically created.
+
+If the `Listener` does *not* terminate TLS (controlled by
+`Listener.spec.protocolStack` and by `TCPMapping.spec.host`), then no
+[`Hosts`](../../running/host-crd) may associate with the `Listener`,
+and only one `TCPMapping` (or set of [canaried](../canary)
+`TCPMappings`; see the `weight` attribute) may associate with the
+`Listener`.
+
+If the `Listener` *does* terminate TLS, then any number of
+`TCPMappings` and `Hosts` may associate with the `Listener`, and are
+selected between using SNI.
+
+It is an error if `TCPMapping.spec.host` and
+`Listener.spec.protocolStack` do not agree about whether TLS should be
+terminated, and in that case the `TCPMapping` will be discarded.
+
+#### TLS termination
+
+If the `host` field is non-empty, then the `TCPMapping` will terminate
+TLS when listening for connections from end-clients.
+
+To do this, $productName$ needs a TLS certificate and configuration;
+these can be provided in two ways:
+
+First, $productName$ checks for any [`Host`
+resources](../../running/host-crd) with TLS configured whose
+`Host.spec.hostname` glob-matches the `TCPMapping.spec.host`; if such
+a `Host` exists, then its TLS certificate and configuration are used.
+
+Second, if no such `Host` is found, then $productName$ checks for
+any [`TLSContext` resources](../../running/tls) that have an item in
+`TLSContext.spec.hosts` that exact-matches the `TCPMapping.spec.host`;
+if such a `TLSContext` exists, then it and its certificate are used.
+These host fields may _contain_ globs, but the globs are not expanded
+when matching; for example, a `TLSContext` host string of
+`*.example.com` would not match a `TCPMapping` host of
+`foo.example.com`, but would match a `TCPMapping` host of
+`*.example.com`.
+
+If no such `Host` or `TLSContext` is found, it is an error, and the
+`TCPMapping` is discarded.
+
+### Upstream configuration
+
+The upstream configuration refers to the connection between
+$productName$ and the service that it is a gateway to.
+ +| Attribute | Description | Type | Default value | +|:-------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|:--------------------------------------------------------------------------------------------------------| +| `service` | The service to talk to; a string of the format `scheme://host:port`, where `scheme://` and `:port` are optional. If the scheme is `https`, then TLS is originated, otherwise the scheme is ignored. | string | required; no default; if originating TLS the default port is 443, otherwise the default port is 80 | +| `resolver` | The [resolver](../../running/resolvers) to use when resolving the hostname in `service` | string | optional | +| `enable_ipv4` | Whether to enable IPv4 DNS lookups when resolving the hostname in `service`; has no affect if the hostname is an IP address or using a non-DNS `resolver`. | Boolean | optional; default is true unless set otherwise by the [`ambassador` `Module`](../../running/ambassador) | +| `enable_ipv6` | Whether to enable IPv6 DNS lookups when resolving the hostname in `service`; has no affect if the hostname is an IP address or using a non-DNS `resolver`. | Boolean | optional; default is true unless set otherwise by the [`ambassador` `Module`](../../running/ambassador) | +| `tls` | The name of a [`TLSContext`](../../running/tls) to originate TLS; TLS is originated if `tls` is non-empty. | string | optional; default is to not use a `TLSContext` | +| `circuit_breakers` | Circuit breakers, same as for [HTTP `Mappings`](../circuit-breakers) | array of objects | optional; default is set by the [`ambassador` `Module`](../../running/ambassador) | +| `idle_timeout_ms` | The timeout, in milliseconds, after which the connection will be terminated if no traffic is seen. | integer | optional; default is no timeout | + +If both `enable_ipv4` and `enable_ipv6` are true, $productName$ will prefer IPv6 to IPv4. See the [`ambassador` `Module`](../../running/ambassador) documentation for more information. + +The values for the scheme-part of the `service` are a bit of a +misnomer; despite the `https://` string being recognized, it does not +imply anything about whether the traffic is HTTP; just whether it is +encrypted. + +If `service` does not specify a port number: if TLS is *not* being +originated, then a default port number of `80` is used; if TLS *is* +being originated (either because the `service` says `https://` or +because `tls` is set), then a default port number of `443` is used +(even if the service says `http://`). + +The default `resolver` is a KubernetesServiceResolver, which takes a [namespace-qualified DNS name](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services). +Given that `AMBASSADOR_NAMESPACE` is correctly set, $productName$ can map to services in other namespaces: +- `service: servicename` will route to a service in the same namespace as $productName$, and +- `service: servicename.namespace` will route to a service in a different namespace. + +#### TLS origination + +If the `TCPMapping.spec.service` starts with `https://`, or if the +`TCPMapping.spec.tls` is set, then the `TCPMapping` will originate TLS +when dialing out to the service. 
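+
+As a quick sketch (hypothetical names; fuller variations appear in the [Examples](#examples) below), the following originates TLS to the upstream using a client certificate taken from a `TLSContext`:
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TLSContext
+metadata:
+  name: origination-context
+spec:
+  secret: othersecret
+---
+apiVersion: getambassador.io/v3alpha1
+kind: TCPMapping
+metadata:
+  name: example
+spec:
+  port: 2222
+  tls: origination-context
+  service: upstream-host:9999
+```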
+ +If originating TLS, but `TCPMapping.spec.tls` is not set, then +$productName$ will use a default TLS client configuration, and will +not provide a client certificate. + +If `TCPMapping.spec.tls` is set, then $productName$ looks for a +[`TLSContext` resource](../../running/tls) with that name (the +`TLSContext` may be found in _any_ namespace). + +### `TCPMapping` and TLS + +The `TCPMapping.spec.host` attribute determines whether $productName$ will _terminate_ TLS when a client connects to $productName$. +The `TCPMapping.spec.service` and `TCPMapping.spec.tls` attributes work together to determine whether $productName$ will _originate_ TLS when connecting to an upstream. +The two are _totally_ independent. +See the sections on [TLS termination](#tls-termination) and [TLS origination](#tls-origination), respectively. + +## Examples + +### neither terminating nor originating TLS + +If `host` is not set, then TLS is not terminated. +If `service` does not start with `https://` and `tls` is empty, then TLS is not originated. +So, if both of these are true, then$productName$ simply proxies bytes between the client and the upstream; TLS may or may not be involved, $productName$ doesn't care. +You should specify in `service` which port to dial to; if you don't, $productName$ will use port 80 because it is not originating TLS. + +So, for example, + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: ssh +spec: + port: 2222 + service: upstream:22 +``` + +could be used to relay an SSH connection on port 2222, or + +```yaml +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: cockroach +spec: + port: 26257 + service: cockroach:26257 +``` + +could proxy a CockroachDB connection. + +### terminating TLS, but not originating it + +If `host` is set, then TLS is terminated. +If `service` does not start with `https://` and `tls` is empty, then TLS is not originated. +In this case, $productName$ will terminate the TLS connection, require that the host offered with SNI match the `host` attribute, and then make a **cleartext** connection to the upstream host. +You should specify in `service` which port to dial to; if you don't, $productName$ will use port 80 because it is not originating TLS. + +This can be useful for doing host-based TLS proxying of arbitrary protocols, allowing the upstream to not have to care about TLS. + +Note that this case **requires** that you have created a termination `TLSContext` or `Host` that matches the `TCPMapping.spec.host`. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: my-context +spec: + hosts: + - my-host-1 + - my-host-2 + secret: supersecret +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: my-host-1 +spec: + port: 2222 + host: my-host-1 + service: upstream-host-1:9999 +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: my-host-2 +spec: + port: 2222 + host: my-host-2 + service: upstream-host-2:9999 +``` + +The above will accept a TLS connection with SNI on port 2222. +If the client requests SNI host `my-host-1`, the decrypted traffic will be relayed to `upstream-host-1`, port 9999. +If the client requests SNI host `my-host-2`, the decrypted traffic will be relayed to `upstream-host-2`, port 9999. +Any other SNI host will cause the TLS handshake to fail. + +#### both terminating and originating TLS, with and without a client certificate + +If `host` is set, then TLS is terminated. 
+In this case, $productName$ will terminate the incoming TLS connection, require that the host offered with SNI match the `host` attribute, and then make a **TLS** connection to the upstream host. + +If `tls` is non-empty, then TLS is originated with a client certificate. +In this case, $productName$ will use the `TLSContext` referred to by `tls` to determine the certificate offered to the upstream service. + +If `service` starts with `https://`, then then TLS is originated without a client certificate (unless `tls` is also set) + +In either case, you should specify in `service` which port to dial to; if you don't, $productName$ will use port 443 because it is originating TLS. + +This is useful for doing host routing while ensuring that data is always encrypted while in-transit. + +Note that this case **requires** that you have created a termination `TLSContext` or `Host` that matches the `TCPMapping.spec.host`. + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: my-context +spec: + hosts: + - my-host-1 + - my-host-2 + secret: supersecret +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: origination-context +spec: + secret: othersecret +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: test-1 +spec: + port: 2222 + host: my-host-1 + service: https://upstream-host-1:9999 +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: test-2 +spec: + port: 2222 + host: my-host-2 + tls: origination-context + service: https://upstream-host-2:9999 +``` + +The above will accept a TLS connection with SNI on port 2222. + +If the client requests SNI host `my-host-1`, the traffic will be relayed over a TLS connection to `upstream-host-1`, port 9999. No client certificate will be offered for this connection. + +If the client requests SNI host `my-host-2`, the decrypted traffic will be relayed to `upstream-host-2`, port 9999. The client certificate from `origination-context` will be offered for this connection. + +Any other SNI host will cause the TLS handshake to fail. + +#### originating TLS, but not terminating it + +Here, $productName$ will accept the connection **without terminating TLS**, then relay traffic over a **TLS** connection upstream. This is probably useful only to accept unencrypted traffic and force it to be encrypted when it leaves $productName$. + +Example: + +```yaml +--- +apiVersion: getambassador.io/v3alpha1 +kind: TLSContext +metadata: + name: origination-context +spec: + secret: othersecret +--- +apiVersion: getambassador.io/v3alpha1 +kind: TCPMapping +metadata: + name: test +spec: + port: 2222 + service: https://upstream-host:9999 +``` + +The example above will accept **any** connection to port 2222 and relay it over a **TLS** connection to `upstream-host` port 9999. No client certificate will be offered. diff --git a/docs/emissary/latest/topics/using/timeouts.md b/docs/emissary/latest/topics/using/timeouts.md new file mode 100644 index 000000000..ae0041024 --- /dev/null +++ b/docs/emissary/latest/topics/using/timeouts.md @@ -0,0 +1,66 @@ +# Timeouts + +$productName$ enables you to control timeouts in several different ways. + +## Request timeout: `timeout_ms` + +`timeout_ms` is the end-to-end timeout for an entire user-level transaction in milliseconds. It begins after the full incoming request is received up until the full response stream is returned to the client. This timeout includes all retries. It can be disabled by setting the value to `0`. 
+
+Default: `3000`
+
+## Idle timeout: `idle_timeout_ms`
+
+`idle_timeout_ms` controls how long a connection should remain open when no traffic is being sent through it. `idle_timeout_ms` is distinct from `timeout_ms`: the idle timeout applies to both downstream and upstream request events and is reset every time an encode/decode event occurs or data is processed for the stream. `idle_timeout_ms` operates on a per-route basis and overrides the behavior of `cluster_idle_timeout_ms`. If not set, $productName$ will default to the value set by `cluster_idle_timeout_ms`. It can be disabled by setting the value to `0`.
+
+## Cluster max connection lifetime: `cluster_max_connection_lifetime_ms`
+
+`cluster_max_connection_lifetime_ms` controls how long upstream connections should remain open, regardless of whether traffic is currently being sent through them. By setting this value, you can control how long Envoy will hold open healthy connections to upstream services before it is forced to recreate them, providing natural connection churn. This helps in situations where the upstream cluster is represented by a service discovery mechanism that requires new connections in order to discover new backends. In particular, this helps with Kubernetes Service-based routing, where the set of upstream Endpoints changes, either naturally due to pod scale-up or explicitly because the label selector changed.
+
+## Cluster idle timeout: `cluster_idle_timeout_ms`
+
+`cluster_idle_timeout_ms` controls how long a connection stream will remain open if there are no active requests. This timeout operates based on outgoing requests to upstream services. It can be disabled by setting the value to `0`.
+
+Default: `3600000` (1 hour).
+
+## Connect timeout: `connect_timeout_ms`
+
+`connect_timeout_ms` sets the connection-level timeout for $productName$ to an upstream service at the network layer. This timeout runs until $productName$ can verify that a TCP connection has been established, including the TLS handshake. This timeout cannot be disabled.
+
+Default: `3000`
+
+## Module-only settings
+
+The following setting can only be configured in the [`ambassador` `Module`](../../running/ambassador), not on individual `Mapping`s.
+
+## Listener idle timeout: `listener_idle_timeout_ms`
+
+`listener_idle_timeout_ms` configures the [`idle_timeout`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/upstreams/http/v3/http_protocol_options.proto.html#extensions-upstreams-http-v3-httpprotocoloptions)
+in the Envoy HTTP Connection Manager and controls how long a connection from the
+downstream client to $productName$ will remain open if there are no active
+requests. Only full requests are counted towards this timeout, so clients
+sending TCP keepalives will not guarantee that a connection remains open. This
+timeout can be disabled by setting the value to `0`.
+
+Default: `3600000` (1 hour)
+
+**Caution** Disabling this timeout increases the likelihood of stream leaks due
+to missed FINs in the TCP connection.
+
+### Example
+
+The various timeouts are applied to a `Mapping` resource and can be combined.
+
+```yaml
+---
+apiVersion: getambassador.io/v3alpha1
+kind: Mapping
+metadata:
+  name: quote-backend
+spec:
+  hostname: '*'
+  prefix: /backend/
+  service: quote
+  timeout_ms: 4000
+  idle_timeout_ms: 500000
+  connect_timeout_ms: 2000
+```
 diff --git a/docs/emissary/latest/tutorials/dev-portal-tutorial.md b/docs/emissary/latest/tutorials/dev-portal-tutorial.md
new file mode 100644
index 000000000..d3c0d0a8a
--- /dev/null
+++ b/docs/emissary/latest/tutorials/dev-portal-tutorial.md
@@ -0,0 +1,29 @@
+# Dev Portal tutorial
+
+In this tutorial, you will access and explore some of the key features of the Dev Portal.
+
+## Prerequisites
+
+You must have [$productName$ installed](../getting-started/) in your
+Kubernetes cluster. This tutorial assumes you have connected your cluster to Ambassador Cloud and deployed the `quote` app with the
+`Mapping` from the [$productName$ tutorial](../getting-started/).
+
+If you have not already done so, store the $productName$ `LoadBalancer` address in a local environment variable:
+
+  ```
+  export AMBASSADOR_LB_ENDPOINT=$(kubectl -n ambassador get svc ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")
+  ```
+
+## Developer API documentation
+
+The `quote` service you just deployed publishes its API as an
+[OpenAPI (formerly Swagger)](https://swagger.io/solutions/getting-started-with-oas/)
+document. $productName$ automatically detects and publishes this documentation.
+This can help with internal and external developer onboarding by serving as a
+single point of reference for all of your microservice APIs.
+
+1. To visualize your service's API doc, go to [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's detailed view, and click on the "API" tab.
+
+1. Navigate to `https://$AMBASSADOR_LB_ENDPOINT/docs/` to see the
+publicly visible Developer Portal. Make sure you include the trailing `/`.
+This is a fully customizable portal that you can share with third parties who
+need information about your APIs.
 diff --git a/docs/emissary/latest/tutorials/getting-started.md b/docs/emissary/latest/tutorials/getting-started.md
new file mode 100644
index 000000000..1fb11cec2
--- /dev/null
+++ b/docs/emissary/latest/tutorials/getting-started.md
@@ -0,0 +1,156 @@
+---
+title: "Getting Started with $productName$"
+description: "Learn how to install $productName$ with either Helm or kubectl to get started routing traffic from the edge of your Kubernetes cluster to your services..."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import GettingStartedEmissary21Tabs from './gs-tabs'
+
+# $productName$ quick start
+
+<div class="docs-article-toc">
+<h3>Contents</h3>
+
+- [1. Installation](#1-installation)
+  - [Connecting your installation to Ambassador Cloud](#connecting-your-installation-to-ambassador-cloud)
+- [2. Routing traffic from the edge](#2-routing-traffic-from-the-edge)
+- [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+ +## 1. Installation + +We'll start by installing $productName$ into your cluster. + +**We recommend using Helm** but there are other options below to choose from. + + + +### Connecting your installation to Ambassador Cloud + +Now is a great moment to connect your new installation to Ambassador Cloud in order to fully leverage the power of $productName$ and the Developer Control Plane (DCP). + +1. Log in to [Ambassador Cloud](https://app.getambassador.io/cloud/services/) with GitHub, GitLab or Google and select your team account. + +2. At the top, click **Add Services** then click **Connection Instructions** in the "Connect your installation" section. + +3. Follow the prompts to name the cluster and click **Generate a Cloud Token**. + +4. Follow the prompts to install the cloud token into your cluster. + +5. When the token installation completes, your services will be listed in the DCP. + +Success! At this point, you have installed $productName$. Now let's get some traffic flowing to your services. + +## 2. Routing traffic from the edge + +$productName$ uses Kubernetes Custom Resource Definitions (CRDs) to declaratively define its desired state. The workflow you are going to build uses a simple demo app, a **`Listener` CRD**, and a **`Mapping` CRD**. The `Listener` CRD tells $productName$ what port to listen on, and the `Mapping` CRD tells $productName$ how to route incoming requests by host and URL path from the edge of your cluster to Kubernetes services. + +1. Start by creating a `Listener` resource for HTTP on port 8080: + + ``` + kubectl apply -f - < + This Listener will associate with all Hosts in your cluster. This is fine for the quickstart, but is likely not what you really want for production use.
+
+ Learn more about Listener.
+ Learn more about Host. + + +2. Apply the YAML for the "Quote" service. + + ``` + kubectl apply -f https://app.getambassador.io/yaml/v2-docs/$ossVersion$/quickstart/qotm.yaml + ``` + + The Service and Deployment are created in your default namespace. You can use kubectl get services,deployments quote to see their status. + +3. Generate the YAML for a `Mapping` to tell $productName$ to route all traffic inbound to the `/backend/` path to the `quote` Service. + + In this step, we'll be using the Mapping Editor, which you can find in the service details view of your [Ambassador Cloud connected installation](#connecting-your-installation-to-ambassador-cloud). + Open your browser to https://app.getambassador.io/cloud/services/quote/details and click on **New Mapping**. + + Default options are automatically populated. **Enable and configure the following settings**, then click **Generate Mapping**: + - **Path Matching**: `/backend/` + - **OpenAPI Docs**: `/.ambassador-internal/openapi-docs` + + ![](../images/mapping-editor.png) + + Whether you decide to automatically push the change to Git for this newly create Mapping resource or not, the resulting Mapping should be similar to the example below. + + **Apply this YAML to your target cluster now.** + + ```yaml + kubectl apply -f - <Victory! You have created your first $productName$ Listener and Mapping, routing a request from your cluster's edge to a service! + +## What's next? + +Explore some of the popular tutorials on $productName$: + +* [Configuring $productName$ communications](../../howtos/configure-communications): configure how $productName$ handles communication with clients +* [Intro to `Mappings`](../../topics/using/intro-mappings/): declaratively routes traffic from +the edge of your cluster to a Kubernetes service +* [`Listener` resource](../../topics/running/listener/): configure ports, protocols, and security options for your ingress. +* [`Host` resource](../../topics/running/host-crd/): configure a hostname and TLS options for your ingress. + +$productName$ has a comprehensive range of [features](/features/) to +support the requirements of any edge microservice. + +To learn more about how $productName$ works, read the [$productName$ Story](../../about/why-ambassador). 
diff --git a/docs/emissary/latest/tutorials/gs-tabs.js b/docs/emissary/latest/tutorials/gs-tabs.js new file mode 100644 index 000000000..e9b2ad7be --- /dev/null +++ b/docs/emissary/latest/tutorials/gs-tabs.js @@ -0,0 +1,134 @@ +import AppBar from '@material-ui/core/AppBar'; +import Box from '@material-ui/core/Box'; +import Tab from '@material-ui/core/Tab'; +import Tabs from '@material-ui/core/Tabs'; +import { makeStyles } from '@material-ui/core/styles'; +import PropTypes from 'prop-types'; +import React from 'react'; + +import CodeBlock from '../../../../../src/components/CodeBlock'; +import Icon from '../../../../../src/components/Icon'; + +function TabPanel(props) { + const { children, value, index, ...other } = props; + + return ( + + ); +} + +TabPanel.propTypes = { + children: PropTypes.node, + index: PropTypes.any.isRequired, + value: PropTypes.any.isRequired, +}; + +function a11yProps(index) { + return { + id: `simple-tab-${index}`, + 'aria-controls': `simple-tabpanel-${index}`, + }; +} + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + backgroundColor: 'transparent', + }, +})); + +export default function GettingStartedEmissary21Tabs(props) { + const version = props.version; + const classes = useStyles(); + const [value, setValue] = React.useState(0); + + const handleChange = (event, newValue) => { + setValue(newValue); + }; + + return ( +
+ + + } + label="Helm 3" + {...a11yProps(0)} + style={{ minWidth: '10%', textTransform: 'none' }} + /> + } + label="Kubernetes YAML" + {...a11yProps(1)} + style={{ minWidth: '10%', textTransform: 'none' }} + /> + + + + {/*Helm 3 install instructions*/} + + + {'# Add the Repo:' + + '\n' + + 'helm repo add datawire https://app.getambassador.io' + + '\n' + + 'helm repo update' + + '\n \n' + + '# Create Namespace and Install:' + + '\n' + + 'kubectl create namespace emissary && \\' + + '\n' + + `kubectl apply -f https://app.getambassador.io/yaml/emissary/${version}/emissary-crds.yaml` + + '\n \n' + + 'kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system' + + '\n \n' + + 'helm install emissary-ingress --namespace emissary datawire/emissary-ingress && \\' + + '\n' + + 'kubectl -n emissary wait --for condition=available --timeout=90s deploy -lapp.kubernetes.io/instance=emissary-ingress' + + '\n'} + + + + + {/*YAML install instructions*/} + + + {'kubectl create namespace emissary && \\' + + '\n' + + `kubectl apply -f https://app.getambassador.io/yaml/emissary/${version}/emissary-crds.yaml && \\` + + '\n' + + 'kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system' + + '\n \n' + + `kubectl apply -f https://app.getambassador.io/yaml/emissary/${version}/emissary-emissaryns.yaml && \\` + + '\n' + + 'kubectl -n emissary wait --for condition=available --timeout=90s deploy -lproduct=aes' + + '\n'} + + + +
+ ); +} diff --git a/docs/emissary/latest/tutorials/gs-tabs2.js b/docs/emissary/latest/tutorials/gs-tabs2.js new file mode 100644 index 000000000..bfd950477 --- /dev/null +++ b/docs/emissary/latest/tutorials/gs-tabs2.js @@ -0,0 +1,174 @@ +import AppBar from '@material-ui/core/AppBar'; +import Box from '@material-ui/core/Box'; +import Tab from '@material-ui/core/Tab'; +import Tabs from '@material-ui/core/Tabs'; +import { makeStyles } from '@material-ui/core/styles'; +import PropTypes from 'prop-types'; +import React from 'react'; + +import CodeBlock from '../../../../../src/components/CodeBlock'; + +function TabPanel(props) { + const { children, value, index, ...other } = props; + + return ( + + ); +} + +TabPanel.propTypes = { + children: PropTypes.node, + index: PropTypes.any.isRequired, + value: PropTypes.any.isRequired, +}; + +function a11yProps(index) { + return { + id: `simple-tab-${index}`, + 'aria-controls': `simple-tabpanel-${index}`, + }; +} + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + backgroundColor: 'transparent', + }, +})); + +export default function SimpleTabs() { + const classes = useStyles(); + const [value, setValue] = React.useState(0); + + const handleChange = (event, newValue) => { + setValue(newValue); + }; + + return ( +
+ + + + + + + + + + {/*Helm 3 token install instructions*/} + Log in to{' '} + Ambassador Cloud. + Click Connect my cluster to Ambassador Cloud, then{' '} + Connect via Helm. The slideout contains instructions with a + unique cloud-connect-token that is used to connect your + cluster to your Ambassador Cloud account. +
+ Run the following command, replacing $TOKEN + with your token: + + {'helm upgrade ambassador --namespace ambassador datawire/ambassador \\' + + '\n' + + ' --set agent.cloudConnectToken=$TOKEN && \\' + + '\n' + + 'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes'} + +
+ + + {/*Helm 2 token install instructions*/} + Log in to{' '} + Ambassador Cloud. + Click Connect my cluster to Ambassador Cloud, then{' '} + Connect via Helm. The slideout contains instructions with a + unique cloud-connect-token that is used to connect your + cluster to your Ambassador Cloud account. +
+ Run the following command, replacing $TOKEN + with your token: + + {'helm upgrade --namespace ambassador ambassador datawire/ambassador \\' + + '\n' + + ' --set crds.create=false --set agent.cloudConnectToken=$TOKEN && \\' + + '\n' + + 'kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes'} + +
+ + + {/*YAML token install instructions*/} + Log in to{' '} + Ambassador Cloud. + Click Connect my cluster to Ambassador Cloud, then{' '} + Connect via Kubernetes YAML. The slideout contains instructions + with a unique cloud-connect-token that is used to connect + your cluster to your Ambassador Cloud account. +
+ Run the following command, replacing $TOKEN + with your token: + + {'kubectl create configmap -n ambassador ambassador-agent-cloud-token \\' + + '\n' + + ' --from-literal=CLOUD_CONNECT_TOKEN=$TOKEN'} + +
+ + + {/*edgectl token install instructions*/} + Connecting $productName$ that was installed via edgectl is + identical to the Kubernetes YAML procedure. +
+ Log in to{' '} + Ambassador Cloud. + Click Connect my cluster to Ambassador Cloud, then{' '} + Connect via Kubernetes YAML. The slideout contains instructions + with a unique cloud-connect-token that is used to connect + your cluster to your Ambassador Cloud account. +
+ Run the following command, replacing $TOKEN + with your token: + + {'kubectl create configmap -n ambassador ambassador-agent-cloud-token \\' + + '\n' + + ' --from-literal=CLOUD_CONNECT_TOKEN=$TOKEN'} + +
+
+ ); +} diff --git a/docs/emissary/latest/tutorials/quickstart-demo.md b/docs/emissary/latest/tutorials/quickstart-demo.md new file mode 100644 index 000000000..70cbce8b0 --- /dev/null +++ b/docs/emissary/latest/tutorials/quickstart-demo.md @@ -0,0 +1,176 @@ +# $productName$ Tutorial + +In this article, you will explore some of the key features of $productName$ by walking through an example workflow and exploring the +Edge Policy Console. + +## Prerequisites + +You must have [$productName$ installed](../getting-started/) in your +Kubernetes cluster. + +## Routing Traffic from the Edge + +Like any other Kubernetes object, Custom Resource Definitions (CRDs) are used to +declaratively define $productName$’s desired state. The workflow you are going to +build uses a sample deployment and the `Mapping` CRD, which is the core resource +that you will use with $productName$ to manage your edge. It enables you to route +requests by host and URL path from the edge of your cluster to Kubernetes services. + +1. Copy the configuration below and save it to a file named `quote.yaml` so that +you can deploy these resources to your cluster. This basic configuration creates +the `quote` deployment and a service to expose that deployment on port 80. + + ```yaml + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: quote + namespace: ambassador + spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:$quoteVersion$ + ports: + - name: http + containerPort: 8080 + --- + apiVersion: v1 + kind: Service + metadata: + name: quote + namespace: ambassador + spec: + ports: + - name: http + port: 80 + targetPort: 8080 + selector: + app: quote + ``` + +1. Apply the configuration to the cluster with the command `kubectl apply -f quote.yaml`. + +1. Copy the configuration below and save it to a file called `quote-backend.yaml` +so that you can create a `Mapping` on your cluster. This `Mapping` tells $productName$ to route all traffic inbound to the `/backend/` path, on any host that can be used to reach $productName$, to the `quote` service. + + ```yaml + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + metadata: + name: quote-backend + namespace: ambassador + spec: + hostname: "*" + prefix: /backend/ + service: quote + ``` + +1. Apply the configuration to the cluster with the command +`kubectl apply -f quote-backend.yaml` + +1. Store the $productName$ `LoadBalancer` address to a local environment variable. +You will use this variable to test accessing your pod. + + ``` + export AMBASSADOR_LB_ENDPOINT=$(kubectl -n ambassador get svc ambassador -o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}") + ``` + +1. Test the configuration by accessing the service through the $productName$ load +balancer. + + ``` + $ curl -Lk "https://$AMBASSADOR_LB_ENDPOINT/backend/" + { + "server": "idle-cranberry-8tbb6iks", + "quote": "Non-locality is the driver of truth. By summoning, we vibrate.", + "time": "2019-12-11T20:10:16.525471212Z" + } + ``` + +Success, you have created your first $productName$ `Mapping`, routing a +request from your cluster's edge to a service! + +Since the `Mapping` you just created controls how requests are routed, +changing the `Mapping` will immediately change the routing. To see this +in action, use `kubectl` to edit the `Mapping`: + +1. Run `kubectl edit Mapping quote-backend`. + +1. 
Change `prefix: /backend/` to `prefix: /quoteme/`. + +1. Save the file and let `kubectl` update your `Mapping`. + +1. Run `kubectl get Mappings --namespace ambassador`. You will see the +`quote-backend` `Mapping` has the updated prefix listed. Try to access the +endpoint again via `curl` with the updated prefix. + + ``` + $ kubectl get Mappings --namespace ambassador + NAME PREFIX SERVICE STATE REASON + quote-backend /quoteme/ quote + + $ curl -Lk "https://${AMBASSADOR_LB_ENDPOINT}/quoteme/" + { + "server": "snippy-apple-ci10n7qe", + "quote": "A principal idea is omnipresent, much like candy.", + "time": "2020-11-18T17:15:42.095153306Z" + } + ``` + +1. Change the prefix back to `/backend/` so that you can later use the `Mapping` +with other tutorials. + +## Developer API Documentation + +The `quote` service you just deployed publishes its API as an +[OpenAPI (formerly Swagger)](https://swagger.io/solutions/getting-started-with-oas/) +document. $productName$ automatically detects and publishes this documentation. +This can help with internal and external developer onboarding by serving as a +single point of reference for of all your microservice APIs. + +1. In the Edge Policy Console, navigate to the **APIs** tab. You'll see the +OpenAPI documentation there for the "Quote Service API." Click **GET** to +expand out the documentation. + +1. Navigate to `https:///docs/` to see the +publicly visible Developer Portal. Make sure you include the trailing `/`. +This is a fully customizable portal that you can share with third parties who +need information about your APIs. + +## Next Steps + +Further explore some of the concepts you learned about in this article: +* [`Mapping` resource](../../topics/using/intro-mappings/): routes traffic from +the edge of your cluster to a Kubernetes service +* [`Host` resource](../../topics/running/host-crd/): sets the hostname by which +$productName$ will be accessed and secured with TLS certificates +* [Developer Portal](../../tutorials/dev-portal-tutorial/): +publishes an API catalog and OpenAPI documentation + +$productName$ has a comprehensive range of [features](/features/) to +support the requirements of any edge microservice. + +Learn more about [how developers use $productName$](../../topics/using/) to manage +edge policies. + +Learn more about [how site reliability engineers and operators run $productName$](../../topics/running/) +in production environments. + +To learn how $productName$ works, use cases, best practices, and more, check out +the [Quick Start](../getting-started) or read the [$productName$ Story](../../about/why-ambassador). + +For a custom configuration, you can install $productName$ +[manually](../../topics/install/yaml-install). 
diff --git a/docs/emissary/latest/versions.yml b/docs/emissary/latest/versions.yml
new file mode 100644
index 000000000..33f62acaf
--- /dev/null
+++ b/docs/emissary/latest/versions.yml
@@ -0,0 +1,35 @@
+# branch info
+branch: release/v3.8
+
+# self
+version: 3.8.1
+productName: "Emissary-ingress"
+productNamePlural: "Emissary-ingresses"
+productNamespace: emissary
+productDeploymentName: emissary-ingress
+productHelmName: emissary-ingress
+
+# OSS (self)
+ossVersion: 3.8.1
+ossDocsVersion: "pre-release"
+ossChartVersion: 8.8.1
+OSSproductName: "Emissary-ingress"
+OSSproductNamePlural: "Emissary-ingresses"
+
+# AES (not self)
+aesVersion: 3.8.1
+aesDocsVersion: "pre-release"
+aesChartVersion: 8.8.1
+AESproductName: "Ambassador Edge Stack"
+AESproductNamePlural: "Ambassador Edge Stacks"
+
+# other products
+qotmVersion: 1.7
+quoteVersion: 0.5.0
+
+# Most recent version from previous major versions
+# This is mostly to ensure that the migration matrix stays up-to-date
+versionTwoX: 2.5.1
+chartVersionTwoX: 7.6.1
+versionOneX: 1.14.4
+chartVersionOneX: 6.9.5
diff --git a/docs/telepresence-oss/2.13/ci/github-actions.md b/docs/telepresence-oss/2.13/ci/github-actions.md
deleted file mode 120000
index 097833671..000000000
--- a/docs/telepresence-oss/2.13/ci/github-actions.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/ci/github-actions.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/ci/github-actions.md b/docs/telepresence-oss/2.13/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence-oss/2.13/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent service. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the service running locally in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs the latest version of Telepresence on your CI server, or the version you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic intended for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, such as a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that holds your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`. Both secrets can also be created from the command line, as shown in the sketch below.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more workflows and environments.
+ + +
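+If you manage your repository from the terminal, you can create both secrets with the [GitHub CLI](https://cli.github.com/) instead of the web UI. A minimal sketch, assuming `gh` is installed, authenticated, and run from within your repository:
+
+```shell
+# Store the Telepresence API key (assumes it is exported in your shell)
+gh secret set TELEPRESENCE_API_KEY --body "$TELEPRESENCE_API_KEY"
+
+# Store the contents of your kubeconfig file
+gh secret set KUBECONFIG_FILE < kubeconfig.yaml
+```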
+
+## Initial configuration setup
+
+Before you can use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so that the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This configuration file should be placed in `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so that your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the `.telepresence/config.yml` file. Telepresence requires it to run the GitHub Actions properly whenever there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify the placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # First you need to log in to Telepresence with your API key
+         - name: Create kubeconfig file
+           run: |
+             cat <<EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+The previous example is a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs into Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence-oss/2.13/community.md b/docs/telepresence-oss/2.13/community.md
deleted file mode 120000
index 89a07f409..000000000
--- a/docs/telepresence-oss/2.13/community.md
+++ /dev/null
@@ -1 +0,0 @@
-../../telepresence/2.13/community.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/community.md b/docs/telepresence-oss/2.13/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence-oss/2.13/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+ +## Changelog +Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) +describes new features, bug fixes, and updates to each version of Telepresence. + +## Meetings +Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence-oss/2.13/concepts/devloop.md b/docs/telepresence-oss/2.13/concepts/devloop.md deleted file mode 120000 index ee5e44e39..000000000 --- a/docs/telepresence-oss/2.13/concepts/devloop.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/concepts/devloop.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/concepts/devloop.md b/docs/telepresence-oss/2.13/concepts/devloop.md new file mode 100644 index 000000000..86aac87e2 --- /dev/null +++ b/docs/telepresence-oss/2.13/concepts/devloop.md @@ -0,0 +1,54 @@ +--- +title: "The developer and the inner dev loop | Ambassador " +--- + +# The developer experience and the inner dev loop + +## How is the developer experience changing? + +The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + +Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + +The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible. + +Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop. + +Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but this adds development time to the equation. + +## What is the inner dev loop? + +The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly. + +Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control. 
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is increased to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence-oss/2.13/concepts/devworkflow.md b/docs/telepresence-oss/2.13/concepts/devworkflow.md
deleted file mode 120000
index 19d7ee308..000000000
--- a/docs/telepresence-oss/2.13/concepts/devworkflow.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/concepts/devworkflow.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/concepts/devworkflow.md b/docs/telepresence-oss/2.13/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence-oss/2.13/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence-oss/2.13/concepts/modes.md b/docs/telepresence-oss/2.13/concepts/modes.md
deleted file mode 120000
index d3e8ce482..000000000
--- a/docs/telepresence-oss/2.13/concepts/modes.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/concepts/modes.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/concepts/modes.md b/docs/telepresence-oss/2.13/concepts/modes.md
new file mode 100644
index 000000000..3402f07e4
--- /dev/null
+++ b/docs/telepresence-oss/2.13/concepts/modes.md
@@ -0,0 +1,36 @@
+---
+title: "Modes"
+---
+
+# Modes
+
+A Telepresence installation happens in two locations, initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`.
+The main component that gets installed into the cluster is known as the Traffic Manager.
+The Traffic Manager can be put either into single user mode (the default) or into team mode.
+Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way.
+
+## Single user mode
+
+In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default.
+Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service.
While single user mode is the default, switching back from team mode is done by running:
+```
+telepresence helm install --single-user-mode
+```
+
+## Team mode
+
+In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in.
+Personal intercepts selectively affect HTTP traffic coming into the intercepted workload.
+Being in team mode adds an additional layer of confidence for developers working on the same service, knowing their teammates won't interrupt their intercepts by mistake.
+Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are being taken advantage of by everybody on your team.
+To switch from single user mode to team mode, run:
+```
+telepresence helm install --team-mode
+```
+
+## Default intercept types based on modes
+The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and users who are not logged in are required to log in.
+When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status.
+![mode defaults](../images/mode-defaults.png)
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md
deleted file mode 120000
index fd51de800..000000000
--- a/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/howtos/cluster-in-vm.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence-oss/2.13/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present on the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion.
The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also degrade other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy if you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents. Those IPs must be on the default router's subnet
+server_ip = '192.168.1.110'
+agents = {
+  'agent1' => '192.168.1.111',
+  'agent2' => '192.168.1.112',
+}
+
+# Extra parameters in INSTALL_K3S_EXEC variable because of
+# K3s picking up the wrong interface when starting server and agent
+# https://github.com/alexellis/k3sup/issues/306
+server_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  echo "Sleeping for 5 seconds to wait for k3s to start"
+  sleep 5
+  cp /var/lib/rancher/k3s/server/token /vagrant_shared
+  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
+  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
+  SHELL
+
+agent_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export K3S_TOKEN_FILE=/vagrant_shared/token
+  export K3S_URL=https://#{server_ip}:6443
+  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  SHELL
+
+def config_vm(name, ip, script, vm)
+  # The network_script has two objectives:
+  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
+  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
+  # the NAT network provides.
+ network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: "KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence-oss/2.13/howtos/request.md b/docs/telepresence-oss/2.13/howtos/request.md deleted file mode 120000 index 22a3e97d4..000000000 --- a/docs/telepresence-oss/2.13/howtos/request.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/howtos/request.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/howtos/request.md b/docs/telepresence-oss/2.13/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence-oss/2.13/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/) + 2. Navigate to the desired service Intercepts page + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started to query the desired intercepted service and manage header propagation. 
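+
+For reference, the **CURL** option produces a ready-to-run command that propagates the intercept header for you. A hedged sketch of what such a command typically looks like (the header name matches the one used elsewhere in these docs; the host, service name, and header value are illustrative):
+
+```
+# Reaches the intercepted version running on your laptop
+curl -H "x-telepresence-intercept-id: <intercept-id>:your-service" http://your-service.your-namespace/
+
+# Without the header, requests continue to reach the stable version in the cluster
+curl http://your-service.your-namespace/
+```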
diff --git a/docs/telepresence-oss/2.13/install/helm.md b/docs/telepresence-oss/2.13/install/helm.md deleted file mode 120000 index 13cda765a..000000000 --- a/docs/telepresence-oss/2.13/install/helm.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/install/helm.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/install/helm.md b/docs/telepresence-oss/2.13/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence-oss/2.13/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. 
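+
+For example, before upgrading you can compare your local CLI version with the chart versions published in the repo; a quick sketch (assuming the `datawire` repo was added as shown above):
+
+```shell
+# The CLI version you are running locally
+telepresence version
+
+# Traffic Manager chart versions published in the datawire repo
+helm search repo datawire/telepresence --versions | head
+```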
+ +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. +It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. 
+ subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence-oss/2.13/install/migrate-from-legacy.md b/docs/telepresence-oss/2.13/install/migrate-from-legacy.md deleted file mode 120000 index 25399586a..000000000 --- a/docs/telepresence-oss/2.13/install/migrate-from-legacy.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/install/migrate-from-legacy.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/install/migrate-from-legacy.md b/docs/telepresence-oss/2.13/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence-oss/2.13/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence + +[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services. + +In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment". + +In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time. + +Telepresence 2 introduces a [new +architecture](../../reference/architecture/) built around "intercepts" +that addresses these problems. With the new Telepresence, a sidecar +proxy ("traffic agent") is injected onto the pod. The proxy then +intercepts traffic intended for the Pod and routes it to the +workstation/laptop. The advantage of this approach is that the +service is running at all times, and no swapping is used. 
By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap-deploy (the equivalent of an
+intercept in Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context <your-context>
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today, using the commands you are used to,
+while it helps you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                       |
+|---------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                       | intercept $workload                        |
+| --expose localPort[:remotePort]                   | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell           | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd            | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd     | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                       | connect -- bash                            |
+| --run $cmd                                        | connect -- $cmd                            |
+| --env-file,--env-json                             | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                             | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                            | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+ +If Telepresence is missing any flags or functionality that is integral to your usage, please let us know +by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)! + +## Telepresence changes + +Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including +with `--swap-deployment`) and leaves them. If you use `--swap-deployment`, the intercept will be left once the process +dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you +want to remove them from the cluster, the following Telepresence command will help: +``` +$ telepresence uninstall --help +Uninstall telepresence agents + +Usage: + telepresence uninstall [flags] { --agent |--all-agents } + +Flags: + -d, --agent uninstall intercept agent on specific deployments + -a, --all-agents uninstall intercept agent on all deployments + -h, --help help for uninstall + -n, --namespace string If present, the namespace scope for this CLI request +``` + +Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at +our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence. + +The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/quick-start/qs-cards.js b/docs/telepresence-oss/2.13/quick-start/qs-cards.js deleted file mode 120000 index 16f46744e..000000000 --- a/docs/telepresence-oss/2.13/quick-start/qs-cards.js +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/quick-start/qs-cards.js \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/quick-start/qs-cards.js b/docs/telepresence-oss/2.13/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence-oss/2.13/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        {/* NOTE: this JSX markup was reconstructed from the surviving card text;
+            the link targets are illustrative and may differ from the originals. */}
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/preview-urls/">Collaborating</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Use preview URLs to collaborate with your colleagues and others
+              outside of your organization.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/outbound/">Outbound Sessions</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              While connected to the cluster, your laptop can interact with
+              services as if it was another pod in the cluster.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../faqs/">FAQs</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Learn more about use cases and the technical implementation of
+              Telepresence.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+ ); +} diff --git a/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less deleted file mode 120000 index 0fde4b18d..000000000 --- a/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/quick-start/telepresence-quickstart-landing.less \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence-oss/2.13/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence-oss/2.13/redirects.yml b/docs/telepresence-oss/2.13/redirects.yml deleted file mode 120000 index 8225cb0ad..000000000 --- 
a/docs/telepresence-oss/2.13/redirects.yml
+++ /dev/null
@@ -1 +0,0 @@
-../../telepresence/2.13/redirects.yml
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/redirects.yml b/docs/telepresence-oss/2.13/redirects.yml
new file mode 100644
index 000000000..5961b3477
--- /dev/null
+++ b/docs/telepresence-oss/2.13/redirects.yml
@@ -0,0 +1 @@
+- {from: "", to: "quick-start"}
diff --git a/docs/telepresence-oss/2.13/reference/dns.md b/docs/telepresence-oss/2.13/reference/dns.md
deleted file mode 120000
index ab5d0bd85..000000000
--- a/docs/telepresence-oss/2.13/reference/dns.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/reference/dns.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/reference/dns.md b/docs/telepresence-oss/2.13/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence-oss/2.13/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running; we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-ip>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="utf-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+          'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="utf-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Now curling that service by its short name works, and will for as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
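+
+For example, while connected you can exercise these query types with standard DNS tooling; a small sketch using the sample app's names (note that some OS resolver setups route tools like `dig` around the Telepresence resolver, so application-level lookups remain the authoritative test):
+
+```
+# Short names resolve while an intercept is active in their namespace
+dig +short web-app A
+
+# Namespace-qualified names always resolve while connected
+dig +short web-app.emojivoto A
+```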
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence-oss/2.13/reference/environment.md b/docs/telepresence-oss/2.13/reference/environment.md
deleted file mode 120000
index a20a8b394..000000000
--- a/docs/telepresence-oss/2.13/reference/environment.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.13/reference/environment.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.13/reference/environment.md b/docs/telepresence-oss/2.13/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence-oss/2.13/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the local copy of the intercepted service that runs on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers, and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" http header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do, instead filter based on a matching `x-intercept-id` header.
This ensures that messages belonging to a certain intercept are always consumed by the intercepting process. diff --git a/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md b/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md deleted file mode 120000 index afaad5c21..000000000 --- a/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md +++ /dev/null @@ -1 +0,0 @@ -../../../../telepresence/2.13/reference/intercepts/manual-agent.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md b/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..8c24d6dbe --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/intercepts/manual-agent.md @@ -0,0 +1,267 @@ +import Alert from '@material-ui/lab/Alert'; + +# Manually injecting the Traffic Agent + +You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted. + +When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) +will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such +as when very strict company security policies prevent it. + + +Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable; +try the Mutating Webhook before proceeding. + + +## Procedure + +You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page +uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 + targetPort: 8080 +``` + +### 1. Generating the YAML + +First, generate the YAML for the traffic-agent configmap entry. 
It's important that the generated file have +the same name as the service, and no extension: + +```console +$ telepresence genyaml config --workload my-service -o /tmp/my-service +$ cat /tmp/my-service +agentImage: docker.io/datawire/tel2:2.7.0 +agentName: my-service +containers: +- Mounts: null + envPrefix: A_ + intercepts: + - agentPort: 9900 + containerPort: 8080 + protocol: TCP + serviceName: my-service + servicePort: 80 + serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd + mountPoint: /tel_app_mounts/echo-container + name: echo-container +logLevel: info +managerHost: traffic-manager.ambassador +managerPort: 8081 +manual: true +namespace: default +workloadKind: Deployment +workloadName: my-service +``` + +Next, generate the YAML for the traffic-agent container: + +```console +$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml +$ cat /tmp/my-service-agent.yaml +args: +- agent +env: +- name: _TEL_AGENT_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: traffic-agent +ports: +- containerPort: 9900 + protocol: TCP +readinessProbe: + exec: + command: + - /bin/stat + - /tmp/agent/ready +resources: {} +volumeMounts: +- mountPath: /tel_pod_info + name: traffic-annotations +- mountPath: /etc/traffic-agent + name: traffic-config +- mountPath: /tel_app_exports + name: export-volume +``` + +Next, generate the YAML for the init-container: + +```console +$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml +$ cat /tmp/my-service-init.yaml +args: +- agent-init +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: tel-agent-init +resources: {} +securityContext: + capabilities: + add: + - NET_ADMIN +volumeMounts: +- mountPath: /etc/traffic-agent + name: traffic-config +``` + +Next, generate the YAML for the volumes: + +```console +$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml +$ cat /tmp/my-service-volume.yaml +- downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: traffic-annotations +- configMap: + items: + - key: my-service + path: config.yaml + name: telepresence-agents + name: traffic-config +- emptyDir: {} + name: export-volume + +``` + + +Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags. + + +### 2. Creating (or updating) the configmap + +The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the +modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command: + +```console +$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service +``` + +If it already exists, new entries can be added under the `data` key using `kubectl edit configmap telepresence-agents`. + +### 3. Injecting the YAML into the Deployment + +You now need to edit the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements +of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes`, respectively. +You also need to modify `spec.template.metadata.annotations` and add the annotation +`telepresence.getambassador.io/manually-injected: "true"`. 
These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence-oss/2.13/reference/linkerd.md b/docs/telepresence-oss/2.13/reference/linkerd.md deleted file mode 120000 index 8680b5a36..000000000 --- a/docs/telepresence-oss/2.13/reference/linkerd.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/linkerd.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/linkerd.md b/docs/telepresence-oss/2.13/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. 
+ +```yaml +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: quote +spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + annotations: + linkerd.io/inject: "enabled" + config.linkerd.io/skip-outbound-ports: "8081,8022,6001" + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:0.4.1 + ports: + - name: http + containerPort: 8000 + env: + - name: PORT + value: "8000" + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +## Connect to Telepresence +Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`: + +``` +$ telepresence list + + quote: ready to intercept (traffic-agent not yet installed) +``` + +## Run the intercept +Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service. diff --git a/docs/telepresence-oss/2.13/reference/rbac.md b/docs/telepresence-oss/2.13/reference/rbac.md deleted file mode 120000 index 7aeee420c..000000000 --- a/docs/telepresence-oss/2.13/reference/rbac.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/rbac.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/rbac.md b/docs/telepresence-oss/2.13/reference/rbac.md new file mode 100644 index 000000000..d78133441 --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/rbac.md @@ -0,0 +1,236 @@ +import Alert from '@material-ui/lab/Alert'; + +# Telepresence RBAC +The intention of this document is to provide a template for securing and limiting the permissions of Telepresence. +This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster. + +There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources. + +In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page. + +## Requirements + +- Kubernetes version 1.16+ +- Cluster admin privileges to apply RBAC + +## Editing your kubeconfig + +This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called `tp-user`. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. 
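+ +For reference, a minimal sketch of how a cluster administrator could create such a Service Account (the `ambassador` namespace is an assumption here; use whichever namespace your RBAC manifests target): + +```console +$ kubectl create serviceaccount tp-user -n ambassador +```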
After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following: + +```yaml +apiVersion: v1 +kind: Config +clusters: +- name: my-cluster # Must match the cluster value in the contexts config + cluster: + ## The cluster field is highly cloud dependent. +contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<suffix>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to the `KUBECONFIG` environment variable by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: [""] + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["get", "list", "watch", "delete"] + resourceNames: ["telepresence-agents"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: [""] + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: ["admissionregistration.k8s.io"] + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to 
install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/). + +When you use `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator. + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +# For gather-logs command +- apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +- apiGroups: [""] + resources: ["pods"] + verbs: ["list"] +# Needed in order to maintain a list of workloads +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["namespaces", "services"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +### Traffic Manager connect permission +In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions +in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager. +```yaml +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: traffic-manager-connect +rules: + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: traffic-manager-connect +subjects: + - name: tp-user # Should match the name of the user's ServiceAccount + kind: ServiceAccount + namespace: ambassador # Should match the namespace of the user's ServiceAccount +roleRef: + apiGroup: rbac.authorization.k8s.io + name: traffic-manager-connect + kind: Role +``` + +## Namespace only telepresence user access + +This section provides RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces. 
+ +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator. + +For each accessible namespace: +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: tp-namespace # Update value for appropriate namespace +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +rules: +- apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "watch"] +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role-binding + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount +roleRef: + kind: Role + name: telepresence-role + apiGroup: rbac.authorization.k8s.io +``` + +The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above. diff --git a/docs/telepresence-oss/2.13/reference/restapi.md b/docs/telepresence-oss/2.13/reference/restapi.md deleted file mode 120000 index c169626ea..000000000 --- a/docs/telepresence-oss/2.13/reference/restapi.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/restapi.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/restapi.md b/docs/telepresence-oss/2.13/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server + +[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint. + +## Enabling the server +The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install. + +## Querying the server +On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed in exactly the same way locally, provided that the environment is propagated correctly to the interceptor process. + +## Endpoints + +The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. 
Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide the header there too, so there's no need for conditional code. + +There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation: +1. An intercept must be active +2. The "/healthz" endpoint must respond with OK +3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`. + +### healthz +The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation. + +#### test endpoint using curl +A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980. +```console +$ curl -v localhost:9980/healthz +* Trying ::1:9980... +* Connected to localhost (::1) port 9980 (#0) +> GET /healthz HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Date: Fri, 26 Nov 2021 07:06:18 GMT +< Content-Length: 0 +< +* Connection #0 to host localhost left intact +``` + +### consume-here +`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed. + +#### test endpoint using curl +Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers. +```console +$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980... +* Connected to localhost (::1) port 9980 (#0) +> GET /consume-here?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Fri, 26 Nov 2021 06:43:28 GMT +< Content-Length: 4 +< +* Connection #0 to host localhost left intact +true +``` + +If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod. 
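+ +As a sketch of how an interceptor process might wrap this check (the helper name and the extra headers are hypothetical; `TELEPRESENCE_API_PORT` and `TELEPRESENCE_INTERCEPT_ID` are set by Telepresence as described above): + +```console +$ should_consume() { +>   # Ask the local API server whether this process should consume the message +>   curl -s "localhost:${TELEPRESENCE_API_PORT}/consume-here?path=$1" \ +>     -H "x-telepresence-caller-intercept-id: ${TELEPRESENCE_INTERCEPT_ID}" \ +>     "${@:2}" +> } +$ should_consume /api -H 'x: y' +true +```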
+ +### intercept-info +`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is always omitted when `intercepted` is `false`. + +#### test endpoint using curl +Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers. +```console +$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980... +* Connected to localhost (127.0.0.1) port 9980 (#0) +> GET /intercept-info?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.79.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Tue, 01 Feb 2022 11:39:55 GMT +< Content-Length: 68 +< +{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}} +* Connection #0 to host localhost left intact +``` diff --git a/docs/telepresence-oss/2.13/reference/tun-device.md b/docs/telepresence-oss/2.13/reference/tun-device.md deleted file mode 120000 index 661baf161..000000000 --- a/docs/telepresence-oss/2.13/reference/tun-device.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/tun-device.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/tun-device.md b/docs/telepresence-oss/2.13/reference/tun-device.md new file mode 100644 index 000000000..af7e3828c --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface + +The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself. + +## TUN-Device +The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3). + +## Gains when using the VIF + +### Both TCP and UDP +The TUN-device is capable of routing both TCP and UDP traffic. + +### No SSH required + +The VIF approach is somewhat similar to using `sshuttle`, but without +any requirements for extra software, configuration or connections. +Using the VIF means that only one single connection needs to be +forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for +`ssh` in the client, nor for `sshd` in the traffic-manager. This also +means that the traffic-manager container can run as the default user. 
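+ +For intuition only, that single tunnel is comparable to a manual port-forward to the Traffic Manager's gRPC port (Telepresence manages this connection for you; the namespace and port below reflect a default installation): + +``` +$ kubectl port-forward -n ambassador deploy/traffic-manager 8081:8081 +```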
+ +#### sshfs without ssh encryption +When a Pod is intercepted and its volumes are mounted on the local machine, the mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client nor in the traffic-agent, and the traffic-agent container can run as the default user. + +### No Firewall rules +With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up. diff --git a/docs/telepresence-oss/2.13/reference/volume.md b/docs/telepresence-oss/2.13/reference/volume.md deleted file mode 120000 index 84f361d2f..000000000 --- a/docs/telepresence-oss/2.13/reference/volume.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/volume.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/volume.md b/docs/telepresence-oss/2.13/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts + +import Alert from '@material-ui/lab/Alert'; + +Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node. + +``` +telepresence intercept [service] --port [port] --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept [service] --port [port] --mount=true -- /bin/bash +Using Deployment +intercepted + Intercept name : + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, the code you run locally (either from the subshell or from the intercept command) will need to prepend the `$TELEPRESENCE_ROOT` environment variable to file paths in order to utilize the mounted volumes. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable. 
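+ +For example (a sketch; the service name and file name are placeholders), you can capture the environment to a file and read the mount point from it: + +``` +$ telepresence intercept my-service --port 8080 --mount=true --env-file=my-service.env +$ grep TELEPRESENCE_ROOT my-service.env +TELEPRESENCE_ROOT=/tmp/telfs-123456789 +```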
diff --git a/docs/telepresence-oss/2.13/reference/vpn.md b/docs/telepresence-oss/2.13/reference/vpn.md deleted file mode 120000 index 24f846f5a..000000000 --- a/docs/telepresence-oss/2.13/reference/vpn.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.13/reference/vpn.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.13/reference/vpn.md b/docs/telepresence-oss/2.13/reference/vpn.md new file mode 100644 index 000000000..91213babc --- /dev/null +++ b/docs/telepresence-oss/2.13/reference/vpn.md @@ -0,0 +1,155 @@ + +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN +routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +``` + +This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while +telepresence is connected. + +The ideal resolution in this case is to move the pods to a different subnet. This is possible, +for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. +In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you +to reach hosts inside the VPC's `10.0.0.0/19`. + +However, it is not always possible to move the pods to a different subnet. +In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain +hosts from being masked. +This might be particularly important for DNS resolution. In an AWS ClientVPN setup, it is often +customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case): + +![Modify Client VPN Endpoint](../images/vpn-dns.png) + +If this is the case for your VPN, you should place the DNS server in the never-proxy list for your +cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values <values-file>`, add a `client.routing` +entry like so: + +```yaml +client: + routing: + neverProxySubnets: + - 10.0.0.2/32 +``` + +##### Case 2: Cluster masked by VPN + +In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR +that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling +``` + +Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is +connected. + +As with the first case, the ideal resolution is to move the pods away, but this may not always +be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR +(that is, make it route more hosts) so that Telepresence's routes win by virtue of specificity. +One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) +section for more on split-tunneling). + +Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need +to use never-proxy rules to whitelist hosts in the VPN: + +``` +❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
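+ +Continuing the sketch above, those hosts would again be whitelisted through the Helm values (the addresses below are hypothetical): + +```yaml +client: + routing: + neverProxySubnets: + - 10.0.0.2/32 # the VPN's DNS server + - 10.0.5.0/24 # other VPN hosts needed while connected +```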
diff --git a/docs/telepresence-oss/2.14/ci/github-actions.md b/docs/telepresence-oss/2.14/ci/github-actions.md deleted file mode 120000 index 177821818..000000000 --- a/docs/telepresence-oss/2.14/ci/github-actions.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/ci/github-actions.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/ci/github-actions.md b/docs/telepresence-oss/2.14/ci/github-actions.md new file mode 100644 index 000000000..810a2d239 --- /dev/null +++ b/docs/telepresence-oss/2.14/ci/github-actions.md @@ -0,0 +1,176 @@ +--- +title: GitHub Actions for Telepresence +description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Telepresence with GitHub Actions + +Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the service running locally in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project. + +You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself. + +## GitHub Actions for Telepresence + +Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following: + + - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully. + - **install**: Installs Telepresence in your CI server, either the latest version or the one you specify. + - **login**: Logs into Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one. + - **connect**: Connects to the remote target environment. + - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests. + +Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information. 
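+ +Each action is referenced like any other GitHub Action in a workflow step; for example, a minimal sketch of the install action (the `v1.0-rc` tag matches the full examples below): + +```yaml +- name: Install Telepresence + uses: datawire/telepresence-actions/install@v1.0-rc + with: + version: latest +```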
+ +# Using Telepresence in your GitHub Actions CI pipeline + +## Prerequisites + +To enable GitHub Actions with Telepresence, you need: + +* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow. +* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster. +* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example: + + ```yaml + apiVersion: v1 + clusters: + - cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: traffic-manager-namespace + name: example-cluster + ``` + +* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n <namespace>`. The version is listed in the `labels` section of the output. +* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets. +* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`. + +**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more + +
+ + +
+ +## Initial configuration setup + +To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup: + + +This action only supports Ubuntu runners at the moment. + +1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist. +1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`: + + ```yaml + name: Configuring telepresence + on: workflow_dispatch + jobs: + configuring: + name: Configure telepresence + runs-on: ubuntu-latest + env: + TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }} + steps: + - name : Checkout + uses: actions/checkout@v3 + #---- here run your custom command to connect to your cluster + #- name: Connect to cluster + # shell: bash + # run: ./connnect to cluster + #---- + - name: Configuring Telepresence + uses: datawire/telepresence-actions/configure@v1.0-rc + with: + version: latest + ``` + +1. Push the `configure-telepresence.yaml` file to your repository. +1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab. + +When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file, if one is provided by you. This configuration file should be placed in `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration. + + +When you create a branch, do not remove the `.github/telepresence-config/config.yml` file. This is required for the Telepresence GitHub Actions to run properly when there is a new push to the branch in your repository. + + +## Using Telepresence in your GitHub Actions workflows + +1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and replace the placeholders with real actions that run your service and perform integration tests. 
+ + ```yaml + name: Run Integration Tests + on: + push: + branches-ignore: + - 'main' + jobs: + my-job: + name: Run Integration Test using Remote Cluster + runs-on: ubuntu-latest + env: + TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }} + KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }} + KUBECONFIG: /opt/kubeconfig + steps: + - name : Checkout + uses: actions/checkout@v3 + with: + ref: ${{ github.event.pull_request.head.sha }} + #---- here run your custom command to run your service + #- name: Run your service to test + # shell: bash + # run: ./run_local_service + #---- + # First you need to log in to Telepresence, with your API key + - name: Create kubeconfig file + run: | + cat << EOF > /opt/kubeconfig + ${{ env.KUBECONFIG_FILE }} + EOF + - name: Install Telepresence + uses: datawire/telepresence-actions/install@v1.0-rc + with: + version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version + - name: Telepresence connect + uses: datawire/telepresence-actions/connect@v1.0-rc + - name: Login + uses: datawire/telepresence-actions/login@v1.0-rc + with: + telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }} + - name: Intercept the service + uses: datawire/telepresence-actions/intercept@v1.0-rc + with: + service_name: service-name + service_port: 8081:8080 + namespace: namespacename-of-your-service + http_header: "x-telepresence-intercept-id=service-intercepted" + print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them + #---- here run your custom command + #- name: Run integrations test + # shell: bash + # run: ./run_integration_test + #---- + ``` + +The previous example is a workflow that: + +* Checks out the repository code. +* Has a placeholder step to run the service during CI. +* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence. +* Installs Telepresence. +* Runs Telepresence Connect. +* Logs into Telepresence. +* Intercepts traffic to the service running in the remote cluster. +* A placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster. + +This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab. diff --git a/docs/telepresence-oss/2.14/community.md b/docs/telepresence-oss/2.14/community.md deleted file mode 120000 index 2377a8f94..000000000 --- a/docs/telepresence-oss/2.14/community.md +++ /dev/null @@ -1 +0,0 @@ -../../telepresence/2.14/community.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/community.md b/docs/telepresence-oss/2.14/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence-oss/2.14/community.md @@ -0,0 +1,12 @@ +# Community + +## Contributor's guide +Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) +on GitHub to learn how you can help make Telepresence better. 
+ +## Changelog +Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) +describes new features, bug fixes, and updates to each version of Telepresence. + +## Meetings +Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence-oss/2.14/concepts/devloop.md b/docs/telepresence-oss/2.14/concepts/devloop.md deleted file mode 120000 index ccce66a42..000000000 --- a/docs/telepresence-oss/2.14/concepts/devloop.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/concepts/devloop.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/concepts/devloop.md b/docs/telepresence-oss/2.14/concepts/devloop.md new file mode 100644 index 000000000..86aac87e2 --- /dev/null +++ b/docs/telepresence-oss/2.14/concepts/devloop.md @@ -0,0 +1,54 @@ +--- +title: "The developer and the inner dev loop | Ambassador " +--- + +# The developer experience and the inner dev loop + +## How is the developer experience changing? + +The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + +Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + +The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible. + +Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop. + +Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but this adds development time to the equation. + +## What is the inner dev loop? + +The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly. + +Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control. 
+ +In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes (3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code), they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible. + +![traditional inner dev loop](../images/trad-inner-dev-loop.png) + +## In search of lost time: How does containerization change the inner dev loop? + +The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again. + +Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps: + +* packaging code in containers +* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container) +* pushing the container to the registry +* deploying containers in Kubernetes + +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is increased to 5 minutes (not atypical with a standard container build, registry upload, and deploy), then the number of possible development iterations per day drops to ~40. At the extreme, that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. 
diff --git a/docs/telepresence-oss/2.14/concepts/devworkflow.md b/docs/telepresence-oss/2.14/concepts/devworkflow.md
deleted file mode 120000
index 6f6d87c87..000000000
--- a/docs/telepresence-oss/2.14/concepts/devworkflow.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/concepts/devworkflow.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/concepts/devworkflow.md b/docs/telepresence-oss/2.14/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence-oss/2.14/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues at full speed.
diff --git a/docs/telepresence-oss/2.14/concepts/modes.md b/docs/telepresence-oss/2.14/concepts/modes.md
deleted file mode 120000
index 07c628600..000000000
--- a/docs/telepresence-oss/2.14/concepts/modes.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/concepts/modes.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/concepts/modes.md b/docs/telepresence-oss/2.14/concepts/modes.md
new file mode 100644
index 000000000..3402f07e4
--- /dev/null
+++ b/docs/telepresence-oss/2.14/concepts/modes.md
@@ -0,0 +1,36 @@
+---
+title: "Modes"
+---
+
+# Modes
+
+A Telepresence installation happens in two locations: initially on your laptop or workstation, and then in your cluster after running `telepresence helm install`.
+The main component that gets installed into the cluster is known as the Traffic Manager.
+The Traffic Manager can be put either into single user mode (the default) or into team mode.
+Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way.
+
+## Single user mode
+
+In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default.
+Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service.
+While single user mode is the default, switching back from team mode is done by running:
+```
+telepresence helm install --single-user-mode
+```
+
+## Team mode
+
+In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in.
+Personal intercepts selectively affect HTTP traffic coming into the intercepted workload.
+Being in team mode adds an additional layer of confidence for developers working on the same service, who know their teammates won't interrupt their intercepts by mistake.
+Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are used by everybody on your team.
+To switch from single user mode to team mode, run:
+```
+telepresence helm install --team-mode
+```
+
+## Default intercept types based on modes
+The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and users who are not logged in are required to log in.
+When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status.
+![mode defaults](../images/mode-defaults.png)
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md
deleted file mode 120000
index 363d49cf4..000000000
--- a/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/howtos/cluster-in-vm.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence-oss/2.14/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the host's network devices are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion.
The request will most likely end in a timeout, but since the recursion is very resource intensive (a large number of very rapid connection attempts), it will likely degrade other connections as well.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents. Those IPs must be on the default router's subnet
+server_ip = '192.168.1.110'
+agents = {
+  'agent1' => '192.168.1.111',
+  'agent2' => '192.168.1.112',
+}
+
+# Extra parameters in INSTALL_K3S_EXEC variable because of
+# K3s picking up the wrong interface when starting server and agent
+# https://github.com/alexellis/k3sup/issues/306
+server_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  echo "Sleeping for 5 seconds to wait for k3s to start"
+  sleep 5
+  cp /var/lib/rancher/k3s/server/token /vagrant_shared
+  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
+  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
+  SHELL
+
+agent_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export K3S_TOKEN_FILE=/vagrant_shared/token
+  export K3S_URL=https://#{server_ip}:6443
+  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  SHELL
+
+def config_vm(name, ip, script, vm)
+  # The network_script has two objectives:
+  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
+  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
+  #    the NAT network provides.
+ network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: "KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence-oss/2.14/howtos/request.md b/docs/telepresence-oss/2.14/howtos/request.md deleted file mode 120000 index 612484a58..000000000 --- a/docs/telepresence-oss/2.14/howtos/request.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/howtos/request.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/howtos/request.md b/docs/telepresence-oss/2.14/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence-oss/2.14/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/) + 2. Navigate to the desired service Intercepts page + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started to query the desired intercepted service and manage header propagation. 
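+
+As an illustration, the copied **CURL** query typically amounts to sending the intercept header along with the request; the header value and URL below are placeholders for what Ambassador Cloud shows you:
+
+```console
+$ curl -H 'x-telepresence-intercept-id: <intercept-id>:<workload>' \
+    http://<service-name>.<namespace>:<port>/
+```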
diff --git a/docs/telepresence-oss/2.14/images b/docs/telepresence-oss/2.14/images deleted file mode 120000 index ed40b5f27..000000000 --- a/docs/telepresence-oss/2.14/images +++ /dev/null @@ -1 +0,0 @@ -../../../../docs/telepresence/v2.14/images/ \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/images/container-inner-dev-loop.png b/docs/telepresence-oss/2.14/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence-oss/2.14/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence-oss/2.14/images/daemon-in-container.png b/docs/telepresence-oss/2.14/images/daemon-in-container.png new file mode 100644 index 000000000..ed02e8386 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/daemon-in-container.png differ diff --git a/docs/telepresence-oss/2.14/images/docker-extension-intercept.png b/docs/telepresence-oss/2.14/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker-extension-intercept.png differ diff --git a/docs/telepresence-oss/2.14/images/docker-header-containers.png b/docs/telepresence-oss/2.14/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker-header-containers.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_button_drop_down.png b/docs/telepresence-oss/2.14/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..775323e56 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_connect_to_cluster.png b/docs/telepresence-oss/2.14/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..eb95e5180 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_login.png b/docs/telepresence-oss/2.14/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_login.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_running_intercepts_page.png b/docs/telepresence-oss/2.14/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..7870e2691 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_page.png b/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..6788994e3 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_popup.png b/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..12839b0e5 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence-oss/2.14/images/docker_extension_upload_spec_button.png b/docs/telepresence-oss/2.14/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files 
/dev/null and b/docs/telepresence-oss/2.14/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence-oss/2.14/images/github-login.png b/docs/telepresence-oss/2.14/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/github-login.png differ diff --git a/docs/telepresence-oss/2.14/images/logo.png b/docs/telepresence-oss/2.14/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/logo.png differ diff --git a/docs/telepresence-oss/2.14/images/mode-defaults.png b/docs/telepresence-oss/2.14/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/mode-defaults.png differ diff --git a/docs/telepresence-oss/2.14/images/pod-daemon-overview.png b/docs/telepresence-oss/2.14/images/pod-daemon-overview.png new file mode 100644 index 000000000..effb05314 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/pod-daemon-overview.png differ diff --git a/docs/telepresence-oss/2.14/images/split-tunnel.png b/docs/telepresence-oss/2.14/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence-oss/2.14/images/split-tunnel.png differ diff --git a/docs/telepresence-oss/2.14/images/tracing.png b/docs/telepresence-oss/2.14/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/tracing.png differ diff --git a/docs/telepresence-oss/2.14/images/trad-inner-dev-loop.png b/docs/telepresence-oss/2.14/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence-oss/2.14/images/tunnelblick.png b/docs/telepresence-oss/2.14/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence-oss/2.14/images/tunnelblick.png differ diff --git a/docs/telepresence-oss/2.14/images/vpn-dns.png b/docs/telepresence-oss/2.14/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/vpn-dns.png differ diff --git a/docs/telepresence-oss/2.14/images/vpn-k8s-config.jpg b/docs/telepresence-oss/2.14/images/vpn-k8s-config.jpg new file mode 100644 index 000000000..66116e41d Binary files /dev/null and b/docs/telepresence-oss/2.14/images/vpn-k8s-config.jpg differ diff --git a/docs/telepresence-oss/2.14/images/vpn-routing.jpg b/docs/telepresence-oss/2.14/images/vpn-routing.jpg new file mode 100644 index 000000000..18410dd48 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/vpn-routing.jpg differ diff --git a/docs/telepresence-oss/2.14/images/vpn-with-tele.jpg b/docs/telepresence-oss/2.14/images/vpn-with-tele.jpg new file mode 100644 index 000000000..843b253e9 Binary files /dev/null and b/docs/telepresence-oss/2.14/images/vpn-with-tele.jpg differ diff --git a/docs/telepresence-oss/2.14/install/helm.md b/docs/telepresence-oss/2.14/install/helm.md deleted file mode 120000 index a8aec139f..000000000 --- a/docs/telepresence-oss/2.14/install/helm.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/install/helm.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/install/helm.md b/docs/telepresence-oss/2.14/install/helm.md new file mode 100644 index 
000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence-oss/2.14/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. 
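+
+For example, a version-pinned install might look like this (a sketch; use `helm search` to confirm the exact chart version strings published in the repo):
+
+```console
+# List the chart versions available in the repo
+$ helm search repo datawire/telepresence --versions
+
+# Install the chart version that matches your Telepresence CLI
+$ helm install traffic-manager --namespace ambassador datawire/telepresence --version v2.4.0
+```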
+ +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. +It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. 
+ subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence-oss/2.14/install/migrate-from-legacy.md b/docs/telepresence-oss/2.14/install/migrate-from-legacy.md deleted file mode 120000 index 347fa5f16..000000000 --- a/docs/telepresence-oss/2.14/install/migrate-from-legacy.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/install/migrate-from-legacy.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/install/migrate-from-legacy.md b/docs/telepresence-oss/2.14/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence-oss/2.14/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence + +[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services. + +In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment". + +In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time. + +Telepresence 2 introduces a [new +architecture](../../reference/architecture/) built around "intercepts" +that addresses these problems. With the new Telepresence, a sidecar +proxy ("traffic agent") is injected onto the pod. The proxy then +intercepts traffic intended for the Pod and routes it to the +workstation/laptop. The advantage of this approach is that the +service is running at all times, and no swapping is used. 
By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context <your-context>
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it. So you can get started with Telepresence today using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                      |
+|------------------------------------------------|-------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                       |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
+| --run-shell                                    | connect -- bash                           |
+| --run $cmd                                     | connect -- $cmd                           |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
+| --context,--namespace                          | --context, --namespace (haven't changed)  |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported. For some well-known flags, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
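+
+As another illustration of the mapping table above, a legacy shell session translates directly to a native `connect`:
+
+```console
+# Legacy syntax (auto-translated by the Telepresence binary):
+$ telepresence --run-shell
+
+# Native Telepresence equivalent:
+$ telepresence connect -- bash
+```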
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent remains. There's no harm in leaving the agents running alongside your services, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent <agents...> | --all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/quick-start/qs-cards.js b/docs/telepresence-oss/2.14/quick-start/qs-cards.js
deleted file mode 120000
index 96b33c103..000000000
--- a/docs/telepresence-oss/2.14/quick-start/qs-cards.js
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/quick-start/qs-cards.js
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/quick-start/qs-cards.js b/docs/telepresence-oss/2.14/quick-start/qs-cards.js
new file mode 100644
index 000000000..5b68aa4ae
--- /dev/null
+++ b/docs/telepresence-oss/2.14/quick-start/qs-cards.js
@@ -0,0 +1,71 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItem: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/preview-urls/">Collaborating</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Use preview URLs to collaborate with your colleagues and others
+              outside of your organization.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/outbound/">Outbound Sessions</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              While connected to the cluster, your laptop can interact with
+              services as if it was another pod in the cluster.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../faqs/">FAQs</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Learn more about use cases and the technical implementation of
+              Telepresence.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+ ); +} diff --git a/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less deleted file mode 120000 index a0c88ece4..000000000 --- a/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/quick-start/telepresence-quickstart-landing.less \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence-oss/2.14/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence-oss/2.14/redirects.yml b/docs/telepresence-oss/2.14/redirects.yml deleted file mode 120000 index 5015c7d1d..000000000 --- 
a/docs/telepresence-oss/2.14/redirects.yml
+++ /dev/null
@@ -1 +0,0 @@
-../../telepresence/2.14/redirects.yml
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/redirects.yml b/docs/telepresence-oss/2.14/redirects.yml
new file mode 100644
index 000000000..5961b3477
--- /dev/null
+++ b/docs/telepresence-oss/2.14/redirects.yml
@@ -0,0 +1 @@
+- {from: "", to: "quick-start"}
diff --git a/docs/telepresence-oss/2.14/reference/dns.md b/docs/telepresence-oss/2.14/reference/dns.md
deleted file mode 120000
index 6f30e17b1..000000000
--- a/docs/telepresence-oss/2.14/reference/dns.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/dns.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/dns.md b/docs/telepresence-oss/2.14/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<name>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-address>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="UTF-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+      Intercept name    : web
+      State             : ACTIVE
+      Workload kind     : Deployment
+      Destination       : 127.0.0.1:8080
+      Volume Mount Point: /tmp/telfs-166119801
+      Intercepting      : HTTP requests that match all headers:
+            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="UTF-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Now curling that service by its short name works, and it will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
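+
+For example, while connected you can exercise these query types with standard DNS tools (illustrative; the service, namespace, and port names below are placeholders from the sample app):
+
+```console
+# A record for a namespace-qualified service name
+$ dig +short web-app.emojivoto
+
+# SRV record for a named port on that service
+$ dig +short _http._tcp.web-app.emojivoto SRV
+```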
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence-oss/2.14/reference/environment.md b/docs/telepresence-oss/2.14/reference/environment.md
deleted file mode 120000
index 114efbb04..000000000
--- a/docs/telepresence-oss/2.14/reference/environment.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/environment.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/environment.md b/docs/telepresence-oss/2.14/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code that runs on your laptop in place of the intercepted service.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the `x-intercept-id` HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header.
+This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md b/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md
deleted file mode 120000
index d9f460f07..000000000
--- a/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../telepresence/2.14/reference/intercepts/manual-agent.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md b/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
+as very strict company security policies preventing it.
+
+<Alert severity="warning">
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable. Try the Mutating Webhook before proceeding.
+</Alert>
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent configmap entry.
+It's important that the generated file has the same name as the service, with no extension:
+
+```console
+$ telepresence genyaml config --workload my-service -o /tmp/my-service
+$ cat /tmp/my-service-config.yaml
+agentImage: docker.io/datawire/tel2:2.7.0
+agentName: my-service
+containers:
+- Mounts: null
+  envPrefix: A_
+  intercepts:
+  - agentPort: 9900
+    containerPort: 8080
+    protocol: TCP
+    serviceName: my-service
+    servicePort: 80
+    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
+  mountPoint: /tel_app_mounts/echo-container
+  name: echo-container
+logLevel: info
+managerHost: traffic-manager.ambassador
+managerPort: 8081
+manual: true
+namespace: default
+workloadKind: Deployment
+workloadName: my-service
+```
+
+Next, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
+$ cat /tmp/my-service-agent.yaml
+args:
+- agent
+env:
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: status.podIP
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+- mountPath: /tel_app_exports
+  name: export-volume
+```
+
+Next, generate the YAML for the init-container:
+
+```console
+$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
+$ cat /tmp/my-service-init.yaml
+args:
+- agent-init
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: tel-agent-init
+resources: {}
+securityContext:
+  capabilities:
+    add:
+    - NET_ADMIN
+volumeMounts:
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+```
+
+Next, generate the YAML for the volumes:
+
+```console
+$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
+$ cat /tmp/my-service-volume.yaml
+- downwardAPI:
+    items:
+    - fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.annotations
+      path: annotations
+  name: traffic-annotations
+- configMap:
+    items:
+    - key: my-service
+      path: config.yaml
+    name: telepresence-agents
+  name: traffic-config
+- emptyDir: {}
+  name: export-volume
+```
+
+<Alert severity="info">
+Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
+</Alert>
+
+### 2. Creating (or updating) the configmap
+
+The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
+modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:
+
+```console
+$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
+```
+
+If it already exists, new entries can be added under the `data` key using `kubectl edit configmap telepresence-agents`.
+
+### 3. Injecting the YAML into the Deployment
+
+You now need to edit the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements
+of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation
+`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence-oss/2.14/reference/linkerd.md b/docs/telepresence-oss/2.14/reference/linkerd.md deleted file mode 120000 index 0d570179c..000000000 --- a/docs/telepresence-oss/2.14/reference/linkerd.md +++ /dev/null @@ -1 +0,0 @@ -../../../telepresence/2.14/reference/linkerd.md \ No newline at end of file diff --git a/docs/telepresence-oss/2.14/reference/linkerd.md b/docs/telepresence-oss/2.14/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence-oss/2.14/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. 
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence-oss/2.14/reference/rbac.md b/docs/telepresence-oss/2.14/reference/rbac.md
deleted file mode 120000
index c750461d1..000000000
--- a/docs/telepresence-oss/2.14/reference/rbac.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/rbac.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/rbac.md b/docs/telepresence-oss/2.14/reference/rbac.md
new file mode 100644
index 000000000..d78133441
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/rbac.md
@@ -0,0 +1,236 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`.
After an administrator has applied the RBAC configuration, create a `config.yaml` in your current directory that looks like the following:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster # Must match the cluster value in the contexts config
  cluster:
    ## The cluster field is highly cloud dependent.
contexts:
- name: my-context
  context:
    cluster: my-cluster # Must match the name field in the clusters config
    user: tp-user
users:
- name: tp-user # Must match the name of the Service Account created by the cluster admin
  user:
    token: # See note below
```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<unique-id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.

## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: by running `telepresence connect`, or by installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: telepresence-test-developer
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace only telepresence user access

This section covers RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

For each accessible namespace:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: tp-namespace # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.

diff --git a/docs/telepresence-oss/2.14/reference/restapi.md b/docs/telepresence-oss/2.14/reference/restapi.md
deleted file mode 120000
index d0b7a0a01..000000000
--- a/docs/telepresence-oss/2.14/reference/restapi.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/restapi.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/restapi.md b/docs/telepresence-oss/2.14/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/restapi.md
@@ -0,0 +1,93 @@

# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar.
Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active.
2. The "/healthz" endpoint must respond with OK.
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.

#### test endpoint using curl
Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod.
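
Because both the port and the intercept ID are exposed as environment variables, a test script does not need to hard-code them. A minimal sketch, assuming the intercept's environment (including `TELEPRESENCE_API_PORT` and `TELEPRESENCE_INTERCEPT_ID`) has been propagated to your shell:

```shell
# Query consume-here using the values Telepresence injects into the intercept environment.
curl -s "localhost:${TELEPRESENCE_API_PORT}/consume-here?path=/api" \
  -H "x-telepresence-caller-intercept-id: ${TELEPRESENCE_INTERCEPT_ID}" \
  -H 'x: y'
```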

### intercept-info
`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is omitted when `intercepted` is `false`.

#### test endpoint using curl
Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
```

diff --git a/docs/telepresence-oss/2.14/reference/tun-device.md b/docs/telepresence-oss/2.14/reference/tun-device.md
deleted file mode 120000
index f18846b0f..000000000
--- a/docs/telepresence-oss/2.14/reference/tun-device.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/tun-device.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/tun-device.md b/docs/telepresence-oss/2.14/reference/tun-device.md
new file mode 100644
index 000000000..af7e3828c
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/tun-device.md
@@ -0,0 +1,27 @@

# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device
The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP
The TUN-device is capable of routing both TCP and UDP traffic.

### No SSH required

The VIF approach is somewhat similar to using `sshuttle`, but without any requirements for extra software, configuration, or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption
When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in either the client or the traffic-agent, and the traffic-agent container can run as the default user.

### No Firewall rules
With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.

diff --git a/docs/telepresence-oss/2.14/reference/volume.md b/docs/telepresence-oss/2.14/reference/volume.md
deleted file mode 120000
index 16ff0c149..000000000
--- a/docs/telepresence-oss/2.14/reference/volume.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/volume.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/volume.md b/docs/telepresence-oss/2.14/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/volume.md
@@ -0,0 +1,36 @@

# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.

```
telepresence intercept <service_name> --port <port> --mount=/tmp/ -- /bin/bash
```

In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.

Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.

```
$ telepresence intercept <service_name> --port <port> --mount=true -- /bin/bash
Using Deployment <deployment_name>
intercepted
    Intercept name    : <service_name>
    State             : ACTIVE
    Workload kind     : Deployment
    Destination       : 127.0.0.1:<port>
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
    Intercepting      : all TCP connections

bash-3.2$ echo $TELEPRESENCE_ROOT
/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
```

`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.

With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend `$TELEPRESENCE_ROOT` to the paths of the mounted volumes it accesses.

For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.

If using `--mount=true` without a command, you can use either of the environment variable flags (`--env-file` or `--env-json`) to retrieve the variable.
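
As a concrete sketch (assuming a hypothetical `myservice` workload whose pod mounts the default service account), the mounted secrets become visible under the mount point:

```console
$ telepresence intercept myservice --port 8080 --mount=true -- /bin/bash
bash-3.2$ ls $TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
```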
diff --git a/docs/telepresence-oss/2.14/reference/vpn.md b/docs/telepresence-oss/2.14/reference/vpn.md
deleted file mode 120000
index cc429db17..000000000
--- a/docs/telepresence-oss/2.14/reference/vpn.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.14/reference/vpn.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.14/reference/vpn.md b/docs/telepresence-oss/2.14/reference/vpn.md
new file mode 100644
index 000000000..457cc873c
--- /dev/null
+++ b/docs/telepresence-oss/2.14/reference/vpn.md
@@ -0,0 +1,89 @@

# Telepresence and VPNs

It is often important to set up Kubernetes API server endpoints to be only accessible via a VPN. In setups like these, users need to connect first to their VPN, and then use Telepresence to connect to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use, the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts.

The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed.

## VPN Configuration

Let's begin by reviewing what a VPN does and imagining a sample configuration that might conflict with Telepresence. Usually, a VPN client adds two kinds of routes to your machine when you connect. The first serves to override your default route; in other words, it makes sure that packets you send out to the public internet go through the private tunnel instead of your ethernet or wifi adapter. We'll call this a `public VPN route`. The second kind of route is a `private VPN route`. These are the routes that allow your machine to access hosts inside the VPN that are not accessible to the public internet. Generally speaking, this is a more circumscribed route that will connect your machine only to reachable hosts on the private network, such as your Kubernetes API server.

This diagram represents what happens when you connect to a VPN, supposing that your private network spans the CIDR range `10.0.0.0/8`.

![VPN routing](../images/vpn-routing.jpg)

## Kubernetes configuration

One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services. This is one of the key elements of Kubernetes networking, as it allows applications on the cluster to reach each other. When Telepresence connects you to the cluster, it will try to connect you to the IP addresses that your cluster assigns to services and pods. Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes cluster will place resources in. Let's imagine your cluster is configured to place services in `10.130.0.0/16` and pods in `10.132.0.0/16`:

![VPN Kubernetes config](../images/vpn-k8s-config.jpg)

## Telepresence conflicts

When you run `telepresence connect` to connect to a cluster, it talks to the API server to figure out what pod and service CIDRs it needs to map in your machine. If it detects that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an error and inform you of the conflicting subnets:

```console
$ telepresence connect
telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1"
```

To resolve this, you'll need to carefully consider what your network layout looks like. Telepresence is refusing to map these conflicting subnets because mapping them could render certain hosts that are inside the VPN completely unreachable. However, you (or your network admin) know better than anyone how hosts are spread out inside your VPN. Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only being spun up in one of the subblocks of the `/8` space.
Let's say, for example, that you happen to know that all your hosts in the VPN are bunched up in the first half of the space, `10.0.0.0/9` (and that you know that any new hosts will only be assigned IP addresses from the `/9` block). In this case you can configure Telepresence to allow mapping of the other half of this CIDR block, which is where the services and pods happen to be. To do this, all you have to do is configure the `client.routing.allowConflictingSubnets` flag in the Telepresence helm chart. You can do this directly via `telepresence helm upgrade`:

```console
$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}"
```

You can also choose to be more specific about this, and only allow the CIDRs that you KNOW are in use by the cluster:

```console
$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}"
```

The end result of this (assuming an allow list of `/9`) will be a configuration like this:

![VPN Telepresence](../images/vpn-with-tele.jpg)
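
Once the allow list is in place, a quick sanity check is to reconnect and confirm that the connect step no longer reports overlapping subnets (a sketch; the exact output depends on your cluster):

```console
$ telepresence quit
$ telepresence connect
```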
diff --git a/docs/telepresence-oss/2.15/ci/github-actions.md b/docs/telepresence-oss/2.15/ci/github-actions.md
deleted file mode 120000
index c14bd13e9..000000000
--- a/docs/telepresence-oss/2.15/ci/github-actions.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/ci/github-actions.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/ci/github-actions.md b/docs/telepresence-oss/2.15/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence-oss/2.15/ci/github-actions.md
@@ -0,0 +1,176 @@

---
title: GitHub Actions for Telepresence
description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
---

import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from '../quick-start/qs-cards'

# Telepresence with GitHub Actions

Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent service. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.

You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.

## GitHub Actions for Telepresence

Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:

 - **configure**: Sets up the initial Telepresence configuration that is needed to run the other actions successfully.
 - **install**: Installs Telepresence on your CI server, either the latest version or one you specify.
 - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key and set it as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
 - **connect**: Connects to the remote target environment.
 - **intercept**: Redirects traffic intended for the remote service to the version of the service running in CI so you can run integration tests.

Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of the job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.

# Using Telepresence in your GitHub Actions CI pipeline

## Prerequisites

To enable GitHub Actions with Telepresence, you need:

* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:

  ```yaml
  apiVersion: v1
  clusters:
  - cluster:
      server: https://127.0.0.1
      extensions:
      - name: telepresence.io
        extension:
          manager:
            namespace: traffic-manager-namespace
    name: example-cluster
  ```

* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.

**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more development workflows.

## Initial configuration setup

To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:

This action only supports Ubuntu runners at the moment.

1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:

   ```yaml
   name: Configuring telepresence
   on: workflow_dispatch
   jobs:
     configuring:
       name: Configure telepresence
       runs-on: ubuntu-latest
       env:
         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
       steps:
         - name: Checkout
           uses: actions/checkout@v3
         #---- here run your custom command to connect to your cluster
         #- name: Connect to cluster
         #  shell: bash
         #  run: ./connect-to-cluster
         #----
         - name: Configuring Telepresence
           uses: datawire/telepresence-actions/configure@v1.0-rc
           with:
             version: latest
   ```

1. Push the `configure-telepresence.yaml` file to your repository.
1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.

When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file, if you provide one. This configuration file should be placed in `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.

When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run the GitHub Action properly when there is a new push to the branch in your repository.

## Using Telepresence in your GitHub Actions workflows

1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.

   ```yaml
   name: Run Integration Tests
   on:
     push:
       branches-ignore:
         - 'main'
   jobs:
     my-job:
       name: Run Integration Test using Remote Cluster
       runs-on: ubuntu-latest
       env:
         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
         KUBECONFIG: /opt/kubeconfig
       steps:
         - name: Checkout
           uses: actions/checkout@v3
           with:
             ref: ${{ github.event.pull_request.head.sha }}
         #---- here run your custom command to run your service
         #- name: Run your service to test
         #  shell: bash
         #  run: ./run_local_service
         #----
         # First you need to log in to Telepresence with your API key
         - name: Create kubeconfig file
           run: |
             cat <<EOF > /opt/kubeconfig
             ${{ env.KUBECONFIG_FILE }}
             EOF
         - name: Install Telepresence
           uses: datawire/telepresence-actions/install@v1.0-rc
           with:
             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
         - name: Telepresence connect
           uses: datawire/telepresence-actions/connect@v1.0-rc
         - name: Login
           uses: datawire/telepresence-actions/login@v1.0-rc
           with:
             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
         - name: Intercept the service
           uses: datawire/telepresence-actions/intercept@v1.0-rc
           with:
             service_name: service-name
             service_port: 8081:8080
             namespace: namespacename-of-your-service
             http_header: "x-telepresence-intercept-id=service-intercepted"
             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
         #---- here run your custom command
         #- name: Run integrations test
         #  shell: bash
         #  run: ./run_integration_test
         #----
   ```

The previous is an example of a workflow that:

* Checks out the repository code.
* Has a placeholder step to run the service during CI.
* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
* Installs Telepresence.
* Runs Telepresence Connect.
* Logs in to Telepresence.
* Intercepts traffic to the service running in the remote cluster.
* Provides a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.

This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change that is pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.

diff --git a/docs/telepresence-oss/2.15/community.md b/docs/telepresence-oss/2.15/community.md
deleted file mode 120000
index 4366f7c84..000000000
--- a/docs/telepresence-oss/2.15/community.md
+++ /dev/null
@@ -1 +0,0 @@
-../../telepresence/2.15/community.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/community.md b/docs/telepresence-oss/2.15/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence-oss/2.15/community.md
@@ -0,0 +1,12 @@

# Community

## Contributor's guide
Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) on GitHub to learn how you can help make Telepresence better.

## Changelog
Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) describes new features, bug fixes, and updates to each version of Telepresence.

## Meetings
Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.

diff --git a/docs/telepresence-oss/2.15/concepts/devloop.md b/docs/telepresence-oss/2.15/concepts/devloop.md
deleted file mode 120000
index de9a4d193..000000000
--- a/docs/telepresence-oss/2.15/concepts/devloop.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/concepts/devloop.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/concepts/devloop.md b/docs/telepresence-oss/2.15/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence-oss/2.15/concepts/devloop.md
@@ -0,0 +1,54 @@

---
title: "The developer and the inner dev loop | Ambassador"
---

# The developer experience and the inner dev loop

## How is the developer experience changing?

The developer experience is the workflow a developer uses to develop, test, deploy, and release software.

Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.

The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient, and automated as possible.

Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.

Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid "live-reload" inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.

## What is the inner dev loop?

The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.

Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.

In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes (3 minutes coding; 1 minute building, i.e. compiling/deploying/reloading; 1 minute testing/inspecting; and 10-20 seconds for committing code), they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only "developer tax" being paid here is for the commit process, which is negligible.

![traditional inner dev loop](../images/trad-inner-dev-loop.png)

## In search of lost time: How does containerization change the inner dev loop?

The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.

Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:

* packaging code in containers
* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
* pushing the container to the registry
* deploying containers in Kubernetes

Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes (not atypical with a standard container build, registry upload, and deploy), then the number of possible development iterations per day drops to ~40. At the extreme, that's a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.

![container inner dev loop](../images/container-inner-dev-loop.png)

## Tackling the slow inner dev loop

A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.

For example:

* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.

New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence-oss/2.15/concepts/devworkflow.md b/docs/telepresence-oss/2.15/concepts/devworkflow.md
deleted file mode 120000
index 509e28d90..000000000
--- a/docs/telepresence-oss/2.15/concepts/devworkflow.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/concepts/devworkflow.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/concepts/devworkflow.md b/docs/telepresence-oss/2.15/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence-oss/2.15/concepts/devworkflow.md
@@ -0,0 +1,7 @@

# The changing development workflow

A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn't the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That's the beauty of containerized development.

However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a "quicker way to develop software" applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.

In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.

diff --git a/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md
deleted file mode 120000
index 52b8a2e04..000000000
--- a/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/howtos/cluster-in-vm.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md b/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence-oss/2.15/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@

---
title: "Considerations for locally hosted clusters | Ambassador"
description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
---

# Network considerations for locally hosted clusters

## The problem
Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.

### Example:
A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections.
In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.

Now, if a request arrives to Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster, and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections.

## Solution

### Create a bridge network
A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.

To create a bridge network, you need to change the network settings of the guest running your cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.

### Vagrant + Virtualbox + k3s example
Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.

#### Vagrantfile
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# bridge is the name of the host's default network device
$bridge = 'wlp5s0'

# default_route should be the IP of the host's default route.
$default_route = '192.168.1.1'

# nameserver must be the IP of an external DNS, such as 8.8.8.8
$nameserver = '8.8.8.8'

# server_name should also be added to the host's /etc/hosts file and point to the server_ip
# for easy access when pushing docker images
server_name = 'multi'

# static IPs for the server and agents. Those IPs must be on the default router's subnet
server_ip = '192.168.1.110'
agents = {
  'agent1' => '192.168.1.111',
  'agent2' => '192.168.1.112',
}

# Extra parameters in INSTALL_K3S_EXEC variable because of
# K3s picking up the wrong interface when starting server and agent
# https://github.com/alexellis/k3sup/issues/306
server_script = <<-SHELL
  sudo -i
  apk add curl
  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
  mkdir -p /etc/rancher/k3s
  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  "multi:5000":
    endpoint:
      - "http://#{server_ip}:5000"
EOF
  curl -sfL https://get.k3s.io | sh -
  echo "Sleeping for 5 seconds to wait for k3s to start"
  sleep 5
  cp /var/lib/rancher/k3s/server/token /vagrant_shared
  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
  SHELL

agent_script = <<-SHELL
  sudo -i
  apk add curl
  export K3S_TOKEN_FILE=/vagrant_shared/token
  export K3S_URL=https://#{server_ip}:6443
  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
  mkdir -p /etc/rancher/k3s
  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  "multi:5000":
    endpoint:
      - "http://#{server_ip}:5000"
EOF
  curl -sfL https://get.k3s.io | sh -
  SHELL

def config_vm(name, ip, script, vm)
  # The network_script has two objectives:
  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
  #    the NAT network provides.
  network_script = <<-SHELL
    sudo -i
    ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route}
    cp /etc/resolv.conf /etc/resolv.conf.orig
    sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
  SHELL

  vm.hostname = name
  vm.network 'public_network', bridge: $bridge, ip: ip
  vm.synced_folder './shared', '/vagrant_shared'
  vm.provider 'virtualbox' do |vb|
    vb.memory = '4096'
    vb.cpus = '2'
  end
  vm.provision 'shell', inline: script
  vm.provision 'shell', inline: network_script, run: 'always'
end

Vagrant.configure('2') do |config|
  config.vm.box = 'generic/alpine314'

  config.vm.define 'server', primary: true do |server|
    config_vm(server_name, server_ip, server_script, server.vm)
  end

  agents.each do |agent_name, agent_ip|
    config.vm.define agent_name do |agent|
      config_vm(agent_name, agent_ip, agent_script, agent.vm)
    end
  end
end
```

The Kubernetes manifest to add the registry:

#### registry.yaml
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
spec:
  replicas: 1
  selector:
    app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        app: kube-registry
        version: v0
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        hostPath:
          path: /var/lib/registry-storage
---
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    app: kube-registry
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    app: kube-registry
  ports:
  - name: registry
    port: 5000
    targetPort: 5000
    protocol: TCP
  type: LoadBalancer
```

diff --git a/docs/telepresence-oss/2.15/howtos/request.md b/docs/telepresence-oss/2.15/howtos/request.md
deleted file mode 120000
index 2cc74dbaf..000000000
--- a/docs/telepresence-oss/2.15/howtos/request.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/howtos/request.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/howtos/request.md b/docs/telepresence-oss/2.15/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence-oss/2.15/howtos/request.md
@@ -0,0 +1,12 @@

import Alert from '@material-ui/lab/Alert';

# Send requests to an intercepted service

Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.

 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
 2. Navigate to the desired service Intercepts page.
 3. Click the **Query** button to open the pop-up menu.
 4. Toggle between **CURL**, **Headers**, and **Browse**.

The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.

diff --git a/docs/telepresence-oss/2.15/install/helm.md b/docs/telepresence-oss/2.15/install/helm.md
deleted file mode 120000
index 92113524c..000000000
--- a/docs/telepresence-oss/2.15/install/helm.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/install/helm.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/install/helm.md b/docs/telepresence-oss/2.15/install/helm.md
new file mode 100644
index 000000000..8aefb1d59
--- /dev/null
+++ b/docs/telepresence-oss/2.15/install/helm.md
@@ -0,0 +1,181 @@

# Install the Traffic Manager with Helm

[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.

For more details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence).

## Before you begin

Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/) and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute [oc commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead.

The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.

Start by adding this repo to your Helm client with the following command:

```shell
helm repo add datawire https://app.getambassador.io
helm repo update
```

## Install with Helm

When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.

1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:

   ```shell
   kubectl create namespace ambassador
   ```

2. Install the Telepresence Traffic Manager with the following command:

   ```shell
   helm install traffic-manager --namespace ambassador datawire/telepresence
   ```

### Install into custom namespace

The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. For example, if you wanted to deploy the traffic manager to the `staging` namespace:

```bash
helm install traffic-manager --namespace staging datawire/telepresence
```

Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

See [the kubeconfig documentation](../../reference/config#manager) for more information.

### Upgrading the Traffic Manager

Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.

Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:

```shell
helm repo up
helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
```

If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`

## RBAC

### Installing a namespace-scoped traffic manager

You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.

For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. To do this, create a `values.yaml` like the following:

```yaml
managerRbac:
  create: true
  namespaced: true
  namespaces:
    - dev
    - staging
```

This can then be installed via:

```bash
helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
```

**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.

#### Namespace collision detection

The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To resolve this error, remove the overlap by dropping `staging` from either the first install or the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to run telepresence commands against certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user rbac will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
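+
+As a sketch of how these values combine (the release name `traffic-manager-rbac` is just an example):
+
+```shell
+# Render only the RBAC objects, for both the manager and its users
+helm install traffic-manager-rbac datawire/telepresence \
+  --namespace ambassador \
+  --set rbac.only=true \
+  --set clientRbac.create=true \
+  --set managerRbac.create=true
+```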
diff --git a/docs/telepresence-oss/2.15/install/migrate-from-legacy.md b/docs/telepresence-oss/2.15/install/migrate-from-legacy.md
deleted file mode 120000
index 64c89bc1e..000000000
--- a/docs/telepresence-oss/2.15/install/migrate-from-legacy.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/install/migrate-from-legacy.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/install/migrate-from-legacy.md b/docs/telepresence-oss/2.15/install/migrate-from-legacy.md
new file mode 100644
index 000000000..94307dfa1
--- /dev/null
+++ b/docs/telepresence-oss/2.15/install/migrate-from-legacy.md
@@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server; you could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has been mapped to and automatically
+runs it. This way, you can get started with Telepresence today using the commands you are already used to,
+and it will help you learn the Telepresence syntax as you go.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                      |
+|------------------------------------------------|-------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                       |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
+| --run-shell                                    | connect -- bash                           |
+| --run $cmd                                     | connect -- $cmd                           |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
+| --context,--namespace                          | --context, --namespace (haven't changed)  |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place afterward. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent <agents...> | --all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
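+
+For example, a full cleanup, assuming you want both the agents and the Traffic Manager gone:
+
+```shell
+# Remove the traffic agents from all workloads...
+telepresence uninstall --all-agents
+# ...then remove the Traffic Manager itself
+telepresence helm uninstall
+```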
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/quick-start/qs-cards.js b/docs/telepresence-oss/2.15/quick-start/qs-cards.js
deleted file mode 120000
index a5dfdfb47..000000000
--- a/docs/telepresence-oss/2.15/quick-start/qs-cards.js
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/quick-start/qs-cards.js
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/quick-start/qs-cards.js b/docs/telepresence-oss/2.15/quick-start/qs-cards.js
new file mode 100644
index 000000000..084af19b3
--- /dev/null
+++ b/docs/telepresence-oss/2.15/quick-start/qs-cards.js
@@ -0,0 +1,68 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            {/* The JSX tags and link targets were lost in this diff; "#" is a placeholder. */}
+            <Typography variant="h6">
+              <GatsbyLink to="#">Collaborating</GatsbyLink>
+            </Typography>
+            <Typography variant="body2">
+              Use personal intercepts to get specific requests when working with colleagues.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6">
+              <GatsbyLink to="#">Outbound Sessions</GatsbyLink>
+            </Typography>
+            <Typography variant="body2">
+              Control what your laptop can reach in the cluster while connected.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6">
+              <GatsbyLink to="#">Telepresence for Docker Compose</GatsbyLink>
+            </Typography>
+            <Typography variant="body2">
+              Develop in a hybrid local/cluster environment using Telepresence for Docker Compose.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+  );
+}
diff --git a/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less
deleted file mode 120000
index 2bfa76e1b..000000000
--- a/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/quick-start/telepresence-quickstart-landing.less
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less
new file mode 100644
index 000000000..e2a83df4f
--- /dev/null
+++ b/docs/telepresence-oss/2.15/quick-start/telepresence-quickstart-landing.less
@@ -0,0 +1,152 @@
+@import '~@src/components/Layout/vars.less';
+
+.doc-body .telepresence-quickstart-landing {
+  font-family: @InterFont;
+  color: @black;
+  margin: -8.4px auto 48px;
+  max-width: 1050px;
+  min-width: @docs-min-width;
+  width: 100%;
+
+  h1 {
+    color: @blue-dark;
+    font-weight: normal;
+    letter-spacing: 0.25px;
+    font-size: 33px;
+    margin: 0 0 15px;
+  }
+  p {
+    font-size: 0.875rem;
+    line-height: 24px;
+    margin: 0;
+    padding: 0;
+  }
+
+  .demo-cluster-container {
+    display: grid;
+    margin: 40px 0;
+    grid-template-columns: 1fr;
+    @media screen and (max-width: 900px) {
+      grid-template-columns: repeat(1, 1fr);
+    }
+  }
+  .main-title-container {
+    display: flex;
+    flex-direction: column;
+    align-items: center;
+    p {
+      text-align: center;
+      font-size: 0.875rem;
+    }
+  }
+  h2 {
+    font-size: 23px;
+    color: @black;
+    margin: 0 0 20px 0;
+    padding: 0;
+    &.underlined {
+      padding-bottom: 2px;
+      border-bottom: 3px solid @grey-separator;
+      text-align: center;
+    }
+    strong {
+      font-weight: 800;
+    }
+    &.subtitle {
+      margin-bottom: 10px;
+      font-size: 19px;
+      line-height: 28px;
+    }
+  }
+  .learn-more,
+  .get-started {
+    font-size: 14px;
+    font-weight: 600;
+    letter-spacing: 1.25px;
+    display: flex;
+    align-items: center;
+    text-decoration: none;
+    &.inline {
+      display: inline-block;
+      text-decoration: underline;
+      font-size: unset;
+      font-weight: normal;
+      &:hover {
+        text-decoration: none;
+      }
+    }
+    &.blue {
+      color: @blue-5;
+    }
+    &.blue:hover {
+      color: @blue-dark;
+    }
+  }
+
+  .learn-more {
+    margin-top: 20px;
+    padding: 13px 0;
+  }
+
+  .box-container {
+    &.border {
+      border: 1.5px solid @grey-separator;
+      border-radius: 5px;
+      padding: 10px;
+    }
+    &::before {
+      content: '';
+      position: absolute;
+      width: 14px;
+      height: 14px;
+      border-radius: 50%;
+      top: 0;
+      left: 50%;
+      transform: translate(-50%, -50%);
+    }
+    p {
+      font-size: 0.875rem;
+      line-height: 24px;
+      padding: 0;
+    }
+  }
+
+  .telepresence-video {
+    border: 2px solid @grey-separator;
+    box-shadow: -6px 12px 0px fade(@black, 12%);
+    border-radius: 8px;
+    padding: 18px;
+    h2.telepresence-video-title {
+      font-weight: 400;
+      font-size: 23px;
+      line-height: 33px;
+      color: @blue-6;
+    }
+  }
+
+  .video-section {
+    display: grid;
+    grid-template-columns: 1fr 1fr;
+    column-gap: 20px;
+    @media screen and (max-width: 800px) {
+      grid-template-columns: 1fr;
+    }
+    ul {
+      font-size: 14px;
+      margin: 0 10px 6px 0;
+    }
+    .video-container {
+      position: relative;
+      padding-bottom: 56.25%; // 16:9 aspect ratio
+      height: 0;
+      iframe {
+        position: absolute;
+        top: 0;
+        left: 0;
+        width: 100%;
+        height: 100%;
+      }
+    }
+  }
+}
diff --git a/docs/telepresence-oss/2.15/redirects.yml b/docs/telepresence-oss/2.15/redirects.yml
deleted file mode 120000
index a2abb472b..000000000
--- a/docs/telepresence-oss/2.15/redirects.yml
+++ /dev/null
@@ -1 +0,0 @@
-../../telepresence/2.15/redirects.yml
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/redirects.yml b/docs/telepresence-oss/2.15/redirects.yml
new file mode 100644
index 000000000..c73de44b4
--- /dev/null
+++ b/docs/telepresence-oss/2.15/redirects.yml
@@ -0,0 +1,6 @@
+- {from: "", to: "quick-start"}
+- {from: /docs/telepresence/v2.15/quick-start/qs-go, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-java, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-node, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-python, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-python-fastapi, to: /docs/telepresence/v2.15/quickstart/}
diff --git a/docs/telepresence-oss/2.15/reference/dns.md b/docs/telepresence-oss/2.15/reference/dns.md
deleted file mode 120000
index 6a8773534..000000000
--- a/docs/telepresence-oss/2.15/reference/dns.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/dns.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/dns.md b/docs/telepresence-oss/2.15/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service-name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<namespace>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-IP>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+<!DOCTYPE html>
+<html>
+<head>
+    <title>Emoji Vote</title>
+    ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+<!DOCTYPE html>
+<html>
+<head>
+    <title>Emoji Vote</title>
+    ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence-oss/2.15/reference/environment.md b/docs/telepresence-oss/2.15/reference/environment.md
deleted file mode 120000
index d79328a91..000000000
--- a/docs/telepresence-oss/2.15/reference/environment.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/environment.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/environment.md b/docs/telepresence-oss/2.15/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code of the intercepted service running on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers, and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" http header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence-oss/2.15/reference/linkerd.md b/docs/telepresence-oss/2.15/reference/linkerd.md
deleted file mode 120000
index 30efa4206..000000000
--- a/docs/telepresence-oss/2.15/reference/linkerd.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/linkerd.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/linkerd.md b/docs/telepresence-oss/2.15/reference/linkerd.md
new file mode 100644
index 000000000..9b903fa76
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/linkerd.md
@@ -0,0 +1,75 @@
+---
+description: "How to get Linkerd meshed services working with Telepresence"
+---
+
+# Using Telepresence with Linkerd
+
+## Introduction
+Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:
+
+```yaml
+spec:
+  template:
+    metadata:
+      annotations:
+        config.linkerd.io/skip-outbound-ports: "8081"
+```
+
+The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.
+
+## Prerequisites
+1. [Telepresence binary](../../install)
+2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
+3. Kubectl
+4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)
+
+## Deploy
+Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system.
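+If you don't yet have a local copy of the quote service to test with, any process listening on port 8080 will do for a first check; for example:
+
+```shell
+# A throwaway local listener on port 8080, just to see the intercept respond
+python3 -m http.server 8080
+```
+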
+Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence-oss/2.15/reference/rbac.md b/docs/telepresence-oss/2.15/reference/rbac.md
deleted file mode 120000
index 1fcd48219..000000000
--- a/docs/telepresence-oss/2.15/reference/rbac.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/rbac.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/rbac.md b/docs/telepresence-oss/2.15/reference/rbac.md
new file mode 100644
index 000000000..d78133441
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/rbac.md
@@ -0,0 +1,236 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
+contexts:
+- name: my-context
+  context:
+    cluster: my-cluster # Must match the name field in the clusters config
+    user: tp-user
+users:
+- name: tp-user # Must match the name of the Service Account created by the cluster admin
+  user:
+    token: <service-account-token> # See note below
+```
+
+The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<unique-id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`.
+You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administrating Telepresence
+
+Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfigurations`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telepresence-admin
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-admin-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "pods/log"]
+    verbs: ["get", "list", "create", "delete", "watch"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "list", "update", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+  - apiGroups: ["apps"]
+    resources: ["deployments", "replicasets", "statefulsets"]
+    verbs: ["get", "list", "update", "create", "delete", "watch"]
+  - apiGroups: ["rbac.authorization.k8s.io"]
+    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "delete"]
+    resourceNames: ["telepresence-agents"]
+  - apiGroups: [""]
+    resources: ["namespaces"]
+    verbs: ["get", "list", "watch", "create"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "create", "list", "delete"]
+  - apiGroups: [""]
+    resources: ["serviceaccounts"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: ["admissionregistration.k8s.io"]
+    resources: ["mutatingwebhookconfigurations"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["list", "get", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-clusterrolebinding
+subjects:
+  - name: telepresence-admin
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-admin-role
+  kind: ClusterRole
+```
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).
+
+With `telepresence connect`, Telepresence uses your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user       # Update value for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+# For gather-logs command
+- apiGroups: [""]
+  resources: ["pods/log"]
+  verbs: ["get"]
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["list"]
+# Needed in order to maintain a list of workloads
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["namespaces", "services"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+### Traffic Manager connect permission
+In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
+in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
+
+```yaml
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traffic-manager-connect
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: traffic-manager-connect
+subjects:
+  - name: telepresence-test-developer
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: traffic-manager-connect
+  kind: Role
+```
+
+## Namespace only telepresence user access
+
+RBAC for multi-tenant scenarios in which multiple dev teams share a single cluster and users are constrained to specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user       # Update value for appropriate user name
+  namespace: tp-namespace # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
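+
+A quick way to sanity-check the resulting permissions is `kubectl auth can-i` (a sketch; the `tp-user` and `tp-namespace` names match the manifests above):
+
+```shell
+# The user should be able to list workloads in their namespace...
+kubectl auth can-i list deployments.apps -n tp-namespace \
+  --as=system:serviceaccount:tp-namespace:tp-user
+# ...but not modify them
+kubectl auth can-i delete deployments.apps -n tp-namespace \
+  --as=system:serviceaccount:tp-namespace:tp-user
+```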
diff --git a/docs/telepresence-oss/2.15/reference/restapi.md b/docs/telepresence-oss/2.15/reference/restapi.md
deleted file mode 120000
index af009b98d..000000000
--- a/docs/telepresence-oss/2.15/reference/restapi.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/restapi.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/restapi.md b/docs/telepresence-oss/2.15/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting the `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active.
+2. The "/healthz" endpoint must respond with OK.
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted. Matching headers means that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence-oss/2.15/reference/tun-device.md b/docs/telepresence-oss/2.15/reference/tun-device.md
deleted file mode 120000
index 5a2638183..000000000
--- a/docs/telepresence-oss/2.15/reference/tun-device.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/tun-device.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/tun-device.md b/docs/telepresence-oss/2.15/reference/tun-device.md
new file mode 100644
index 000000000..af7e3828c
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP traffic.
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle`, but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only a single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only a single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a Pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed either in the client or in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence-oss/2.15/reference/volume.md b/docs/telepresence-oss/2.15/reference/volume.md
deleted file mode 120000
index c9ac9d181..000000000
--- a/docs/telepresence-oss/2.15/reference/volume.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/volume.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/volume.md b/docs/telepresence-oss/2.15/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service_name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead. You can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service_name> --port <port> --mount=true -- /bin/bash
+Using Deployment <service_name>
+intercepted
+    Intercept name    : <service_name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the file paths used by the code you run locally, whether from the subshell or from the intercept command, will need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable.
diff --git a/docs/telepresence-oss/2.15/reference/vpn.md b/docs/telepresence-oss/2.15/reference/vpn.md
deleted file mode 120000
index 2faed38b4..000000000
--- a/docs/telepresence-oss/2.15/reference/vpn.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../telepresence/2.15/reference/vpn.md
\ No newline at end of file
diff --git a/docs/telepresence-oss/2.15/reference/vpn.md b/docs/telepresence-oss/2.15/reference/vpn.md
new file mode 100644
index 000000000..457cc873c
--- /dev/null
+++ b/docs/telepresence-oss/2.15/reference/vpn.md
@@ -0,0 +1,89 @@
+
+
+# Telepresence and VPNs
+
+It is often important to set up Kubernetes API server endpoints to be only accessible via a VPN.
+In setups like these, users need to connect first to their VPN, and then use Telepresence to connect
+to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use,
+the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts.
+
+The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed.
+
+## VPN Configuration
+
+Let's begin by reviewing what a VPN does and imagining a sample configuration that might
+conflict with Telepresence.
+Usually, a VPN client adds two kinds of routes to your machine when you connect.
+The first serves to override your default route; in other words, it makes sure that packets
+you send out to the public internet go through the private tunnel instead of your
+ethernet or wifi adapter. We'll call this a `public VPN route`.
+The second kind of route is a `private VPN route`. These are the routes that allow your
+machine to access hosts inside the VPN that are not accessible to the public internet.
+Generally speaking, this is a more circumscribed route that will connect your machine
+only to reachable hosts on the private network, such as your Kubernetes API server.
+
+This diagram represents what happens when you connect to a VPN, supposing that your
+private network spans the CIDR range `10.0.0.0/8`.
+
+![VPN routing](../images/vpn-routing.jpg)
+
+## Kubernetes configuration
+
+One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services.
+This is one of the key elements of Kubernetes networking, as it allows applications on the cluster
+to reach each other. When Telepresence connects you to the cluster, it will try to connect you
+to the IP addresses that your cluster assigns to services and pods.
+Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes
+cluster will place resources in. Let's imagine your cluster is configured to place services in
+`10.130.0.0/16` and pods in `10.132.0.0/16`:
+
+![VPN Kubernetes config](../images/vpn-k8s-config.jpg)
+
+## Telepresence conflicts
+
+When you run `telepresence connect` to connect to a cluster, it talks to the API server
+to figure out what pod and service CIDRs it needs to map in your machine. If it detects
+that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an
+error and inform you of the conflicting subnets:
+
+```console
+$ telepresence connect
+telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1"
+```
+
+To resolve this, you'll need to carefully consider what your network layout looks like.
+Telepresence is refusing to map these conflicting subnets because mapping them
+could render certain hosts that are inside the VPN completely unreachable. However,
+you (or your network admin) know better than anyone how hosts are spread out inside your VPN.
+Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only
+being spun up in one of the sub-blocks of the `/8` space.
+Let's say, for example,
+that you happen to know that all your hosts in the VPN are bunched up in the first
+half of the space -- `10.0.0.0/9` (and that you know that any new hosts will
+only be assigned IP addresses from the `/9` block). In this case, you
+can configure Telepresence to allow conflicts on the other half of this CIDR block, which is where the
+services and pods happen to be.
+To do this, all you have to do is configure the `client.routing.allowConflictingSubnets` flag
+in the Telepresence helm chart. You can do this directly via `telepresence helm upgrade`:
+
+```console
+$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}"
+```
+
+You can also choose to be more specific about this, and only allow the CIDRs that you KNOW
+are in use by the cluster:
+
+```console
+$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}"
+```
+
+The end result of this (assuming an allow list of `/9`) will be a configuration like this:
+
+![VPN Telepresence](../images/vpn-with-tele.jpg)
+
diff --git a/docs/telepresence-oss/latest b/docs/telepresence-oss/latest
deleted file mode 120000
index d991af3a2..000000000
--- a/docs/telepresence-oss/latest
+++ /dev/null
@@ -1 +0,0 @@
-2.15
\ No newline at end of file
diff --git a/docs/telepresence-oss/latest/ci/github-actions.md b/docs/telepresence-oss/latest/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence-oss/latest/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent service. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence in your CI server, with the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key and set it as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic intended for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n <namespace>`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+
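Before moving on, you can gather what the prerequisites above ask for. A minimal sketch, assuming the Traffic Manager runs in the `ambassador` namespace and that the [GitHub CLI](https://cli.github.com/) is authenticated against your repository; the names and file paths are illustrative:

```shell
# Find the Traffic Manager version running in the cluster; look for a
# version label in the Labels section of the output.
kubectl describe service traffic-manager -n ambassador

# Store the secrets that the workflows in this guide expect.
gh secret set TELEPRESENCE_API_KEY --body "$TELEPRESENCE_API_KEY"
gh secret set KUBECONFIG_FILE < ./kubeconfig.yaml
```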
+
+## Initial configuration setup
+
+To use the GitHub Actions for Telepresence, you first need to [configure Telepresence](../../reference/config/) so the repository is able to run your workflows. To complete the Telepresence setup:
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This file should be placed at `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+When you create a branch, do not remove the `.telepresence/config.yml` file. It is required for Telepresence to run the GitHub Action properly when there is a new push to the branch in your repository.
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and replace the placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # Write the cluster credentials to the file Telepresence will use
+         - name: Create kubeconfig file
+           run: |
+             cat <<EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespace-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+This example workflow:
+
+* Checks out the repository code.
+* Includes a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` secret to make it available to Telepresence.
+* Installs Telepresence.
+* Runs `telepresence connect`.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Includes a placeholder step for running integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow lets you run integration tests during the CI run against an ephemeral instance of your service, verifying that any change pushed to the working branch works as expected. After you push changes, the CI server runs the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence-oss/latest/community.md b/docs/telepresence-oss/latest/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence-oss/latest/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence-oss/latest/concepts/devloop.md b/docs/telepresence-oss/latest/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, to make the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building (i.e. compiling/deploying/reloading), 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases by 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence-oss/latest/concepts/devworkflow.md b/docs/telepresence-oss/latest/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment.
Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence-oss/latest/concepts/faster.md b/docs/telepresence-oss/latest/concepts/faster.md
new file mode 100644
index 000000000..fe5f3dd91
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce the time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should make it easy to code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all others remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration.
Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code and local debugging in production.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP/UDP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts.
diff --git a/docs/telepresence-oss/latest/concepts/goldenpaths.md b/docs/telepresence-oss/latest/concepts/goldenpaths.md
new file mode 100644
index 000000000..5f83e7a8c
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/goldenpaths.md
@@ -0,0 +1,7 @@
+# Golden Paths
+
+A golden path is a best practice or standardized process to apply when using Telepresence, often used to optimize productivity or quality control. It can be used as a benchmark or a reference point for measuring success and progress towards a particular goal or outcome.
+
+We have provided Golden Paths for multiple use cases, listed below.
+
+1. [Using Telepresence with Docker](../goldenpaths/docker)
\ No newline at end of file
diff --git a/docs/telepresence-oss/latest/concepts/goldenpaths/docker.md b/docs/telepresence-oss/latest/concepts/goldenpaths/docker.md
new file mode 100644
index 000000000..1b34a4008
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/goldenpaths/docker.md
@@ -0,0 +1,66 @@
+# Telepresence with Docker Golden Path
+
+## Why?
+
+Adopting Telepresence across your organization can be tedious, since in its handiest form it requires admin access and needs to get along with any exotic networking setup that your company may have.
+
+If Docker is already approved in your organization, this Golden Path should be considered.
+
+## How?
+
+When using Telepresence in Docker mode, users can eliminate the need for admin access on their machines, address several networking challenges, and forgo the need for third-party applications to enable volume mounts.
+
+You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container.
+This removes the need for root access, making Telepresence easier to adopt across your organization.
+
+Let's illustrate with a quick demo, assuming a default Kubernetes context named `default` and a simple HTTP service:
+
+```cli
+$ telepresence connect --docker
+Connected to context default (https://default.cluster.bakerstreet.io)
+
+$ docker ps
+CONTAINER ID   IMAGE                          COMMAND                 CREATED          STATUS          PORTS                        NAMES
+7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"  18 seconds ago   Up 16 seconds   127.0.0.1:58802->58802/tcp   tp-default
+```
+
+This method limits the scope of the potential networking issues since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-<context-name>` when listing your containers.
+
+Start an intercept:
+
+```cli
+$ telepresence intercept echo-easy --port 8080:80 -n default
+Using Deployment echo-easy
+   Intercept name         : echo-easy-default
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 127.0.0.1:8080
+   Service Port Identifier: proxied
+   Volume Mount Point     : /var/folders/x_/4x_4pfvx2j3_94f36x551g140000gp/T/telfs-505935483
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: e20f0764-7fd8-45c1-b911-b2adeee1af45:echo-easy-default'
+   Preview URL            : https://gracious-ishizaka-5365.preview.edgestack.me
+   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
+```
+
+Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context-name>`, and open the preview URL to see the traffic routed to your machine:
+
+```cli
+$ docker run \
+  --network=container:tp-default \
+  -e PORT=8080 jmalloc/echo-server
+Echo server listening on port 8080.
+127.0.0.1:41500 | GET /
+127.0.0.1:41512 | GET /favicon.ico
+127.0.0.1:41500 | GET /
+127.0.0.1:41512 | GET /favicon.ico
+```
+
+Make sure you also open the debugging port on your container so you can attach your IDE's local debugger.
+
+## Key learnings
+
+* Using the Docker mode of Telepresence **does not require root access**, and makes it **easier** to adopt across your organization.
+* It **limits the potential networking issues** you can encounter.
+* It leverages **Docker** for your interceptor.
diff --git a/docs/telepresence-oss/latest/concepts/intercepts.md b/docs/telepresence-oss/latest/concepts/intercepts.md
new file mode 100644
index 000000000..2356e8bfb
--- /dev/null
+++ b/docs/telepresence-oss/latest/concepts/intercepts.md
@@ -0,0 +1,96 @@
+---
+title: "Types of intercepts"
+description: "Short demonstration of personal vs global intercepts"
+---
+
+import React from 'react';
+
+import Alert from '@material-ui/lab/Alert';
+import AppBar from '@material-ui/core/AppBar';
+import Paper from '@material-ui/core/Paper';
+import Tab from '@material-ui/core/Tab';
+import TabContext from '@material-ui/lab/TabContext';
+import TabList from '@material-ui/lab/TabList';
+import TabPanel from '@material-ui/lab/TabPanel';
+import Animation from '@src/components/InterceptAnimation';
+
+export function TabsContainer({ children, ...props }) {
+  const [state, setState] = React.useState({curTab: "personal"});
+  React.useEffect(() => {
+    const query = new URLSearchParams(window.location.search);
+    var interceptType = query.get('intercept') || "regular";
+    if (state.curTab != interceptType) {
+      setState({curTab: interceptType});
+    }
+  }, [state, setState])
+  var setURL = function(newTab) {
+    history.replaceState(null,null,
+      `?intercept=${newTab}${window.location.hash}`,
+    );
+  };
+  return (
+    <div>
+      <TabContext value={state.curTab}>
+        <AppBar position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab value="regular" label="No intercept"/>
+            <Tab value="global" label="Global intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+# No intercept
+
+This is the normal operation of your cluster without Telepresence.
+
+# Global intercept
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
diff --git a/docs/telepresence-oss/latest/doc-links.yml b/docs/telepresence-oss/latest/doc-links.yml
new file mode 100644
index 000000000..ecc9da4f2
--- /dev/null
+++ b/docs/telepresence-oss/latest/doc-links.yml
@@ -0,0 +1,83 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: "Making the remote local: Faster feedback, collaboration and debugging"
+      link: concepts/faster
+    - title: Types of intercepts
+      link: concepts/intercepts
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Host a cluster in a local VM
+      link: howtos/cluster-in-vm
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Configure intercept using CLI
+          link: reference/intercepts/cli
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
\ No newline at end of file
diff --git a/docs/telepresence-oss/latest/faqs.md b/docs/telepresence-oss/latest/faqs.md
new file mode 100644
index 000000000..c6c80a9b7
--- /dev/null
+++ b/docs/telepresence-oss/latest/faqs.md
@@ -0,0 +1,103 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple Silicon), Linux, and Windows.
+
+**What protocols can be intercepted by Telepresence?**
+
+Both TCP and UDP are supported for global intercepts.
+
+Personal intercepts require HTTP. All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means that if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster does not need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon unless it runs in a Docker container?**
+
+The local daemon needs sudo to create a VIF (Virtual Network Interface) for outbound routing and DNS. Root access is needed to do that unless the daemon runs in a Docker container.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
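As a quick illustration (the workload and pod names here are hypothetical), you can confirm the injection by listing the containers of an intercepted pod:

```shell
# After the first intercept, the workload's pods include a traffic-agent
# container alongside the service container.
kubectl get pod example-service-5d4b8b7c9-abcde \
  -o jsonpath='{.spec.containers[*].name}'
# Example output: example-service traffic-agent
```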
+
+**How can I remove all the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence helm uninstall` to remove everything from the cluster, including the `traffic-manager` and all the `traffic-agent` containers injected into each pod being intercepted.
+
+Also run `telepresence quit -s` to stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application and its cluster components are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TLS encrypted connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that connection.
+
+**Is Telepresence OSS open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence-oss/latest/howtos/cluster-in-vm.md b/docs/telepresence-oss/latest/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence-oss/latest/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network and find the VIF. The VIF routes the request back to the cluster, and now the recursion is in motion. The final outcome of the request will likely be a timeout, but because the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also degrade other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments.
By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running your cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on the type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents. Those IPs must be on the default router's subnet
+server_ip = '192.168.1.110'
+agents = {
+  'agent1' => '192.168.1.111',
+  'agent2' => '192.168.1.112',
+}
+
+# Extra parameters in INSTALL_K3S_EXEC variable because of
+# K3s picking up the wrong interface when starting server and agent
+# https://github.com/alexellis/k3sup/issues/306
+server_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  echo "Sleeping for 5 seconds to wait for k3s to start"
+  sleep 5
+  cp /var/lib/rancher/k3s/server/token /vagrant_shared
+  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
+  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
+  SHELL
+
+agent_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export K3S_TOKEN_FILE=/vagrant_shared/token
+  export K3S_URL=https://#{server_ip}:6443
+  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  SHELL
+
+def config_vm(name, ip, script, vm)
+  # The network_script has two objectives:
+  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
+  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
+  #    the NAT network provides.
+  network_script = <<-SHELL
+    sudo -i
+    ip route delete default >/dev/null 2>&1 || true; ip route add default via #{$default_route}
+    cp /etc/resolv.conf /etc/resolv.conf.orig
+    sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
+    SHELL
+
+  vm.hostname = name
+  vm.network 'public_network', bridge: $bridge, ip: ip
+  vm.synced_folder './shared', '/vagrant_shared'
+  vm.provider 'virtualbox' do |vb|
+    vb.memory = '4096'
+    vb.cpus = '2'
+  end
+  vm.provision 'shell', inline: script
+  vm.provision 'shell', inline: network_script, run: 'always'
+end
+
+Vagrant.configure('2') do |config|
+  config.vm.box = 'generic/alpine314'
+
+  config.vm.define 'server', primary: true do |server|
+    config_vm(server_name, server_ip, server_script, server.vm)
+  end
+
+  agents.each do |agent_name, agent_ip|
+    config.vm.define agent_name do |agent|
+      config_vm(agent_name, agent_ip, agent_script, agent.vm)
+    end
+  end
+end
+```
+
+The Kubernetes manifest to add the registry:
+
+#### registry.yaml
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: kube-registry-v0
+  namespace: kube-system
+  labels:
+    k8s-app: kube-registry
+    version: v0
+spec:
+  replicas: 1
+  selector:
+    app: kube-registry
+    version: v0
+  template:
+    metadata:
+      labels:
+        app: kube-registry
+        version: v0
+    spec:
+      containers:
+        - name: registry
+          image: registry:2
+          resources:
+            limits:
+              cpu: 100m
+              memory: 200Mi
+          env:
+            - name: REGISTRY_HTTP_ADDR
+              value: :5000
+            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
+              value: /var/lib/registry
+          volumeMounts:
+            - name: image-store
+              mountPath: /var/lib/registry
+          ports:
+            - containerPort: 5000
+              name: registry
+              protocol: TCP
+      volumes:
+        - name: image-store
+          hostPath:
+            path: /var/lib/registry-storage
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kube-registry
+  namespace: kube-system
+  labels:
+    app: kube-registry
+    kubernetes.io/name: "KubeRegistry"
+spec:
+  selector:
+    app: kube-registry
+  ports:
+    - name: registry
+      port: 5000
+      targetPort: 5000
+      protocol: TCP
+  type: LoadBalancer
+```
+
diff --git a/docs/telepresence-oss/latest/howtos/intercepts.md b/docs/telepresence-oss/latest/howtos/intercepts.md
new file mode 100644
index 000000000..f933e6da2
--- /dev/null
+++ b/docs/telepresence-oss/latest/howtos/intercepts.md
@@ -0,0 +1,106 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   The 401 response is expected when you first connect.
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service-name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster are now routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+**Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
diff --git a/docs/telepresence-oss/latest/howtos/outbound.md b/docs/telepresence-oss/latest/howtos/outbound.md
new file mode 100644
index 000000000..1e063a665
--- /dev/null
+++ b/docs/telepresence-oss/latest/howtos/outbound.md
@@ -0,0 +1,83 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+  ```
+  $ telepresence connect
+  Launching Telepresence Daemon v2.3.7 (api v3)
+  Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+  [sudo] password:
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-public-IP>)
+  ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+  ```
+  $ telepresence status
+  Root Daemon: Running
+    Version     : v2.3.7 (api 3)
+    Primary DNS : ""
+    Fallback DNS: ""
+  User Daemon: Running
+    Version           : v2.3.7 (api 3)
+    Ambassador Cloud  : Logged out
+    Status            : Connected
+    Kubernetes server : https://<cluster-public-IP>
+    Kubernetes context: default
+    Telepresence proxy: ON (networking to the cluster is enabled)
+    Intercepts        : 0 total
+  ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop were actually running in the cluster.
+
+  ```
+  $ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+    <head>
+      <title>Emoji Vote</title>
+  ...
+  ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+  ```
+  $ telepresence quit
+  Telepresence Daemon quitting...done
+  ```
+
+When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster.
This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma-separated-list-of-namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing a service that isn't yet deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, because the services it depends on may not be exposed through ingress controllers. With outbound connectivity, your local service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin: the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+  ```
+  $ telepresence intercept <intercept-name> --namespace <namespace> --local-only
+  ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <intercept-name>`. This removes unqualified name access.
diff --git a/docs/telepresence-oss/latest/howtos/request.md b/docs/telepresence-oss/latest/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence-oss/latest/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+ 2. Navigate to the desired service's Intercepts page.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information help you get started querying the desired intercepted service and managing header propagation.
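For instance, the **CURL** tab produces a command you can run from your terminal. A minimal sketch, where the header value and the service address are illustrative placeholders for what Ambassador Cloud shows for your intercept:

```shell
# Requests carrying the intercept header are routed to the intercepted (local)
# instance; requests without it continue to the version running in the cluster.
curl -H 'x-telepresence-intercept-id: <intercept-id>:<intercept-name>' \
  http://example-service.default:8080/
```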
diff --git a/docs/telepresence-oss/latest/images/TP_Architecture.svg b/docs/telepresence-oss/latest/images/TP_Architecture.svg
new file mode 100644
index 000000000..a93bdd7eb
--- /dev/null
+++ b/docs/telepresence-oss/latest/images/TP_Architecture.svg
@@ -0,0 +1,900 @@
+(SVG markup of the Telepresence architecture diagram)
diff --git a/docs/telepresence-oss/latest/images/container-inner-dev-loop.png b/docs/telepresence-oss/latest/images/container-inner-dev-loop.png
new file mode 100644
index 000000000..06586cd6e
Binary files /dev/null and b/docs/telepresence-oss/latest/images/container-inner-dev-loop.png differ
diff --git a/docs/telepresence-oss/latest/images/docker-extension-intercept.png b/docs/telepresence-oss/latest/images/docker-extension-intercept.png
new file mode 100644
index 000000000..d01daef8f
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker-extension-intercept.png differ
diff --git a/docs/telepresence-oss/latest/images/docker-header-containers.png b/docs/telepresence-oss/latest/images/docker-header-containers.png
new file mode 100644
index 000000000..06f422a93
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker-header-containers.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_button_drop_down.png b/docs/telepresence-oss/latest/images/docker_extension_button_drop_down.png
new file mode 100644
index 000000000..775323e56
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_button_drop_down.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_connect_to_cluster.png b/docs/telepresence-oss/latest/images/docker_extension_connect_to_cluster.png
new file mode 100644
index 000000000..eb95e5180
Binary files /dev/null and
b/docs/telepresence-oss/latest/images/docker_extension_connect_to_cluster.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_login.png b/docs/telepresence-oss/latest/images/docker_extension_login.png
new file mode 100644
index 000000000..8874fa959
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_login.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_running_intercepts_page.png b/docs/telepresence-oss/latest/images/docker_extension_running_intercepts_page.png
new file mode 100644
index 000000000..7870e2691
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_running_intercepts_page.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_start_intercept_page.png b/docs/telepresence-oss/latest/images/docker_extension_start_intercept_page.png
new file mode 100644
index 000000000..6788994e3
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_start_intercept_page.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_start_intercept_popup.png b/docs/telepresence-oss/latest/images/docker_extension_start_intercept_popup.png
new file mode 100644
index 000000000..12839b0e5
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_start_intercept_popup.png differ
diff --git a/docs/telepresence-oss/latest/images/docker_extension_upload_spec_button.png b/docs/telepresence-oss/latest/images/docker_extension_upload_spec_button.png
new file mode 100644
index 000000000..f571aefd3
Binary files /dev/null and b/docs/telepresence-oss/latest/images/docker_extension_upload_spec_button.png differ
diff --git a/docs/telepresence-oss/latest/images/github-login.png b/docs/telepresence-oss/latest/images/github-login.png
new file mode 100644
index 000000000..cfd4d4bf1
Binary files /dev/null and b/docs/telepresence-oss/latest/images/github-login.png differ
diff --git a/docs/telepresence-oss/latest/images/logo.png b/docs/telepresence-oss/latest/images/logo.png
new file mode 100644
index 000000000..701f63ba8
Binary files /dev/null and b/docs/telepresence-oss/latest/images/logo.png differ
diff --git a/docs/telepresence-oss/latest/images/mode-defaults.png b/docs/telepresence-oss/latest/images/mode-defaults.png
new file mode 100644
index 000000000..1dcca4116
Binary files /dev/null and b/docs/telepresence-oss/latest/images/mode-defaults.png differ
diff --git a/docs/telepresence-oss/latest/images/split-tunnel.png b/docs/telepresence-oss/latest/images/split-tunnel.png
new file mode 100644
index 000000000..5bf30378e
Binary files /dev/null and b/docs/telepresence-oss/latest/images/split-tunnel.png differ
diff --git a/docs/telepresence-oss/latest/images/tracing.png b/docs/telepresence-oss/latest/images/tracing.png
new file mode 100644
index 000000000..c374807e5
Binary files /dev/null and b/docs/telepresence-oss/latest/images/tracing.png differ
diff --git a/docs/telepresence-oss/latest/images/trad-inner-dev-loop.png b/docs/telepresence-oss/latest/images/trad-inner-dev-loop.png
new file mode 100644
index 000000000..618b674f8
Binary files /dev/null and b/docs/telepresence-oss/latest/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence-oss/latest/images/tunnelblick.png b/docs/telepresence-oss/latest/images/tunnelblick.png
new file mode 100644
index 000000000..8944d445a
Binary files /dev/null and b/docs/telepresence-oss/latest/images/tunnelblick.png differ
diff --git
a/docs/telepresence-oss/latest/images/vpn-dns.png b/docs/telepresence-oss/latest/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence-oss/latest/images/vpn-dns.png differ diff --git a/docs/telepresence-oss/latest/install/cloud.md b/docs/telepresence-oss/latest/install/cloud.md new file mode 100644 index 000000000..1aac0855f --- /dev/null +++ b/docs/telepresence-oss/latest/install/cloud.md @@ -0,0 +1,55 @@
+# Provider Prerequisites for Traffic Manager
+
+## GKE
+
+### Firewall Rules for private clusters
+
+A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's
+webhook injector from being invoked by the Kubernetes API server.
+For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods.
+For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:
+
+```bash
+$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
+  masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28
+
+$ gcloud compute firewall-rules list \
+    --filter 'name~^gke-tele-webhook-gke' \
+    --format 'table(
+        name,
+        network,
+        direction,
+        sourceRanges.list():label=SRC_RANGES,
+        allowed[].map().firewall_rule().list():label=ALLOW,
+        targetTags.list():label=TARGET_TAGS
+    )'
+
+NAME                                  NETWORK           DIRECTION  SRC_RANGES     ALLOW                         TARGET_TAGS
+gke-tele-webhook-gke-33fa1791-all     tele-webhook-net  INGRESS    10.40.0.0/14   esp,ah,sctp,tcp,udp,icmp      gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-master  tele-webhook-net  INGRESS    172.16.0.0/28  tcp:10250,tcp:443             gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-vms     tele-webhook-net  INGRESS    10.128.0.0/9   icmp,tcp:1-65535,udp:1-65535  gke-tele-webhook-gke-33fa1791-node
+# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node
+
+$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
+    --action ALLOW \
+    --direction INGRESS \
+    --source-ranges 172.16.0.0/28 \
+    --rules tcp:8443 \
+    --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
+Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
+Creating firewall...done.
+NAME                          NETWORK           DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
+gke-tele-webhook-gke-webhook  tele-webhook-net  INGRESS    1000      tcp:8443        False
+```
+
+### GKE Authentication Plugin
+
+Starting with Kubernetes version 1.26, GKE requires the use of the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
+You will need to install this plugin to use Telepresence with Docker while using GKE.
+
+## EKS
+
+### EKS Authentication Plugin
+
+If you are using an AWS CLI version earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html).
+You will need to install this plugin to use Telepresence with Docker while using EKS.
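+
+To check whether this applies to you, inspect the CLI version first; a minimal sketch, assuming the `aws` CLI is already on your PATH:
+
+```shell
+# Print the AWS CLI version; anything older than 1.16.156 also needs aws-iam-authenticator
+aws --version
+```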
\ No newline at end of file diff --git a/docs/telepresence-oss/latest/install/helm.md b/docs/telepresence-oss/latest/install/helm.md new file mode 100644 index 000000000..8aefb1d59 --- /dev/null +++ b/docs/telepresence-oss/latest/install/helm.md @@ -0,0 +1,181 @@
+# Install the Traffic Manager with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence).
+
+## Before you begin
+
+Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute [oc commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead.
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+   ```shell
+   kubectl create namespace ambassador
+   ```
+
+2. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
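+
+To see which chart versions are available from the repo, standard Helm tooling works; a quick sketch, assuming the `datawire` repo added above:
+
+```shell
+# List all published versions of the Telepresence Helm chart
+helm search repo datawire/telepresence --versions
+```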
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, resolve the overlap by removing `staging` from either the first install or the second.
+
+#### Namespace scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to run telepresence commands against certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+create only the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence-oss/latest/install/index.md b/docs/telepresence-oss/latest/install/index.md new file mode 100644 index 000000000..2f1f61153 --- /dev/null +++ b/docs/telepresence-oss/latest/install/index.md @@ -0,0 +1,106 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence OSS by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+
+
+
+```shell
+# Intel Macs
+
+# 1. Download the latest binary (~105 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# 1. Download the latest binary (~101 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-arm64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~95 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-linux-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+We've developed a PowerShell script to simplify the process of installing Telepresence. Here are the commands you can execute:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2.
Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install Telepresence's dependencies. It will install Telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2oss/releases/download/vx.y.z/telepresence-darwin-amd64
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2oss/releases/download/vx.y.z/telepresence-darwin-arm64
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2oss/releases/download/vx.y.z/telepresence-linux-amd64
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2oss/releases/download/vx.y.z/telepresence-windows-amd64.exe
+```
+
+
+
+
+ diff --git a/docs/telepresence-oss/latest/install/manager.md b/docs/telepresence-oss/latest/install/manager.md new file mode 100644 index 000000000..9a747d895 --- /dev/null +++ b/docs/telepresence-oss/latest/install/manager.md @@ -0,0 +1,53 @@
+# Install/Uninstall the Traffic Manager
+
+Telepresence uses a Traffic Manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The telepresence CLI can install the Traffic Manager for you. The basic install will install the same version as the client used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a `values.yaml` file with your config values.
+
+2. Run the install command with the `--values` flag set to the path of your values file.
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the `--upgrade` flag.
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+
+## Uninstall
+
+The telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1.
Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ``` diff --git a/docs/telepresence-oss/latest/install/migrate-from-legacy.md b/docs/telepresence-oss/latest/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence-oss/latest/install/migrate-from-legacy.md @@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected into the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server; you could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it.
So you can get started with Telepresence today using the commands you are already used
+to, and it will help you learn the Telepresence syntax as you go.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                       |
+|------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                        |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                    | connect -- bash                            |
+| --run $cmd                                     | connect -- $cmd                            |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                          | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that that flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`), and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent remains. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence-oss/latest/install/upgrade.md b/docs/telepresence-oss/latest/install/upgrade.md new file mode 100644 index 000000000..3cd1a90e7 --- /dev/null +++ b/docs/telepresence-oss/latest/install/upgrade.md @@ -0,0 +1,59 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available.
Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
+
+
+
+
+```shell
+# Intel Macs
+
+# 1. Download the latest binary (~105 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# 1. Download the latest binary (~101 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-arm64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~95 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-linux-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+To upgrade Telepresence, [click here to download the Telepresence binary](https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-windows-amd64.zip).
+
+Once you have the binary downloaded and unzipped, you will need to do a few things:
+
+1. Rename the binary from `telepresence-windows-amd64.exe` to `telepresence.exe`
+2. Move the binary to `C:\Program Files (x86)\$USER\Telepresence\`
+
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster.
+
+ diff --git a/docs/telepresence-oss/latest/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence-oss/latest/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..9a68e9bbf --- /dev/null +++ b/docs/telepresence-oss/latest/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,64 @@
+import queryString from 'query-string';
+import React, { useEffect, useState } from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../../src/components/Icon';
+import Link from '../../../../../src/components/Link';
+
+import './telepresence-quickstart-landing.less';
+
+/** @type React.FC<React.SVGProps<SVGSVGElement>> */
+const RightArrow = (props) => (
+  // The original SVG path data was lost; this is a placeholder right-arrow glyph.
+  <svg viewBox="0 0 24 24" fill="currentColor" {...props}>
+    <path d="M10 17l5-5-5-5v10z" />
+  </svg>
+);
+
+const TelepresenceQuickStartLanding = () => {
+
+  return (
+    <div className="telepresence-quickstart-landing">
+      {/* Link targets and icon name below are placeholders. */}
+      <h1>
+        <Icon name="telepresence-icon" /> Telepresence OSS
+      </h1>
+      <div className="main-title-container">
+        <p>
+          Set up your ideal development environment for Kubernetes in seconds.
+          Accelerate your inner development loop with hot reload using your
+          existing IDE and workflow.
+        </p>
+      </div>
+
+      <div className="demo-cluster-container">
+        <div className="box-container border">
+          <h2 className="subtitle">
+            Install Telepresence and connect to your Kubernetes workloads.
+          </h2>
+          <Link to="../install/" className="get-started blue">
+            Get Started
+          </Link>
+        </div>
+      </div>
+
+      <div>
+        <h2 className="underlined">
+          What Can Telepresence Do for You?
+        </h2>
+        <p>Telepresence gives Kubernetes application developers:</p>
+        <ul>
+          <li>
+            Make changes on the fly and see them reflected when interacting
+            with your remote Kubernetes environment; this is just like hot
+            reloading, but it works across both local and remote environments.
+          </li>
+          <li>
+            Query services and microservice APIs that are only accessible in
+            your remote cluster's network.
+          </li>
+          <li>
+            Set breakpoints in your IDE and re-route remote traffic to your
+            local machine to investigate bugs with realistic user traffic and
+            API calls.
+          </li>
+        </ul>
+        <Link to="../howtos/" className="learn-more blue">
+          LEARN MORE{' '}
+          <RightArrow />
+        </Link>
+      </div>
+    </div>
+  );
+};
+
+export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence-oss/latest/quick-start/go.md b/docs/telepresence-oss/latest/quick-start/go.md new file mode 100644 index 000000000..3c8eff33f --- /dev/null +++ b/docs/telepresence-oss/latest/quick-start/go.md @@ -0,0 +1,190 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import {
+EmojivotoServicesList,
+DCPLink,
+Login,
+LoginCommand,
+DockerCommand,
+PreviewUrl,
+ExternalIp
+} from '../../../../../src/components/Docs/Telepresence';
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence';
+
+
+# Telepresence Quick Start - **Go**
+
+This guide provides a hands-on tutorial for using Telepresence with Go. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally.
+
+If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker.
+
+## 1. Get a free remote cluster
+
+Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster.
+
+1.
+2. Go to the Service Catalog to see all the services deployed on your cluster.
+
+   The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster.
+
+
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting for 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.
+
+   ```go
+   3  import (
+   4    "context"
+   5    "fmt" // DELETE THIS LINE
+   6
+   7    pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac, or `Menu -> File -> Save`) and verify that the error is now fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to route *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic.
The `voting-svc` traffic is now routed to the local Dockerized version of the service: the intercept sends *all the traffic* for `voting-svc` to the fixed version running locally.
+
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by your service, all thanks to the preview URL.
+
+1. First leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.
+
+
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
+## What's Next?
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence-oss/latest/quick-start/index.md b/docs/telepresence-oss/latest/quick-start/index.md new file mode 100644 index 000000000..9a481d427 --- /dev/null +++ b/docs/telepresence-oss/latest/quick-start/index.md @@ -0,0 +1,182 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence Quickstart
+
+Telepresence is an open source tool that enables you to set up remote development environments for Kubernetes where you can still use all of your favorite local tools like IDEs, debuggers, and profilers.
+
+## Prerequisites
+
+ - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool, or the OpenShift Container Platform command-line interface, [oc](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands).
+ - A Kubernetes Deployment and Service.
+
+
+
+ **Don’t have access to a Kubernetes cluster?** Try Telepresence in a free remote Kubernetes cluster without having to mess with your production environment. [Get Started >](https://app.getambassador.io/cloud/welcome?select=developer&utm_source=telepresence&utm_medium=website&utm_campaign=quickstart).
+
+
+
+## Install Telepresence on Your Machine
+
+Install Telepresence by running the relevant commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac/#administrating-telepresence) to install and use the Telepresence traffic-manager in your cluster.
+
+
+
+
+```shell
+# Intel Macs
+
+# 1. Download the latest binary (~105 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# 1. Download the latest binary (~101 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-darwin-arm64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~95 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-linux-amd64 -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+To install Telepresence, [click here to download the Telepresence binary](https://app.getambassador.io/download/tel2oss/releases/download/$dlVersion$/telepresence-windows-amd64.zip).
+
+Once you have the binary downloaded and unzipped, you will need to do a few things:
+
+1. Rename the binary from `telepresence-windows-amd64.exe` to `telepresence.exe`
+2. Move the binary to `C:\Program Files (x86)\$USER\Telepresence\`
+
+
+
+
+
+## Install Telepresence in Your Cluster
+
+1. Install the traffic manager into your cluster with `telepresence helm install`.
More information about installing Telepresence can be found [here](../install/manager). This will require root access on your machine.
+
+```
+$ telepresence helm install
+...
+Traffic Manager installed successfully
+```
+
+## Intercept Your Service
+
+With Telepresence, you can create [global intercepts](../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your remote cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect`, then verify that you can reach the Kubernetes API server:
+
+   ```
+   $ telepresence connect
+   connected to context
+
+   ```
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+   The 401 response is expected when you first connect.
+
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service name> --port <local port>[:<service port name>] --env-file <path to env file>`.
+
+   - For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   - For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests reaching the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+
+   ```
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+The following are some examples of how to pass the environment variables to your local process:
+
+- **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
+- **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+- **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+## 🎉 You've Unlocked a Faster Development Workflow for Kubernetes with Telepresence
+
+Now, with Telepresence, you can:
+
+- Make changes on the fly and see them reflected when interacting with your remote Kubernetes environment; this is just like hot reloading, but it works across both local and remote environments.
+- Query services and microservice APIs that are only accessible in your remote cluster's network.
+- Set breakpoints in your IDE and re-route remote traffic to your local machine to investigate bugs with realistic user traffic and API calls.
+
+
+  **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
+
+
+## What’s Next?
+- [Learn about the Telepresence architecture.](../reference/architecture) \ No newline at end of file diff --git a/docs/telepresence-oss/latest/quick-start/qs-cards.js b/docs/telepresence-oss/latest/quick-start/qs-cards.js new file mode 100644 index 000000000..084af19b3 --- /dev/null +++ b/docs/telepresence-oss/latest/quick-start/qs-cards.js @@ -0,0 +1,68 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+    <Grid container spacing={1} className={classes.root}>
+      {/* Card link targets are placeholders. */}
+      <Grid item xs={12} sm={4}>
+        <Paper className={classes.paper}>
+          <Typography variant="h6">
+            <GatsbyLink to="../howtos/">Collaborating</GatsbyLink>
+          </Typography>
+          <Typography variant="body2">
+            Use personal intercepts to get specific requests when working with colleagues.
+          </Typography>
+        </Paper>
+      </Grid>
+      <Grid item xs={12} sm={4}>
+        <Paper className={classes.paper}>
+          <Typography variant="h6">
+            <GatsbyLink to="../howtos/">Outbound Sessions</GatsbyLink>
+          </Typography>
+          <Typography variant="body2">
+            Control what your laptop can reach in the cluster while connected.
+          </Typography>
+        </Paper>
+      </Grid>
+      <Grid item xs={12} sm={4}>
+        <Paper className={classes.paper}>
+          <Typography variant="h6">
+            <GatsbyLink to="../howtos/">Telepresence for Docker Compose</GatsbyLink>
+          </Typography>
+          <Typography variant="body2">
+            Develop in a hybrid local/cluster environment using Telepresence for Docker Compose.
+          </Typography>
+        </Paper>
+      </Grid>
+    </Grid>
+  );
+}
 diff --git a/docs/telepresence-oss/latest/quick-start/telepresence-quickstart-landing.less b/docs/telepresence-oss/latest/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence-oss/latest/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@
+@import '~@src/components/Layout/vars.less';
+
+.doc-body .telepresence-quickstart-landing {
+  font-family: @InterFont;
+  color: @black;
+  margin: -8.4px auto 48px;
+  max-width: 1050px;
+  min-width: @docs-min-width;
+  width: 100%;
+
+  h1 {
+    color: @blue-dark;
+    font-weight: normal;
+    letter-spacing: 0.25px;
+    font-size: 33px;
+    margin: 0 0 15px;
+  }
+  p {
+    font-size: 0.875rem;
+    line-height: 24px;
+    margin: 0;
+    padding: 0;
+  }
+
+  .demo-cluster-container {
+    display: grid;
+    margin: 40px 0;
+    grid-template-columns: 1fr;
+    @media screen and (max-width: 900px) {
+      grid-template-columns: repeat(1, 1fr);
+    }
+  }
+  .main-title-container {
+    display: flex;
+    flex-direction: column;
+    align-items: center;
+    p {
+      text-align: center;
+      font-size: 0.875rem;
+    }
+  }
+  h2 {
+    font-size: 23px;
+    color: @black;
+    margin: 0 0 20px 0;
+    padding: 0;
+    &.underlined {
+      padding-bottom: 2px;
+      border-bottom: 3px solid @grey-separator;
+      text-align: center;
+    }
+    strong {
+      font-weight: 800;
+    }
+    &.subtitle {
+      margin-bottom: 10px;
+      font-size: 19px;
+      line-height: 28px;
+    }
+  }
+  .learn-more,
+  .get-started {
+    font-size: 14px;
+    font-weight: 600;
+    letter-spacing: 1.25px;
+    display: flex;
+    align-items: center;
+    text-decoration: none;
+    &.inline {
+      display: inline-block;
+      text-decoration: underline;
+      font-size: unset;
+      font-weight: normal;
+      &:hover {
+        text-decoration: none;
+      }
+    }
+    &.blue {
+      color: @blue-5;
+    }
+    &.blue:hover {
+      color: @blue-dark;
+    }
+  }
+
+  .learn-more {
+    margin-top: 20px;
+    padding: 13px 0;
+  }
+
+  .box-container {
+    &.border {
+      border: 1.5px solid @grey-separator;
+      border-radius: 5px;
+      padding: 10px;
+    }
+    &::before {
+      content: '';
+      position: absolute;
+      width: 14px;
+      height: 14px;
+      border-radius: 50%;
+      top: 0;
+      left: 50%;
+      transform: translate(-50%, -50%);
+    }
+    p {
+      font-size: 0.875rem;
+      line-height: 24px;
+      padding: 0;
+    }
+  }
+
+  .telepresence-video {
+    border: 2px solid @grey-separator;
+    box-shadow: -6px 12px 0px fade(@black, 12%);
+    border-radius: 8px;
+    padding: 18px;
+    h2.telepresence-video-title {
+      font-weight: 400;
+      font-size: 23px;
+      line-height: 33px;
+      color: @blue-6;
+    }
+  }
+
+  .video-section {
+    display: grid;
+    grid-template-columns: 1fr 1fr;
+    column-gap: 20px;
+    @media screen and (max-width: 800px) {
+      grid-template-columns: 1fr;
+    }
+    ul {
+      font-size: 14px;
+      margin: 0 10px 6px 0;
+    }
+    .video-container {
+      position: relative;
+      padding-bottom: 56.25%; // 16:9 aspect ratio
+      height: 0;
+      iframe {
+        position: absolute;
+        top: 0;
+        left: 0;
+        width: 100%;
+        height: 100%;
+      }
+    }
+  }
+} diff --git a/docs/telepresence-oss/latest/redirects.yml b/docs/telepresence-oss/latest/redirects.yml new file mode 100644 index 000000000..c73de44b4 --- /dev/null +++ b/docs/telepresence-oss/latest/redirects.yml @@ -0,0 +1,6 @@
+- {from: "", to: "quick-start"}
+- {from: /docs/telepresence/v2.15/quick-start/qs-go, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-java, to: /docs/telepresence/v2.15/quickstart/}
+- {from: /docs/telepresence/v2.15/quick-start/qs-node, to: /docs/telepresence/v2.15/quickstart/}
+- {from: 
/docs/telepresence/v2.15/quick-start/qs-python, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-python-fastapi, to: /docs/telepresence/v2.15/quickstart/} diff --git a/docs/telepresence-oss/latest/reference/architecture.md b/docs/telepresence-oss/latest/reference/architecture.md new file mode 100644 index 000000000..2ab163fd1 --- /dev/null +++ b/docs/telepresence-oss/latest/reference/architecture.md @@ -0,0 +1,64 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../images/TP_Architecture.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the cluster's
+network, handling intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment, and if it is missing, it will attempt to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
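+
+To confirm that a pod has the sidecar, listing its containers is enough; a minimal sketch, assuming `kubectl` access to the cluster and a placeholder `<pod-name>`:
+
+```shell
+# Print the names of the pod's containers; an intercepted pod will
+# list "traffic-agent" alongside the application container.
+kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
+```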
+
+
+ diff --git a/docs/telepresence-oss/latest/reference/client.md b/docs/telepresence-oss/latest/reference/client.md new file mode 100644 index 000000000..d121953b4 --- /dev/null +++ b/docs/telepresence-oss/latest/reference/client.md @@ -0,0 +1,26 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster and start and stop intercepts. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+|---------|-------------|
+| `connect` | Starts the local daemons, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service name> --port <port>` (use `<port>/UDP` to force UDP). This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `loglevel` | Temporarily change the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gather logs from the traffic-manager, traffic-agents, and the user and root daemons, and export them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs |
+| `version` | Show the version of the Telepresence CLI + Traffic Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+
 diff --git a/docs/telepresence-oss/latest/reference/cluster-config.md b/docs/telepresence-oss/latest/reference/cluster-config.md new file mode 100644 index 000000000..453ee3713 --- /dev/null +++ b/docs/telepresence-oss/latest/reference/cluster-config.md @@ -0,0 +1,263 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).
+
+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.
+
+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.
+
+### Values
+To add configuration, create a YAML file with the configuration values and then use it when executing `telepresence helm install [--upgrade] --values <values file>`
+
+## Client Configuration
+
+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration)
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value        | Resulting action |
+|--------------|------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port |
+| `http`       | Telepresence will use http |
+| `http2`      | Telepresence will use http2 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`. The following protocols
+are recognized:
+
+| Protocol | Meaning |
+|----------|---------|
+| `http`   | Plaintext HTTP/1.1 traffic |
+| `http2`  | Plaintext HTTP/2 traffic |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2 |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Image Configuration
+
+The `agent.image` structure contains the following values:
+
+| Setting    | Meaning |
+|------------|---------|
+| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". |
+| `name`     | The name of the image. Retrieved from Ambassador Cloud if not set. |
+| `tag`      | The tag of the image. Retrieved from Ambassador Cloud if not set. |
+
+#### Log level
+
+The `agent.logLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.
+
+#### Resources
+
+The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.
+
+## TLS
+
+In this example, other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`), you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod. The Secret will be mounted into the traffic agent's container.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
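+
+One way to confirm that the injection took place after your first intercept is to list the containers of the workload's pod; the pod and container names below are placeholders:
+
+```console
+$ kubectl get pod your-service-abc123 -o jsonpath='{.spec.containers[*].name}'
+your-container traffic-agent
+```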
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Ignore Certain Volume Mounts
+
+The annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.
+
+```diff
+ spec:
+   template:
+     metadata:
+       annotations:
++        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service is pointing at a port number, then in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).
+
+For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: your-service
+spec:
+  type: ClusterIP
+  selector:
+    service: your-service
+  ports:
+    - port: 80
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: your-service
+  labels:
+    service: your-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: your-service
+  template:
+    metadata:
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
+      labels:
+        service: your-service
+    spec:
+      containers:
+        - name: your-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+```
+
+## Excluding Environment Variables
+
+If your pod contains sensitive variables like a database password or a third-party API key, you may want to exclude those from being propagated through an intercept.
+Telepresence allows you to configure this through a ConfigMap that is then read, so that the sensitive variables can be removed.
+
+This can be done in two ways:
+
+When installing your traffic-manager through Helm, you can use the `--set` flag and pass a comma-separated list of variables:
+
+`telepresence helm install --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`
+
+This also applies when upgrading:
+
+`telepresence helm upgrade --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`
+
+Once this is done, the excluded environment variables will no longer appear in the environment file created by an intercept.
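+
+One way to sanity-check the exclusion is to create an environment file from an intercept and search it for one of the excluded variables (the service name here is an example):
+
+```console
+$ telepresence intercept your-service --port 8080 --env-file your-service.env
+$ grep DATABASE_PASSWORD your-service.env || echo "excluded as expected"
+excluded as expected
+```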
+
+The other way to do this is in your custom `values.yaml`. See [here](../../install/manager) for how to customize your traffic-manager through a values file.
+
+```yaml
+intercept:
+  environment:
+    excluded: ['DATABASE_PASSWORD', 'API_KEY']
+```
+
+You can exclude any number of variables; each entry just needs to match the `key` of a variable within a pod to be excluded.
\ No newline at end of file
diff --git a/docs/telepresence-oss/latest/reference/config.md b/docs/telepresence-oss/latest/reference/config.md
new file mode 100644
index 000000000..c8e6d1c03
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/config.md
@@ -0,0 +1,321 @@
+# Laptop-side configuration
+
+There are a number of configuration values that can be tweaked to change how Telepresence behaves.
+These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
+One important exception is the location of the traffic manager itself, which, if it differs from the default of `ambassador`, [must be set](#manager) locally per cluster to be able to connect.
+
+## Global Configuration
+
+Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
+To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.
+
+### Values
+
+The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**
+
+```yaml
+client:
+  timeouts:
+    agentInstall: 1m
+    intercept: 10s
+  logLevels:
+    userDaemon: debug
+  images:
+    registry: privateRepo # This overrides the default docker.io/datawire repo
+    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+  cloud:
+    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+  grpc:
+    maxReceiveSize: 10Mi
+  telepresenceAPI:
+    port: 9980
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    lookupTimeout: 30s
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Timeouts
+
+Values for `client.timeouts` are all durations, either as a number of seconds
+or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
+can be fractional (`1.5h`) or combined (`2h45m`).
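+
+For instance, all of the following are valid ways to express a timeout (the values themselves are arbitrary examples):
+
+```yaml
+client:
+  timeouts:
+    intercept: 30       # a plain number of seconds
+    apply: 45s          # a duration string
+    agentInstall: 1.5m  # fractional
+    helm: 2m45s         # combined
+```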
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `agentInstall` | Waiting for the Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for the cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `client.logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+
+For whichever log level you select, you will get logs labeled with that level and of higher severity
+(e.g., if you use `info`, you will also get logs labeled `error`, but NOT logs labeled `debug`).
+
+These are the valid fields for the `client.logLevels` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm) to prevent them
+from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
+
+These are the valid fields for the `client.images` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
+| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |
+
+#### gRPC
+The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+### Intercept
+The `intercept` key controls how Telepresence will intercept the communications to the intercepted service.
+
+The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".
+
+The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:
+
+| Value        | Resulting action |
+|--------------|------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2 |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port |
+| `http`       | Telepresence will use http |
+| `http2`      | Telepresence will use http2 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`. The following protocols are recognized:
+
+| Protocol | Meaning |
+|----------|---------|
+| `http`   | Plaintext HTTP/1.1 traffic |
+| `http2`  | Plaintext HTTP/2 traffic |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2 |
+
+### Daemons
+
+`client.daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+### DNS
+
+The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fall back, in the case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookupTimeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+Here is an example values.yaml:
+```yaml
+client:
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    localIP: 8.8.8.8
+    lookupTimeout: 30s
+```
+
+### Routing
+
+#### AlsoProxySubnets
+
+When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
+All connections to addresses that these subnets span will be dispatched to the cluster.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+```yaml
+client:
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### NeverProxySubnets
+
+When using `neverProxySubnets`, you provide a list of subnets.
These will never be routed via the TUN device,
+even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+Telepresence connects is the route they will keep.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing precedence applies.
+Usually this means that the most specific route will win.
+
+So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:
+
+```yaml
+neverProxySubnets: [10.0.0.0/16]
+alsoProxySubnets: [10.0.5.0/24]
+```
+
+Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:
+
+```yaml
+alsoProxySubnets: [10.0.0.0/16]
+neverProxySubnets: [10.0.5.0/24]
+```
+
+Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.
+
+## Local Overrides
+
+In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
+There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
+that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.
+
+### Config for all clusters
+Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
+The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
+The definitions of these values are identical to those values in the `client` config above.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+cloud:
+  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+grpc:
+  maxReceiveSize: 10Mi
+telepresenceAPI:
+  port: 9980
+```
+
+## Workstation Per-Cluster Configuration
+
+Configuration that is specific to a cluster can also be overridden per workstation by modifying your `$KUBECONFIG` file.
+It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
+that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
+An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.
+
+### Values
+
+The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
+
+Example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+        dns:
+          include-suffixes: [.private]
+          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
+          local-ip: 8.8.8.8
+          lookup-timeout: 30s
+        never-proxy: [10.0.0.0/16]
+        also-proxy: [10.0.5.0/24]
+  name: example-cluster
+```
+
+#### Manager
+
+This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
+the Traffic Manager. When it differs from the default, this setting needs to be configured in the workstation's kubeconfig for the cluster.
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+        - name: telepresence.io
+          extension:
+            manager:
+              namespace: staging
+    name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence-oss/latest/reference/dns.md b/docs/telepresence-oss/latest/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name alone.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<namespace>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` alone. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster public IP>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  Emoji Vote
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  Emoji Vote
+  ...
+```
+
+Now curling that service by its short name works, and it will keep working as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence-oss/latest/reference/docker-run.md b/docs/telepresence-oss/latest/reference/docker-run.md
new file mode 100644
index 000000000..fb1b7d8d0
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/docker-run.md
@@ -0,0 +1,87 @@
+---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+## Using command flags
+
+### The docker flag
+You can start the Telepresence daemon in a Docker container on your laptop using the command:
+
+```console
+$ telepresence connect --docker
+```
+
+The `--docker` flag is a global flag; if it is passed directly, as in `telepresence intercept --docker`, then the implicit connect that takes place when no connection is active will use a container-based daemon.
+
+### The docker-run flag
+
+If you want your intercept to go to another Docker container, you can use the `--docker-run` flag. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+```console
+$ telepresence intercept <service name> --port <port> --docker-run -- <image>
+```
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+It's recommended that you always use `--docker-run` in combination with the global `--docker` flag, because that makes everything less intrusive:
+- No admin user access is needed. Network modifications are confined to a Docker network.
+- There's no need for special filesystem mount software like MacFUSE or WinFSP. The volume mounts happen in the Docker engine.
+
+The following happens under the hood when both flags are in use:
+
+- The network for the intercept handler will be set to the same network used by the daemon.
This guarantees that the
+  intercept handler can access the Telepresence VIF, and hence has access to the cluster.
+- Volume mounts will be automatic and made using the Telemount Docker volume plugin so that all volumes exposed by the intercepted
+  container are mounted on the intercept handler container.
+- The environment of the intercepted container becomes the environment of the intercept handler container.
+
+### The docker-build flag
+
+The `--docker-build <docker context>` flag and the repeatable `--docker-build-opt key=value` flag enable containers to be built on the fly by the intercept command.
+
+When using `--docker-build`, the image name used in the argument list must be verbatim `IMAGE`. The word acts as a placeholder and will be replaced by the ID of the image that is built.
+
+The `--docker-build` flag implies `--docker-run`.
+
+## Using the docker-run flag without docker
+
+It is possible to use `--docker-run` with a daemon running on your host, which is the default behavior of Telepresence.
+
+However, it isn't recommended, since you'll be in a hybrid mode: while your intercept runs in a container, the daemon will modify the host network, and if remote mounts are desired, they may require extra software.
+
+The ability to use this special combination is retained for backward compatibility reasons. It might be removed in a future release of Telepresence.
+
+The `--port` flag has slightly different semantics and can be used in situations when the local and container ports must be different. This
+is done using `--port <local port>:<container port>`. The container port will default to the local port when using the `--port <port>` syntax.
+
+## Examples
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+```console
+$ telepresence intercept --docker frontend-v1 --port 8000 --docker-run -- frontend-v2
+```
+
+Now, imagine that the `frontend-v2` image is built by a `Dockerfile` that resides in the directory `images/frontend-v2`. You can build and intercept directly.
+
+```console
+$ telepresence intercept --docker frontend-v1 --port 8000 --docker-build images/frontend-v2 --docker-build-opt tag=mytag -- IMAGE
+```
+
+## Automatic flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--env-file <file>` Loads the intercepted environment
+- `--name intercept-…` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-v <mount specification>` Volume mount specification; see the CLI help for the `--docker-mount` flags for more info
+
+When used with a container-based daemon:
+- `--rm` Mandatory, because the volume mounts cannot be removed until the container is removed.
+- `-v <volume name>:<container mount>` Volume mount specifications propagated from the intercepted container
+
+When used with a daemon that isn't container-based:
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `-p <local port>:<container port>` The local port for the intercept and the container port
diff --git a/docs/telepresence-oss/latest/reference/environment.md b/docs/telepresence-oss/latest/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code for the intercepted service that runs on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
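+
+As a quick illustration (the service name and printed values are examples), the subshell form makes both the pod's variables and the Telepresence-added ones visible:
+
+```console
+$ telepresence intercept hello --port 8000 -- /bin/bash
+$ echo $TELEPRESENCE_CONTAINER    # which container was intercepted
+hello-container
+$ echo $TELEPRESENCE_ROOT         # where remote volume mounts are rooted
+/tmp/telfs-166119801
+$ exit                            # exiting the shell also ends the intercept
+```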
diff --git a/docs/telepresence-oss/latest/reference/inside-container.md b/docs/telepresence-oss/latest/reference/inside-container.md
new file mode 100644
index 000000000..230823ec6
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/inside-container.md
@@ -0,0 +1,12 @@
+# Running Telepresence inside a container
+
+All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a
+Docker container.
+
+Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and
+it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software
+like macFUSE or WinFSP to mount the remote file systems.
+
+The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only
+way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.
+
diff --git a/docs/telepresence-oss/latest/reference/intercepts/cli.md b/docs/telepresence-oss/latest/reference/intercepts/cli.md
new file mode 100644
index 000000000..40a511d2f
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/intercepts/cli.md
@@ -0,0 +1,313 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Configuring intercept using CLI
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option. When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../../environment) for more details.
+
+## Creating an intercept
+
+The following command will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept <deployment name> --port=<TCP port>
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept <deployment name> --port=<TCP port>
+Using Deployment <deployment name>
+intercepted
+    Intercept name: <full name of intercept>
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:<local TCP port>
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp("<intercept id>:<full name of intercept>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster public IP>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <laptop user>@<laptop name>
+```
+
+Finally, run `telepresence leave <name of intercept>` to stop the intercept.
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.
+
+| Flag             | Description                                                       | Required |
+|------------------|-------------------------------------------------------------------|----------|
+| `--ingress-host` | The IP address for the ingress                                    | yes      |
+| `--ingress-port` | The port for the ingress                                          | yes      |
+| `--ingress-tls`  | Whether TLS should be used                                        | no       |
+| `--ingress-l5`   | Whether a different IP address should be used in request headers  | no       |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself. To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept <deployment name> --port=<local port>:<service port name>
+Using Deployment <deployment name>
+intercepted
+    Intercept name         : <full name of intercept>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local port>
+    Service Port Identifier: <service port name>
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above and it will change which
+service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag.
So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout-<generated hash> --port <port> --service echo-stable
+Using ReplicaSet echo-rollout-<generated hash>
+intercepted
+    Intercept name    : echo-rollout-<generated hash>
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Intercepting multiple ports
+
+It is possible to intercept more than one service and/or service port that are using the same workload. You do this
+by creating more than one intercept that identifies the same workload using the `--workload` flag.
+
+Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
+targeting the same `multi-echo` deployment.
+
+```console
+$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-http
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8080
+    Service Port Identifier: http
+    Volume Mount Point     : /tmp/telfs-893700837
+    Intercepting           : all TCP requests
+    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-grpc
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8443
+    Service Port Identifier: grpc
+    Volume Mount Point     : /tmp/telfs-1277723591
+    Intercepting           : all TCP requests
+    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept <deployment name> --port=<local port>:<service port name> --to-pod=<sidecar port>
+Using Deployment <deployment name>
+intercepted
+    Intercept name         : <full name of intercept>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local port>
+    Service Port Identifier: <service port name>
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod=<port1> --to-pod=<port2>`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE
+
+Intercepting headless services without a selector is not supported.
+
+## Specifying the intercept traffic target
+
+By default, it's assumed that your local app is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP
+at the port given by `--port`. If you wish to change this behavior and send traffic to a different IP address, you can use the `--address` parameter
+to `telepresence intercept`. Say your machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`. You would run this as:
+
+```console
+$ telepresence intercept echo-easy --address 172.16.0.19 --port 8080
+Using Deployment echo-easy
+   Intercept name         : echo-easy
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 172.16.0.19:8080
+   Service Port Identifier: proxied
+   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: 8e0dd8ea-b55a-43bd-ad04-018b9de9cfab:echo-easy'
+   Preview URL            : https://laughing-curran-5375.preview.edgestack.me
+   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
+```
diff --git a/docs/telepresence-oss/latest/reference/intercepts/index.md b/docs/telepresence-oss/latest/reference/intercepts/index.md
new file mode 100644
index 000000000..c63203587
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/intercepts/index.md
@@ -0,0 +1,29 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, the Telepresence Traffic Manager ensures
+that a Traffic Agent has been injected into the intercepted workload.
+The injection is triggered by a Kubernetes Mutating Webhook and will
+only happen once. The Traffic Agent is responsible for redirecting
+intercepted traffic to the developer's workstation.
+
+## Global intercept
+This intercept will intercept all `tcp` and/or `udp` traffic to the
+intercepted service and send all of that traffic down to the developer's
+workstation.
This means that a global intercept will affect all users of
+the intercepted service.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+
diff --git a/docs/telepresence-oss/latest/reference/intercepts/manual-agent.md b/docs/telepresence-oss/latest/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
+as when very strict company security policies prevent it.
+
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable;
+try the Mutating Webhook before proceeding.
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file have
+the same name as the service, and no extension:
+
+```console
+$ telepresence genyaml config --workload my-service -o /tmp/my-service
+$ cat /tmp/my-service-config.yaml
+agentImage: docker.io/datawire/tel2:2.7.0
+agentName: my-service
+containers:
+- Mounts: null
+  envPrefix: A_
+  intercepts:
+  - agentPort: 9900
+    containerPort: 8080
+    protocol: TCP
+    serviceName: my-service
+    servicePort: 80
+    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
+  mountPoint: /tel_app_mounts/echo-container
+  name: echo-container
+logLevel: info
+managerHost: traffic-manager.ambassador
+managerPort: 8081
+manual: true
+namespace: default
+workloadKind: Deployment
+workloadName: my-service
+```
+
+Next, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
+$ cat /tmp/my-service-agent.yaml
+args:
+- agent
+env:
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: status.podIP
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+- mountPath: /tel_app_exports
+  name: export-volume
+```
+
+Next, generate the YAML for the init-container:
+
+```console
+$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
+$ cat /tmp/my-service-init.yaml
+args:
+- agent-init
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: tel-agent-init
+resources: {}
+securityContext:
+  capabilities:
+    add:
+    - NET_ADMIN
+volumeMounts:
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+```
+
+Next, generate the YAML for the volumes:
+
+```console
+$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
+$ cat /tmp/my-service-volume.yaml
+- downwardAPI:
+    items:
+    - fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.annotations
+      path: annotations
+  name: traffic-annotations
+- configMap:
+    items:
+    - key: my-service
+      path: config.yaml
+    name: telepresence-agents
+  name: traffic-config
+- emptyDir: {}
+  name: export-volume
+```
+
+Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
+
+### 2. Creating (or updating) the configmap
+
+The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
+modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:
+
+```console
+$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
+```
+
+If it already exists, new entries can be added under the `data` key using `kubectl edit configmap telepresence-agents`.
+
+### 3. Injecting the YAML into the Deployment
+
+You now need to update the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements
+of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation
+`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following:
+
+```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: "my-service"
+   labels:
+     service: my-service
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       service: my-service
+   template:
+     metadata:
+       labels:
+         service: my-service
++      annotations:
++        telepresence.getambassador.io/manually-injected: "true"
+     spec:
+       containers:
+       - name: echo-container
+         image: jmalloc/echo-server
+         ports:
+         - containerPort: 8080
+         resources: {}
++      - args:
++        - agent
++        env:
++        - name: _TEL_AGENT_POD_IP
++          valueFrom:
++            fieldRef:
++              apiVersion: v1
++              fieldPath: status.podIP
++        image: docker.io/datawire/tel2:2.7.0-beta.12
++        name: traffic-agent
++        ports:
++        - containerPort: 9900
++          protocol: TCP
++        readinessProbe:
++          exec:
++            command:
++            - /bin/stat
++            - /tmp/agent/ready
++        resources: { }
++        volumeMounts:
++        - mountPath: /tel_pod_info
++          name: traffic-annotations
++        - mountPath: /etc/traffic-agent
++          name: traffic-config
++        - mountPath: /tel_app_exports
++          name: export-volume
++      initContainers:
++      - args:
++        - agent-init
++        image: docker.io/datawire/tel2:2.7.0-beta.12
++        name: tel-agent-init
++        resources: { }
++        securityContext:
++          capabilities:
++            add:
++            - NET_ADMIN
++        volumeMounts:
++        - mountPath: /etc/traffic-agent
++          name: traffic-config
++      volumes:
++      - downwardAPI:
++          items:
++          - fieldRef:
++              apiVersion: v1
++              fieldPath: metadata.annotations
++            path: annotations
++        name: traffic-annotations
++      - configMap:
++          items:
++          - key: my-service
++            path: config.yaml
++          name: telepresence-agents
++        name: traffic-config
++      - emptyDir: { }
++        name: export-volume
+```
diff --git a/docs/telepresence-oss/latest/reference/linkerd.md b/docs/telepresence-oss/latest/reference/linkerd.md
new file mode 100644
index 000000000..9b903fa76
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/linkerd.md
@@ -0,0 +1,75 @@
+---
+Description: "How to get Linkerd meshed services working with Telepresence"
+---
+
+# Using Telepresence with Linkerd
+
+## Introduction
+Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:
+
+```yaml
+spec:
+  template:
+    metadata:
+      annotations:
+        config.linkerd.io/skip-outbound-ports: "8081"
+```
+
+The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.
+
+## Prerequisites
+1. [Telepresence binary](../../install)
+2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
+3. Kubectl
+4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)
+
+## Deploy
+Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence-oss/latest/reference/rbac.md b/docs/telepresence-oss/latest/reference/rbac.md
new file mode 100644
index 000000000..d78133441
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/rbac.md
@@ -0,0 +1,236 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories of cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, both described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also the consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
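+    ## A minimal stanza typically provides the API server URL and its CA;
+    ## the values below are hypothetical and shown only as a sketch:
+    # server: https://my-cluster.example.com:6443
+    # certificate-authority-data: <base64-encoded CA certificate>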
+contexts:
+- name: my-context
+  context:
+    cluster: my-cluster # Must match the name field in the clusters config
+    user: tp-user
+users:
+- name: tp-user # Must match the name of the Service Account created by the cluster admin
+  user:
+    token: # See note below
+```
+
+The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<uuid>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After creating `config.yaml` in your current directory, export the file's location to `KUBECONFIG` by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administrating Telepresence
+
+Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telepresence-admin
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-admin-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "pods/log"]
+    verbs: ["get", "list", "create", "delete", "watch"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "list", "update", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+  - apiGroups: ["apps"]
+    resources: ["deployments", "replicasets", "statefulsets"]
+    verbs: ["get", "list", "update", "create", "delete", "watch"]
+  - apiGroups: ["rbac.authorization.k8s.io"]
+    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "delete"]
+    resourceNames: ["telepresence-agents"]
+  - apiGroups: [""]
+    resources: ["namespaces"]
+    verbs: ["get", "list", "watch", "create"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "create", "list", "delete"]
+  - apiGroups: [""]
+    resources: ["serviceaccounts"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: ["admissionregistration.k8s.io"]
+    resources: ["mutatingwebhookconfigurations"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["list", "get", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-clusterrolebinding
+subjects:
+  - name: telepresence-admin
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-admin-role
+  kind: ClusterRole
+```
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).
+
+By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist.
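+For the Helm route, a minimal sketch looks like the following; the repo URL and chart name follow the Telepresence Helm installation guide linked above, so treat the exact flags as assumptions and prefer that guide:
+
+```console
+$ helm repo add datawire https://app.getambassador.io
+$ helm install traffic-manager datawire/telepresence --create-namespace --namespace ambassador
+```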
If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+# For gather-logs command
+- apiGroups: [""]
+  resources: ["pods/log"]
+  verbs: ["get"]
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["list"]
+# Needed in order to maintain a list of workloads
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["namespaces", "services"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+### Traffic Manager connect permission
+In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
+in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
+```yaml
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traffic-manager-connect
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: traffic-manager-connect
+subjects:
+  - name: telepresence-test-developer
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: traffic-manager-connect
+  kind: Role
+```
+
+## Namespace only telepresence user access
+
+This section describes RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to one or more specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: tp-namespace # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
diff --git a/docs/telepresence-oss/latest/reference/restapi.md b/docs/telepresence-oss/latest/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting the `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
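+As a rough sketch (assuming an intercept is active and that both environment variables used below are present in the propagated intercept environment), the header can be built directly from what Telepresence injects:
+
+```console
+$ curl -H "x-telepresence-caller-intercept-id: ${TELEPRESENCE_INTERCEPT_ID}" \
+    "localhost:${TELEPRESENCE_API_PORT}/consume-here?path=/api"
+```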
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active.
+2. The "/healthz" endpoint must respond with OK.
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is always omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence-oss/latest/reference/routing.md b/docs/telepresence-oss/latest/reference/routing.md
new file mode 100644
index 000000000..9c88402d0
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
+[cluster DNS configuration](../cluster-config).
+
+#### Cluster side DNS lookups
+The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
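+On such systems, `systemd-resolved`'s own tooling can be used to inspect what Telepresence has configured; the link name and domains in this sketch are illustrative only:
+
+```console
+$ resolvectl domain
+Global:
+Link 2 (eth0):
+Link 14 (tel0): ~cluster.local ~ambassador ~default
+```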
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a Docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (a shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't; in that case, Telepresence falls back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts.
If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach: Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check whether such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries believed to be recursions are detected and answered with an NXNAME record. This detection is best-effort and may not be completely accurate in all situations: there's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions would occur if the VIF simply dispatched such attempts on to the cluster.
+
+The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this was accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2 this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:
+<a name="namespacelimit"></a>
+1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+
+<a name="servicesubnet"></a>
+2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence-oss/latest/reference/tun-device.md b/docs/telepresence-oss/latest/reference/tun-device.md
new file mode 100644
index 000000000..af7e3828c
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP traffic.
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle`, but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a Pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed, neither in the client nor in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence-oss/latest/reference/volume.md b/docs/telepresence-oss/latest/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
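+As a sketch of what you can do from that subshell, assuming the intercepted Pod has the default service-account token volume (paths here are illustrative), its contents appear under the mount point:
+
+```console
+$ ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
+ca.crt  namespace  token
+```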
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or via the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service name> --port <port> --mount=true -- /bin/bash
+Using Deployment <deployment name>
+intercepted
+    Intercept name    : <intercept name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend the `$TELEPRESENCE_ROOT` environment variable to the paths it uses in order to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
diff --git a/docs/telepresence-oss/latest/reference/vpn.md b/docs/telepresence-oss/latest/reference/vpn.md
new file mode 100644
index 000000000..457cc873c
--- /dev/null
+++ b/docs/telepresence-oss/latest/reference/vpn.md
@@ -0,0 +1,89 @@
+
+
+# Telepresence and VPNs
+
+It is often important to set up Kubernetes API server endpoints to be only accessible via a VPN.
+In setups like these, users need to connect first to their VPN, and then use Telepresence to connect
+to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use,
+the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts.
+
+The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed.
+
+## VPN Configuration
+
+Let's begin by reviewing what a VPN does and imagining a sample configuration that might come
+into conflict with Telepresence.
+Usually, a VPN client adds two kinds of routes to your machine when you connect.
+The first serves to override your default route; in other words, it makes sure that packets
+you send out to the public internet go through the private tunnel instead of your
+ethernet or wifi adapter. We'll call this a `public VPN route`.
+The second kind of route is a `private VPN route`. These are the routes that allow your
+machine to access hosts inside the VPN that are not accessible to the public internet.
+Generally speaking, this is a more circumscribed route that will connect your machine
+only to reachable hosts on the private network, such as your Kubernetes API server.
+
+This diagram represents what happens when you connect to a VPN, supposing that your
+private network spans the CIDR range `10.0.0.0/8`.
+
+![VPN routing](../images/vpn-routing.jpg)
+
+## Kubernetes configuration
+
+One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services.
+This is one of the key elements of Kubernetes networking, as it allows applications on the cluster
+to reach each other. When Telepresence connects you to the cluster, it will try to connect you
+to the IP addresses that your cluster assigns to services and pods.
+Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes
+cluster will place resources in. Let's imagine your cluster is configured to place services in
+`10.130.0.0/16` and pods in `10.132.0.0/16`:
+
+![VPN Kubernetes config](../images/vpn-k8s-config.jpg)
+
+## Telepresence conflicts
+
+When you run `telepresence connect` to connect to a cluster, it talks to the API server
+to figure out what pod and service CIDRs it needs to map in your machine. If it detects
+that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an
+error and inform you of the conflicting subnets:
+
+```console
+$ telepresence connect
+telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1"
+```
+
+To resolve this, you'll need to carefully consider what your network layout looks like.
+Telepresence is refusing to map these conflicting subnets because mapping them
+could render certain hosts that are inside the VPN completely unreachable. However,
+you (or your network admin) know better than anyone how hosts are spread out inside your VPN.
+Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only
+being spun up in one of the subblocks of the `/8` space.
Let's say, for example, +that you happen to know that all your hosts in the VPN are bunched up in the first +half of the space -- `10.0.0.0/9` (and that you know that any new hosts will +only be assigned IP addresses from the `/9` block). In this case you +can configure Telepresence to override the other half of this CIDR block, which is where the +services and pods happen to be. +To do this, all you have to do is configure the `client.routing.allowConflictingSubnets` flag +in the Telepresence helm chart. You can do this directly via `telepresence helm upgrade`: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}" +``` + +You can also choose to be more specific about this, and only allow the CIDRs that you KNOW +are in use by the cluster: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}" +``` + +The end result of this (assuming an allow list of `/9`) will be a configuration like this: + +![VPN Telepresence](../images/vpn-with-tele.jpg) + +
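+A quick way to verify the new configuration (commands only; output will vary with your setup) is to reconnect and inspect the connection state:
+
+```console
+$ telepresence quit
+$ telepresence connect
+$ telepresence status
+```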
diff --git a/docs/telepresence-oss/latest/release-notes/no-ssh.png b/docs/telepresence-oss/latest/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/no-ssh.png differ diff --git a/docs/telepresence-oss/latest/release-notes/run-tp-in-docker.png b/docs/telepresence-oss/latest/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.2.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git 
a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 
100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git 
a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence-oss/latest/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence-oss/latest/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence-oss/latest/release-notes/tunnel.jpg b/docs/telepresence-oss/latest/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence-oss/latest/release-notes/tunnel.jpg differ diff --git a/docs/telepresence-oss/latest/releaseNotes.yml b/docs/telepresence-oss/latest/releaseNotes.yml new file mode 100644 index 000000000..34168185a --- /dev/null +++ b/docs/telepresence-oss/latest/releaseNotes.yml @@ -0,0 +1,2356 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. 
+# - docs: The path to the documentation page where additional information can be found.
+# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link.
+
+docTitle: Telepresence Release Notes
+docDescription: >-
+  Release notes for Telepresence by Ambassador Labs, a CNCF project
+  that enables developers to iterate rapidly on Kubernetes
+  microservices by arming them with infinite-scale development
+  environments, access to instantaneous feedback loops, and highly
+  customizable development environments.
+
+changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md
+
+items:
+  - version: 2.15.1
+    date: "2023-09-08"
+    notes:
+      - type: security
+        title: Rebuild with go 1.21.1
+        body: >-
+          Rebuild Telepresence with go 1.21.1 to address CVEs.
+      - type: security
+        title: Set security context for traffic agent
+        body: >-
+          Openshift users reported that the traffic agent injection was failing due to a missing security context.
+  - version: 2.15.0
+    date: "2023-08-28"
+    notes:
+      - type: security
+        title: Add ASLR to telepresence binaries
+        body: >-
+          ASLR hardens binary security against fixed memory attacks.
+      - type: feature
+        title: Added client builds for arm64 architecture.
+        body: >-
+          Updated the release workflow files in github actions to include building and publishing the client binaries for arm64 architecture.
+        docs: https://github.com/telepresenceio/telepresence/issues/3259
+      - type: bugfix
+        title: KUBECONFIG env var can now be used with the docker mode.
+        body: >-
+          If provided, the KUBECONFIG environment variable was passed to the kubeauth-foreground service as a parameter.
+          However, since it didn't exist, the CLI was throwing an error when using telepresence connect --docker.
+        docs: https://github.com/telepresenceio/telepresence/pull/3300
+      - type: bugfix
+        title: Fix deadlock while watching workloads
+        body: >-
+          The telepresence list --output json-stream wasn't releasing the session's lock after being
+          stopped, including with a telepresence quit. The user could be blocked as a result.
+        docs: https://github.com/telepresenceio/telepresence/pull/3298
+      - type: bugfix
+        title: Change json output of telepresence list command
+        body: >-
+          Replace deprecated info in the JSON output of the telepresence list command.
+  - version: 2.14.4
+    date: "2023-08-21"
+    notes:
+      - type: bugfix
+        title: Nil pointer exception when upgrading the traffic-manager.
+        body: >-
+          Upgrading the traffic-manager using telepresence helm upgrade would sometimes
+          result in a helm error message executing "telepresence/templates/intercept-env-configmap.yaml"
+          at <.Values.intercept.environment.excluded>: nil pointer evaluating interface {}.excluded"
+        docs: https://github.com/telepresenceio/telepresence/issues/3313
+  - version: 2.14.2
+    date: "2023-07-26"
+    notes:
+      - type: bugfix
+        title: Telepresence now uses the OSS agent in its latest version by default.
+        body: >-
+          Previously, the traffic manager admin was forced to set it manually during the chart installation.
+        docs: https://github.com/telepresenceio/telepresence/issues/3271
+  - version: 2.14.1
+    date: "2023-07-07"
+    notes:
+      - type: feature
+        title: Envoy's http idle timeout is now configurable.
+        body: >-
+          A new agent.helm.httpIdleTimeout setting was added to the Helm chart that controls
+          the proprietary Traffic agent's http idle timeout. The default of one hour, which in some situations
+          would cause a lot of resource-consuming and lingering connections, was changed to 70 seconds.
+
+      - type: feature
+        title: Add more gauges to the Traffic manager's Prometheus client.
+        body: >-
+          Several gauges were added to the Prometheus client to make it easier to monitor
+          what the Traffic manager spends resources on.
+      - type: feature
+        title: Agent Pull Policy
+        body: >-
+          Add option to set traffic agent pull policy in helm chart.
+      - type: bugfix
+        title: Resource leak in the Traffic manager.
+        body: >-
+          Fixes a resource leak in the Traffic manager caused by lingering tunnels between the clients and
+          Traffic agents. The tunnels are now closed correctly when terminated from the side that created them.
+      - type: bugfix
+        title: Fixed problem setting traffic manager namespace using the kubeconfig extension.
+        body: >-
+          Fixes a regression introduced in version 2.10.5, making it impossible to set the traffic-manager namespace
+          using the telepresence.io kubeconfig extension.
+        docs: https://www.getambassador.io/docs/telepresence/latest/reference/config#manager
+  - version: 2.14.0
+    date: "2023-06-12"
+    notes:
+      - type: feature
+        title: DNS configuration now supports excludes and mappings.
+        body: >-
+          The DNS configuration now supports two new fields, excludes and mappings. The excludes field allows you to
+          exclude a given list of hostnames from resolution, while the mappings field can be used to resolve a hostname with
+          another.
+        docs: https://github.com/telepresenceio/telepresence/pull/3172
+
+      - type: feature
+        title: Added the ability to exclude environment variables
+        body: >-
+          Added a new config map that can take an array of environment variables that will
+          then be excluded from an intercept that retrieves the environment of a pod.
+
+      - type: bugfix
+        title: Fixed traffic-agent backward incompatibility issue causing lack of remote mounts
+        body: >-
+          A traffic-agent of version 2.13.3 (or 1.13.15) would not propagate the directories under
+          /var/run/secrets when used with a traffic manager older than 2.13.3.
+
+      - type: bugfix
+        title: Fixed race condition causing segfaults on rare occasions when a tunnel stream timed out.
+        body: >-
+          A context cancellation could sometimes be trapped in a stream reader, causing it to incorrectly return
+          an undefined message which in turn caused the parent reader to panic on a nil pointer reference.
+        docs: https://github.com/telepresenceio/telepresence/pull/2963
+
+      - type: change
+        title: Routing conflict reporting.
+        body: >-
+          Telepresence will now attempt to detect and report routing conflicts with other running VPN software on client machines.
+          There is a new configuration flag that can be tweaked to allow certain CIDRs to be overridden by Telepresence.
+
+      - type: change
+        title: test-vpn command deprecated
+        body: >-
+          Running telepresence test-vpn will now print a deprecation warning and exit. The command will be removed in a future release.
+          Instead, please configure telepresence for your VPN's routes.
+  - version: 2.13.3
+    date: "2023-05-25"
+    notes:
+      - type: feature
+        title: Add imagePullSecrets to hooks
+        body: >-
+          Add .Values.hooks.curl.imagePullSecrets and .Values.hooks.busybox.imagePullSecrets to Helm values.
+        docs: https://github.com/telepresenceio/telepresence/pull/3079
+
+      - type: change
+        title: Change reinvocation policy to IfNeeded for the mutating webhook
+        body: >-
+          The default setting of the reinvocationPolicy for the mutating webhook dealing with agent injections changed from Never to IfNeeded.
+
+      - type: bugfix
+        title: Fix mount failure of IAM roles for service accounts web identity token
+        body: >-
+          The eks.amazonaws.com/serviceaccount volume injected by EKS is now exported and remotely mounted during an intercept.
+        docs: https://github.com/telepresenceio/telepresence/issues/3166
+
+      - type: bugfix
+        title: Correct namespace selector for cluster versions with non-numeric characters
+        body: >-
+          The mutating webhook now correctly applies the namespace selector even if the cluster version contains non-numeric characters. For example, it can now handle versions such as Major:"1", Minor:"22+".
+        docs: https://github.com/telepresenceio/telepresence/pull/3184
+
+      - type: bugfix
+        title: Enable IPv6 on the telepresence docker network
+        body: >-
+          The "telepresence" Docker network will now propagate DNS AAAA queries to the Telepresence DNS resolver when it runs in a Docker container.
+        docs: https://github.com/telepresenceio/telepresence/issues/3179
+
+      - type: bugfix
+        title: Fix the crash when intercepting with --local-only and --docker-run
+        body: >-
+          Running telepresence intercept --local-only --docker-run no longer results in a panic.
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Fix incorrect error message with local-only mounts
+        body: >-
+          Running telepresence intercept --local-only --mount false no longer results in an incorrect error message saying "a local-only intercept cannot have mounts".
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Specify port in hook URLs
+        body: >-
+          The Helm chart now correctly includes a custom agentInjector.webhook.port in the hook URLs; previously it was not being set there.
+        docs: https://github.com/telepresenceio/telepresence/pull/3161
+
+      - type: bugfix
+        title: Fix wrong default value for disableGlobal and agentArrival
+        body: >-
+          The params .intercept.disableGlobal and .timeouts.agentArrival are now correctly honored.
+
+  - version: 2.13.2
+    date: "2023-05-12"
+    notes:
+      - type: bugfix
+        title: Authenticator Service Update
+        body: >-
+          Replaced / characters with a - when the authenticator service creates the kubeconfig in the Telepresence cache.
+        docs: https://github.com/telepresenceio/telepresence/pull/3167
+
+      - type: bugfix
+        title: Enhanced DNS Search Path Configuration for Windows (Auto, PowerShell, and Registry Options)
+        body: >-
+          Configurable strategy (auto, powershell, or registry) to set the global DNS search path on Windows. The default is auto, which means try powershell first and, if it fails, fall back to registry.
+        docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+      - type: feature
+        title: Configurable Traffic Manager Timeout in values.yaml
+        body: >-
+          The timeout for the traffic manager to wait for the traffic agent to arrive can now be configured in the values.yaml file using timeouts.agentArrival. The default timeout is still 30 seconds.
+        docs: https://github.com/telepresenceio/telepresence/pull/3148
+
+      - type: bugfix
+        title: Enhanced Local Cluster Discovery for macOS and Windows
+        body: >-
+          The automatic discovery of a local container-based cluster (minikube or kind), used when the Telepresence daemon runs in a container, now works on macOS and Windows, and with different profiles, ports, and cluster names.
+        docs: https://github.com/telepresenceio/telepresence/pull/3165
+
+      - type: bugfix
+        title: FTP Stability Improvements
+        body: >-
+          Multiple simultaneous intercepts can transfer large files bidirectionally and in parallel.
+        docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+      - type: bugfix
+        title: Intercepted Persistent Volume Pods No Longer Cause Timeouts
+        body: >-
+          Pods using persistent volumes no longer cause timeouts when intercepted.
+        docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+      - type: bugfix
+        title: Successful 'Telepresence Connect' Regardless of DNS Configuration
+        body: >-
+          Ensure that `telepresence connect` succeeds even when DNS isn't configured correctly.
+        docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+      - type: bugfix
+        title: Traffic-Manager's 'Close of Closed Channel' Panic Issue
+        body: >-
+          The traffic-manager would sometimes panic with a "close of closed channel" message and exit.
+        docs: https://github.com/telepresenceio/telepresence/pull/3160
+
+      - type: bugfix
+        title: Traffic-Manager's Type Cast Panic Issue
+        body: >-
+          The traffic-manager would sometimes panic and exit after some time due to a type cast panic.
+        docs: https://github.com/telepresenceio/telepresence/pull/3153
+
+      - type: bugfix
+        title: Login Friction
+        body: >-
+          Improve login behavior by clearing the saved intermediary API Keys when a user logs in, forcing Telepresence to generate new ones.
+
+  - version: 2.13.1
+    date: "2023-04-20"
+    notes:
+      - type: change
+        title: Update ambassador-telepresence-agent to version 1.13.13
+        body: >-
+          Fixed a malfunction of the Ambassador Telepresence Agent that was caused by an update which compressed the executable file.
+
+  - version: 2.13.0
+    date: "2023-04-18"
+    notes:
+      - type: feature
+        title: Better kind / minikube network integration with docker
+        body: >-
+          The Docker network used by a Kind or Minikube (using the "docker" driver) installation is automatically detected and connected to a Docker container running the Telepresence daemon.
+        docs: https://github.com/telepresenceio/telepresence/pull/3104
+
+      - type: feature
+        title: New mapped namespace output
+        body: >-
+          Mapped namespaces are included in the output of the telepresence status command.
+
+      - type: feature
+        title: Setting of the target IP of the intercept
+        docs: reference/intercepts/cli
+        body: >-
+          There's a new --address flag to the intercept command allowing users to set the target IP of the intercept.
+
+      - type: feature
+        title: Multi-tenant support
+        body: >-
+          The client no longer needs cluster-wide permissions when connected to a namespace-scoped Traffic Manager.
+
+      - type: bugfix
+        title: Cluster domain resolution bugfix
+        body: >-
+          The Traffic Manager now uses a more reliable way to determine the cluster domain.
+        docs: https://github.com/telepresenceio/telepresence/issues/3114
+
+      - type: bugfix
+        title: Windows DNS
+        body: >-
+          DNS on Windows is more reliable and performant.
+        docs: https://github.com/telepresenceio/telepresence/issues/2939
+
+      - type: bugfix
+        title: Agent injection with a huge amount of deployments
+        body: >-
+          The agent is now correctly injected even when a high number of deployments start at the same time.
+        docs: https://github.com/telepresenceio/telepresence/issues/3025
+
+      - type: bugfix
+        title: Self-contained kubeconfig with Docker
+        body: >-
+          The kubeconfig is made self-contained before the Telepresence daemon is run in a Docker container.
+        docs: https://github.com/telepresenceio/telepresence/issues/3099
+
+      - type: bugfix
+        title: Version command error
+        body: >-
+          The version command no longer throws an error when there is no kubeconfig file defined.
+        docs: https://github.com/telepresenceio/telepresence/issues/3095
+
+  - version: 2.12.2
+    date: "2023-04-04"
+    notes:
+      - type: security
+        title: Update Golang build version to 1.20.3
+        body: >-
+          Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538.
+  - version: 2.12.1
+    date: "2023-03-22"
+    notes:
+      - type: feature
+        title: Additions to gather-logs
+        body: >-
+          Telepresence now includes the kubeauth logs when running
+          the gather-logs command.
+      - type: bugfix
+        title: Environment Variables are now propagated to kubeauth
+        body: >-
+          Telepresence now propagates environment variables properly
+          to the kubeauth-foreground to be used with cluster authentication.
+  - version: 2.12.0
+    date: "2023-03-20"
+    notes:
+      - type: feature
+        title: Check for service connectivity independently from pod connectivity
+        body: >-
+          Telepresence now enables you to check for a service's and a pod's connectivity independently, so that it can proxy one without proxying the other.
+        docs: https://github.com/telepresenceio/telepresence/issues/2911
+      - type: bugfix
+        title: Fix cluster authentication when running the telepresence daemon in a docker container.
+        body: >-
+          Authentication to EKS and GKE clusters has been fixed (k8s >= v1.26).
+        docs: https://github.com/telepresenceio/telepresence/pull/3055
+      - type: bugfix
+        body: >-
+          Telepresence will no longer panic when a CNAME does not contain .svc.
+        title: Fix panic when CNAME of kubernetes.default doesn't contain .svc
+        docs: https://github.com/telepresenceio/telepresence/issues/3015
+  - version: 2.11.1
+    date: "2023-02-27"
+    notes:
+      - type: bugfix
+        title: Multiple architectures
+        docs: https://github.com/telepresenceio/telepresence/issues/3043
+        body: >-
+          The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now
+          works for both amd64 and arm64.
+      - type: bugfix
+        title: Ambassador agent Helm chart duplicates
+        docs: https://github.com/telepresenceio/telepresence/issues/3046
+        body: >-
+          Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD.
+  - version: 2.11.0
+    date: "2023-02-22"
+    notes:
+      - type: feature
+        title: Support for arm64 (Apple Silicon)
+        body: >-
+          The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as
+          multi-architecture images and can run natively on both linux/amd64 and linux/arm64.
+      - type: bugfix
+        title: Connectivity check can break routing in VPN setups
+        docs: https://github.com/telepresenceio/telepresence/issues/3006
+        body: >-
+          The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently,
+          it didn't proxy the cluster, because it incorrectly assumed that a successful connect meant cluster connectivity.
+      - type: bugfix
+        title: VPN routes not detected by telepresence test-vpn on macOS
+        docs: https://github.com/telepresenceio/telepresence/pull/3038
+        body: >-
+          The telepresence test-vpn command did not include routes of type link when checking for subnet
+          conflicts.
+  - version: 2.10.5
+    date: "2023-02-06"
+    notes:
+      - type: bugfix
+        title: Daemon reconnection fix
+        body: >-
+          Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost.
+  - version: 2.10.4
+    date: "2023-01-20"
+    notes:
+      - type: bugfix
+        title: Backward compatibility restored
+        body: >-
+          Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older.
+  - version: 2.10.2
+    date: "2023-01-16"
+    notes:
+      - type: bugfix
+        title: Version consistency in helm commands
+        body: >-
+          Ensure that CLI and user-daemon binaries are the same version when running telepresence helm install
+          or telepresence helm upgrade.
+        docs: https://github.com/telepresenceio/telepresence/pull/2975
+      - type: bugfix
+        title: Use saved intercepts
+        body: >-
+          Fixed an issue that prevented the --use-saved-intercept flag from working.
+  - version: 2.10.1
+    date: "2023-01-11"
+    notes:
+      - type: bugfix
+        title: Release Process
+        body: >-
+          Fixed a regex in our release process that prevented 2.10.0 promotion.
+  - version: 2.10.0
+    date: "2023-01-11"
+    notes:
+      - type: feature
+        title: Added `--set` support to the `install` and `upgrade` subcommands of `telepresence helm`
+        body: >-
+          The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags.
+      - type: feature
+        title: Added Image Pull Secrets to Helm Chart
+        body: >-
+          Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`.
+      - type: change
+        title: Rename Configmap
+        body: >-
+          The configmap `traffic-manager-clients` has been renamed to `traffic-manager`.
+      - type: change
+        title: Webhook Namespace Field
+        body: >-
+          If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`.
+        docs: https://github.com/telepresenceio/telepresence/issues/2913
+      - type: change
+        title: Rename Webhook
+        body: >-
+          The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster.
+      - type: change
+        title: OSS Binaries
+        body: >-
+          The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client.
+        docs: https://github.com/telepresenceio/telepresence/pull/2943
+      - type: bugfix
+        title: Fix Panic Using `--docker-run`
+        body: >-
+          Telepresence no longer panics when `--docker-run` is combined with `--name ` instead of `--name=`.
+        docs: https://github.com/telepresenceio/telepresence/issues/2953
+      - type: bugfix
+        title: Stop assuming cluster domain
+        body: >-
+          The Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc".
+        docs: https://github.com/telepresenceio/telepresence/pull/2959
+      - type: bugfix
+        title: Uninstall hook timeout
+        body: >-
+          A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager.
+        docs: https://github.com/telepresenceio/telepresence/pull/2937
+      - type: bugfix
+        title: Uninstall hook check
+        body: >-
+          The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
+        docs: https://github.com/telepresenceio/telepresence/issues/2954
+  - version: 2.9.5
+    date: "2022-12-08"
+    notes:
+      - type: security
+        title: Update to golang v1.19.4
+        body: >-
+          Apply security updates by updating to golang v1.19.4.
+        docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU
+      - type: bugfix
+        title: GCE authentication
+        body: >-
+          Fixed a regression introduced in 2.9.3 that prevented the use of gce authentication unless a config element was present in the gce configuration in the kubeconfig.
+  - version: 2.9.4
+    date: "2022-12-02"
+    notes:
+      - type: feature
+        title: Subnet detection strategy
+        body: >-
+          The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs.
+      - type: bugfix
+        title: Fix `--set` flag for `telepresence helm install`
+        body: >-
+          The `telepresence helm` command's `--set x=y` flag didn't correctly set values of types other than `string`. The code now uses standard Helm semantics for this flag.
+      - type: bugfix
+        title: Fix `agent.image` setting propagation
+        body: >-
+          Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file.
+      - type: bugfix
+        title: Delay file sharing until needed
+        body: >-
+          Initialization of FTP-type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected.
+      - type: bugfix
+        title: Cleanup on `telepresence quit`
+        body: >-
+          The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called.
+      - type: bugfix
+        title: Watch `config.yml` without panic
+        body: >-
+          The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active.
+      - type: bugfix
+        title: Thread safety
+        body: >-
+          Fix a race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession.
+  - version: 2.9.3
+    date: "2022-11-23"
+    notes:
+      - type: feature
+        title: Helm options for `livenessProbe` and `readinessProbe`
+        body: >-
+          The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond.
+      - type: change
+        title: Improved network communication
+        body: >-
+          The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon.
+      - type: bugfix
+        title: Root daemon debug logging
+        body: >-
+          Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon.
+      - type: bugfix
+        title: Multivalue flag value propagation
+        body: >-
+          Multi-valued kubernetes flags such as `--as-group` are now propagated correctly.
+      - type: bugfix
+        title: Root daemon stability
+        body: >-
+          The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession.
+      - type: bugfix
+        title: Base DNS resolver
+        body: >-
+          Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied.
+  - version: 2.9.2
+    date: "2022-11-16"
+    notes:
+      - type: bugfix
+        title: Fix panic
+        body: >-
+          Fix panic when connecting to an older traffic-manager.
+      - type: bugfix
+        title: Fix header flag
+        body: >-
+          Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
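+# Illustrative use of the http-header flag mentioned in the 2.9.2 note above;
+# the workload name, port, and header value are hypothetical:
+#
+#   telepresence intercept my-service --port 8080 --http-header x-dev-user=alice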
+  - version: 2.9.1
+    date: "2022-11-16"
+    notes:
+      - type: bugfix
+        title: Connect failures due to missing auth provider.
+        body: >-
+          The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed.
+  - version: 2.9.0
+    date: "2022-11-15"
+    notes:
+      - type: feature
+        title: New command to view client configuration.
+        body: >-
+          A new telepresence config view command was added to make it easy to view the current
+          client configuration.
+        docs: new-in-2.9#view-the-client-configuration
+      - type: feature
+        title: Configure Clients using the Helm chart.
+        body: >-
+          The traffic-manager can now configure all clients that connect through the client: map in
+          the values.yaml file.
+        docs: reference/cluster-config#client-configuration
+      - type: feature
+        title: The Traffic manager version is more visible.
+        body: >-
+          The command telepresence version will now include the version of the traffic manager when
+          the client is connected to a cluster.
+      - type: feature
+        title: Command output in YAML format.
+        body: >-
+          The global --output flag now accepts both yaml and json.
+        docs: new-in-2.9#yaml-output
+      - type: change
+        title: Deprecated status command flag
+        body: >-
+          The telepresence status --json flag is deprecated. Use telepresence status --output=json instead.
+      - type: bugfix
+        title: Unqualified service name resolution in docker.
+        body: >-
+          Unqualified service names now resolve OK from the docker container when using telepresence intercept --docker-run.
+        docs: https://github.com/telepresenceio/telepresence/issues/2870
+      - type: bugfix
+        title: Output no longer mixes plaintext and json.
+        body: >-
+          Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon"
+          or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted
+          output when using --output=json.
+        docs: https://github.com/telepresenceio/telepresence/issues/2854
+      - type: bugfix
+        title: No more panic when invalid port names are detected.
+        body: >-
+          A `telepresence intercept` of a service with an invalid port name no longer causes a panic.
+        docs: https://github.com/telepresenceio/telepresence/issues/2880
+      - type: bugfix
+        title: Proper errors for bad output formats.
+        body: >-
+          An attempt to use an invalid value for the global --output flag now renders a proper error message.
+      - type: bugfix
+        title: Remove lingering DNS config on macOS.
+        body: >-
+          Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are
+          now removed when a new root daemon starts.
+  - version: 2.8.5
+    date: "2022-11-02"
+    notes:
+      - type: security
+        title: CVE-2022-41716
+        body: >-
+          Updated Golang to 1.19.3 to address CVE-2022-41716.
+  - version: 2.8.4
+    date: "2022-11-02"
+    notes:
+      - type: bugfix
+        title: Release Process
+        body: >-
+          This release resulted in changes to our release process.
+  - version: 2.8.3
+    date: "2022-10-27"
+    notes:
+      - type: feature
+        title: Ability to disable global intercepts.
+        body: >-
+          Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal.
+        docs: https://github.com/telepresenceio/telepresence/issues/2140
+      - type: feature
+        title: Configurable mutating webhook port
+        body: >-
+          The port used for the mutating webhook can be configured using the Helm chart setting
+          agentInjector.webhook.port.
+        docs: install/helm
+      - type: change
+        title: Mutating webhook port defaults to 443
+        body: >-
+          The default port for the mutating webhook is now 443. It used to be 8443.
+      - type: change
+        title: Agent image configuration mandatory in air-gapped environments.
+        body: >-
+          The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is
+          unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart.
+      - type: bugfix
+        title: Can now connect to non-helm installs
+        body: >-
+          telepresence connect now works as long as the traffic manager is installed, even if
+          it wasn't installed via helm install.
+        docs: https://github.com/telepresenceio/telepresence/issues/2824
+      - type: bugfix
+        title: test-vpn crash fixed
+        body: >-
+          telepresence test-vpn no longer crashes when the daemons don't start properly.
+  - version: 2.8.2
+    date: "2022-10-15"
+    notes:
+      - type: bugfix
+        title: Reinstate 2.8.0
+        body: >-
+          There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated.
+  - version: 2.8.1
+    date: "2022-10-14"
+    notes:
+      - type: bugfix
+        title: Rollback 2.8.0
+        body: >-
+          Rollback 2.8.0 while we investigate an issue with Ambassador Cloud.
+  - version: 2.8.0
+    date: "2022-10-14"
+    notes:
+      - type: feature
+        title: Improved DNS resolver
+        body: >-
+          The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME,
+          MX, NS, PTR, SRV, and TXT.
+        docs: reference/dns
+      - type: feature
+        title: New `client` structure in Helm chart
+        body: >-
+          A new client struct was added to the Helm chart. It contains a connectionTTL that controls
+          how long the traffic manager will retain a client connection without seeing any sign of life from the client.
+        docs: reference/cluster-config#Client-Configuration
+      - type: feature
+        title: Include and exclude suffixes configurable using the Helm chart.
+        body: >-
+          A dns element was added to the client struct in the Helm chart. It contains an includeSuffixes and
+          an excludeSuffixes value that control what types of names the DNS resolver in the client will delegate to
+          the cluster.
+        docs: reference/cluster-config#DNS
+      - type: feature
+        title: Configurable traffic-manager API port
+        body: >-
+          The API port used by the traffic-manager is now configurable using the Helm chart value apiPort.
+          The default port is 8081.
+        docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence
+      - type: feature
+        title: Envoy server and admin port configuration.
+        body: >-
+          A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and
+          admin port of the Envoy proxy running in the enhanced traffic-agent can be configured.
+        docs: reference/cluster-config#Envoy-Configuration
+      - type: change
+        title: Helm chart `dnsConfig` moved to `client.routing`.
+        body: >-
+          The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets
+          and neverProxySubnets can now be found under routing in the client struct.
+        docs: reference/cluster-config#Routing
+      - type: change
+        title: Helm chart `agentInjector.agentImage` moved to `agent.image`.
+        body: >-
+          The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but
+          retained for backward compatibility.
+        docs: reference/cluster-config#Image-Configuration
+      - type: change
+        title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`.
+        body: >-
+          The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old
+          value is deprecated but retained for backward compatibility.
+        docs: reference/cluster-config#Application-Protocol-Selection
+      - type: change
+        title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed.
+        body: >-
+          The Helm chart values dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because
+          they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to
+          dedicate an IP for its local resolver.
+      - type: change
+        title: Quit daemons with `telepresence quit -s`
+        body: >-
+          The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option `-s` which will
+          quit both the root daemon and the user daemon.
+      - type: bugfix
+        title: Environment variable interpolation in pods now works.
+        body: >-
+          Environment variable interpolation now works for all definitions that are copied from pod containers
+          into the injected traffic-agent container.
+      - type: bugfix
+        title: Early detection of namespace conflict
+        body: >-
+          An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation
+          is detected early and prohibited instead of resulting in failing DNS lookups later on.
+      - type: bugfix
+        title: Annoying log message removed
+        body: >-
+          Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason
+          is normal context cancellation.
+      - type: bugfix
+        title: Single name DNS resolution in Docker on Linux host
+        body: >-
+          Single-label names now resolve correctly when using Telepresence in Docker on a Linux host.
+      - type: bugfix
+        title: Misnomer `appPortStategy` in Helm chart renamed to `appProtocolStrategy`.
+        body: >-
+          The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStategy).
+  - version: 2.7.6
+    date: "2022-09-16"
+    notes:
+      - type: feature
+        title: Helm chart resource entries for injected agents
+        body: >-
+          The resources for the traffic-agent container and the optional init container can be
+          specified in the Helm chart using the resources and initResource fields
+          of the agentInjector.agentImage.
+      - type: feature
+        title: Cluster event propagation when injection fails
+        body: >-
+          When the traffic-manager fails to inject a traffic-agent, the cause of the failure is
+          detected by reading the cluster events and propagated to the user.
+      - type: feature
+        title: FTP-client instead of sshfs for remote mounts
+        body: >-
+          Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
+          an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x
+          and enabled by setting intercept.useFtp to true in the config.yml.
+      - type: change
+        title: Upgrade of winfsp
+        body: >-
+          Telepresence on Windows upgraded winfsp from version 1.10 to 1.11.
+      - type: bugfix
+        title: Removal of invalid warning messages
+        body: >-
+          Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
+          and /proc/self/auxv.
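+# Sketch for the experimental FTP mounts described in the 2.7.6 note above, as
+# the setting might appear in the client's config.yml:
+#
+#   intercept:
+#     useFtp: true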
+  - version: 2.7.5
+    date: "2022-09-14"
+    notes:
+      - type: change
+        title: Rollback of release 2.7.4
+        body: >-
+          This release is a rollback of the changes in 2.7.4, so essentially the same as 2.7.3.
+  - version: 2.7.4
+    date: "2022-09-14"
+    notes:
+      - type: change
+        body: >-
+          This release was broken on some platforms. Use 2.7.6 instead.
+  - version: 2.7.3
+    date: "2022-09-07"
+    notes:
+      - type: bugfix
+        title: PTY for CLI commands
+        body: >-
+          CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+          docker run -it to allocate a TTY and will also give other commands like bash and read the
+          same behavior as when executed directly in a terminal.
+        docs: https://github.com/telepresenceio/telepresence/issues/2724
+      - type: bugfix
+        title: Traffic Manager useless warning silenced
+        body: >-
+          The traffic-manager will no longer log numerous warnings saying Issuing a
+          systema request without ApiKey or InstallID may result in an error.
+      - type: bugfix
+        title: Traffic Manager useless error silenced
+        body: >-
+          The traffic-manager will no longer log an error saying Unable to derive subnets
+          from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+          subnets from the pod IPs.
+  - version: 2.7.2
+    date: "2022-08-25"
+    notes:
+      - type: feature
+        title: Autocompletion scripts
+        body: >-
+          Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
+      - type: feature
+        title: Connectivity check timeout
+        body: >-
+          The timeout for the initial connectivity check that Telepresence performs
+          in order to determine if the cluster's subnets are proxied or not can now be configured
+          in the config.yml file using timeouts.connectivityCheck. The default timeout was
+          changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+        docs: reference/config#timeouts
+      - type: change
+        title: gather-traces feedback
+        body: >-
+          The command telepresence gather-traces now prints out a message on success.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: upload-traces feedback
+        body: >-
+          The command telepresence upload-traces now prints out a message on success.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: gather-traces tracing
+        body: >-
+          The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: CLI log level
+        body: >-
+          The cli.log is now logged at the same level as the connector.log.
+        docs: reference/config#log-levels
+      - type: bugfix
+        title: Telepresence --help fixed
+        body: >-
+          telepresence --help now works once more even if there's no user daemon running.
+        docs: https://github.com/telepresenceio/telepresence/issues/2735
+      - type: bugfix
+        title: Stream cancellation when no process intercepts
+        body: >-
+          Streams created between the traffic-agent and the workstation are now properly closed
+          when no interceptor process has been started on the workstation. This fixes a potential problem where
+          a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+          and an unresponsive intercept.
+      - type: bugfix
+        title: List command excludes the traffic-manager
+        body: >-
+          The telepresence list command no longer includes the traffic-manager deployment.
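+# Sketch for the 2.7.2 connectivity-check note above; the 5s value is only an
+# example of raising the timeout again in config.yml:
+#
+#   timeouts:
+#     connectivityCheck: 5s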
+  - version: 2.7.1
+    date: "2022-08-10"
+    notes:
+      - type: change
+        title: Reinstate telepresence uninstall
+        body: >-
+          Reinstate telepresence uninstall with --everything deprecated.
+      - type: change
+        title: Reduce telepresence helm uninstall
+        body: >-
+          telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+      - type: bugfix
+        title: Auto-connect for telepresence intercept
+        body: >-
+          telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+  - version: 2.7.0
+    date: "2022-08-07"
+    notes:
+      - type: feature
+        title: Distributed Tracing
+        body: >-
+          The Telepresence components now collect OpenTelemetry traces.
+          Up to 10MB of trace data are available at any given time for collection from
+          components. telepresence gather-traces is a new command that will collect
+          all that data and place it into a gzip file, and telepresence upload-traces is
+          a new command that will push the gzipped data into an OTLP collector.
+        docs: troubleshooting#distributed-tracing
+      - type: feature
+        title: Helm install
+        body: >-
+          A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+        docs: install/manager
+      - type: feature
+        title: Ignore Volume Mounts
+        body: >-
+          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
+      - type: feature
+        title: telepresence pod-daemon
+        body: >-
+          The Docker image now contains a new program in addition to
+          the existing traffic-manager and traffic-agent: the pod-daemon. The
+          pod-daemon is a trimmed-down version of the user-daemon that is
+          designed to run as a sidecar in a Pod, enabling CI systems to create
+          preview deploys.
+      - type: feature
+        title: Prometheus support for traffic manager
+        body: >-
+          Added Prometheus support to the traffic manager.
+      - type: change
+        title: No install on telepresence connect
+        body: >-
+          The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+        docs: install/manager
+      - type: change
+        title: Helm Uninstall
+        body: >-
+          The command telepresence uninstall has been moved to telepresence helm uninstall.
+        docs: install/manager
+      - type: bugfix
+        title: readOnlyRootFilesystem mounts work
+        body: >-
+          Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFilesystem: true`.
+        docs: https://github.com/telepresenceio/telepresence/pull/2666
+  - version: 2.6.8
+    date: "2022-06-23"
+    notes:
+      - type: feature
+        title: Specify Your DNS
+        body: >-
+          The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
+      - type: feature
+        title: Specify a Fallback DNS
+        body: >-
+          Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
+      - type: feature
+        title: Intercept UDP Ports
+        body: >-
+          It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost.
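+# Hypothetical invocation for the UDP note above; the workload name and ports
+# are made up, and whether a protocol suffix is needed on --to-pod is not
+# specified here:
+#
+#   telepresence intercept my-service --port 8080 --to-pod 5353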
+      - type: change
+        title: Additional Helm Values
+        body: >-
+          The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+      - type: bugfix
+        title: Agent Injection Bugfix
+        body: >-
+          Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+  - version: 2.6.7
+    date: "2022-06-22"
+    notes:
+      - type: bugfix
+        title: Persistent Sessions
+        body: >-
+          The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
+      - type: bugfix
+        title: DNS Requests
+        body: >-
+          Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+      - type: bugfix
+        title: Graceful Shutdown
+        body: >-
+          The traffic-agent will properly shut down if one of its goroutines errors.
+  - version: 2.6.6
+    date: "2022-06-09"
+    notes:
+      - type: bugfix
+        title: Env Var `TELEPRESENCE_API_PORT`
+        body: >-
+          The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
+      - type: bugfix
+        title: Double Printing `--output json`
+        body: >-
+          The --output json global flag no longer outputs multiple objects.
+  - version: 2.6.5
+    date: "2022-06-03"
+    notes:
+      - type: feature
+        title: Helm Option -- `reinvocationPolicy`
+        body: >-
+          The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
+        docs: install/helm
+      - type: feature
+        title: Helm Option -- Proxy Certificate
+        body: >-
+          The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
+        docs: install/helm
+      - type: feature
+        title: Helm Option -- Agent Injection
+        body: >-
+          A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
+        docs: install/helm
+      - type: change
+        title: Windows Tunnel Version Upgrade
+        body: >-
+          Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+      - type: change
+        title: Helm Version Upgrade
+        body: >-
+          Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+      - type: change
+        title: Kubernetes API Version Upgrade
+        body: >-
+          Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+      - type: feature
+        title: Flag `--watch` Added to `list` Command
+        body: >-
+          Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
+      - type: change
+        title: Deprecated `images.webhookAgentImage`
+        body: >-
+          The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+      - type: bugfix
+        title: Default `reinvocationPolicy` Set to Never
+        body: >-
+          The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
+      - type: bugfix
+        title: UDP
+        body: >-
+          UDP-based communication with services in the cluster now works as expected.
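+# For the images.webhookAgentImage deprecation above, a config.yml sketch using
+# the replacement setting; the registry/tag split shown here is an assumption,
+# and the image reference is a placeholder:
+#
+#   images:
+#     registry: docker.io/datawire
+#     agentImage: tel2:2.6.5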
+      - type: bugfix
+        title: Telepresence `--help`
+        body: >-
+          The command help will only show Kubernetes flags on the commands that support them.
+      - type: change
+        title: Error Count
+        body: >-
+          Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+  - version: 2.6.4
+    date: "2022-05-23"
+    notes:
+      - type: bugfix
+        title: Upgrade RBAC Permissions
+        body: >-
+          The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+  - version: 2.6.3
+    date: "2022-05-20"
+    notes:
+      - type: bugfix
+        title: Relative Mount Paths
+        body: >-
+          The --mount intercept flag now handles relative mount points correctly on non-windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+      - type: bugfix
+        title: Traffic Agent Config
+        body: >-
+          The traffic-agent's configuration updates automatically when services are added, updated or deleted.
+      - type: bugfix
+        title: Container Injection for Numeric Ports
+        body: >-
+          Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+      - type: bugfix
+        title: Matching Services
+        body: >-
+          Workloads that have several matching services pointing to the same target port are now handled correctly.
+      - type: bugfix
+        title: Unexpected Panic
+        body: >-
+          A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+      - type: bugfix
+        title: Mount Volume Cleanup
+        body: >-
+          A container start would sometimes fail because an old directory remained in a mounted temp volume.
+  - version: 2.6.2
+    date: "2022-05-17"
+    notes:
+      - type: bugfix
+        title: Argo Injection
+        body: >-
+          Workloads controlled by workloads like Argo Rollout are now injected correctly.
+      - type: bugfix
+        title: Agent Port Mapping
+        body: >-
+          Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+      - type: bugfix
+        title: GRPC Max Message Size
+        body: >-
+          The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+  - version: 2.6.1
+    date: "2022-05-16"
+    notes:
+      - type: bugfix
+        title: KUBECONFIG environment variable
+        body: >-
+          Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+  - version: 2.6.0
+    date: "2022-05-13"
+    notes:
+      - type: feature
+        title: Intercept multiple containers in a pod, and multiple ports per container
+        body: >-
+          Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+        docs: new-in-2.6#intercept-multiple-containers-and-ports
+      - type: feature
+        title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+        body: >-
+          The client will no longer modify deployments, replicasets, or statefulsets in order to
+          inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+          the client now needs fewer permissions in the cluster.
+        docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+      - type: change
+        title: Automatic upgrade of Traffic Agents
+        body: >-
+          When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+          The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+        docs: new-in-2.6#no-more-workload-modifications
+      - type: change
+        title: No default image in the Helm chart
+        body: >-
+          The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+          traffic-manager will ask Ambassador Cloud for the preferred image.
+        docs: new-in-2.6#smarter-agent
+      - type: change
+        title: Upgrade to Helm version 3.8.1
+        body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+      - type: bugfix
+        title: Remote mounts will now function correctly with custom securityContext
+        body: >-
+          The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+      - type: bugfix
+        title: Improved presentation of flags in CLI help
+        body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+      - type: bugfix
+        title: Better termination of process parented by intercept
+        body: >-
+          Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+          When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+          Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+  - version: 2.5.8
+    date: "2022-04-27"
+    notes:
+      - type: bugfix
+        title: Folder creation on `telepresence login`
+        body: >-
+          Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+  - version: 2.5.7
+    date: "2022-04-25"
+    notes:
+      - type: change
+        title: RBAC requirements
+        body: >-
+          A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+      - type: bugfix
+        title: Windows DNS
+        body: >-
+          The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+      - type: bugfix
+        title: Session TTL and Reconnect
+        body: >-
+          A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+  - version: 2.5.6
+    date: "2022-04-18"
+    notes:
+      - type: change
+        title: Fewer Watchers
+        body: >-
+          The Telepresence agents watcher will now only watch namespaces that the user has accessed since the last connect.
+      - type: bugfix
+        title: More Efficient `gather-logs`
+        body: >-
+          The gather-logs command will no longer send any logs through gRPC.
+  - version: 2.5.5
+    date: "2022-04-08"
+    notes:
+      - type: change
+        title: Traffic Manager Permissions
+        body: >-
+          The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions.
+      - type: bugfix
+        title: Linux DNS Cache
+        body: >-
+          The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+      - type: bugfix
+        title: Automatic Connect Sync
+        body: >-
+          The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+      - type: bugfix
+        title: Disconnect Reconnect Stability
+        body: >-
+          The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+      - type: bugfix
+        title: Limit Watched Namespaces
+        body: >-
+          The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+      - type: bugfix
+        title: Limit Namespaces used in `gather-logs`
+        body: >-
+          The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+  - version: 2.5.4
+    date: "2022-03-29"
+    notes:
+      - type: bugfix
+        title: Linux DNS Concurrency
+        body: >-
+          The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+      - type: bugfix
+        title: Non-Functional Flag
+        body: >-
+          The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+      - type: bugfix
+        title: Automatically Remove Failed Intercepts
+        body: >-
+          Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+      - type: bugfix
+        title: Agent UID
+        body: >-
+          The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+      - type: bugfix
+        title: Gather-Logs Output Filepath
+        body: >-
+          Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+      - type: change
+        title: Remove Unnecessary Error Advice
+        body: >-
+          The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+      - type: bugfix
+        title: Garbage Collection
+        body: >-
+          Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+      - type: bugfix
+        title: Limit Gathered Logs
+        body: >-
+          The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+      - type: change
+        title: In-Cluster Checks
+        body: >-
+          The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+      - type: change
+        title: Expanded Status Command
+        body: >-
+          The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+      - type: change
+        title: List Command Shows All Intercepts
+        body: >-
+          The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+  - version: 2.5.3
+    date: "2022-02-25"
+    notes:
+      - type: bugfix
+        title: TCP Connectivity
+        body: >-
+          Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+      - type: feature
+        title: Linux Binaries
+        body: >-
+          Client-side binaries for the arm64 architecture are now available for Linux.
+  - version: 2.5.2
+    date: "2022-02-23"
+    notes:
+      - type: bugfix
+        title: DNS server bugfix
+        body: >-
+          Fixed a bug where Telepresence would use the last server in resolv.conf.
+  - version: 2.5.1
+    date: "2022-02-19"
+    notes:
+      - type: bugfix
+        title: Fix GKE auth issue
+        body: >-
+          Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+  - version: 2.5.0
+    date: "2022-02-18"
+    notes:
+      - type: feature
+        title: Intercept metadata
+        body: >-
+          The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence REST
+          API endpoint /intercept-info.
+        docs: reference/restapi#intercept-info
+      - type: change
+        title: Client RBAC watch
+        body: >-
+          The verb "watch" was added to the set of required verbs when
+          accessing services and workloads for the client RBAC
+          ClusterRole.
+        docs: reference/rbac
+      - type: change
+        title: Dropped backward compatibility with versions <=2.4.4
+        body: >-
+          Telepresence is no longer backward compatible with versions 2.4.4 or older, because the deprecated multiplexing tunnel
+          functionality was removed.
+      - type: change
+        title: No global networking flags
+        body: >-
+          The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+          command. The subcommands that support networking flags are connect, current-cluster-id,
+          and genyaml.
+      - type: bugfix
+        title: Output of status command
+        body: >-
+          The also-proxy and never-proxy subnets are now displayed correctly when using the
+          telepresence status command.
+      - type: bugfix
+        title: SETENV sudo privilege no longer needed
+        body: >-
+          Telepresence no longer requires SETENV privileges when starting the root daemon.
+      - type: bugfix
+        title: Network device names containing dash
+        body: >-
+          Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+      - type: bugfix
+        title: Linux uses cluster.local as domain instead of search
+        body: >-
+          The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+          systemd-resolved. Instead, it is added as a domain so that names ending with it are routed
+          to the DNS server.
+  - version: 2.4.11
+    date: "2022-02-10"
+    notes:
+      - type: change
+        title: Add additional logging to troubleshoot intermittent issues with intercepts
+        body: >-
+          We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+          with enhanced logging to help debug and fix the issue.
+  - version: 2.4.10
+    date: "2022-01-13"
+    notes:
+      - type: feature
+        title: New --http-plaintext option
+        body: >-
+          The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+          communicating with the workstation process.
+        docs: reference/intercepts/#tls
+      - type: feature
+        title: Configure the default intercept port
+        body: >-
+          The port used by default in the telepresence intercept command (8080) can now be changed by setting
+          the intercept.defaultPort in the config.yml file.
+        docs: reference/config/#intercept
+      - type: change
+        title: Telepresence CI now uses GitHub Actions
+        body: >-
+          Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+          now easier for contributors to run tests on PRs since maintainers can add an
+          "ok to test" label to PRs (including from forks) to run integration tests.
+        docs: https://github.com/telepresenceio/telepresence/actions
+        image: telepresence-2.4.10-actions.png
+      - type: bugfix
+        title: Check conditions before asking questions
+        body: >-
+          Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+          made that the intercept is possible.
+        docs: reference/intercepts/
+      - type: bugfix
+        title: Fix invalid log statement
+        body: >-
+          Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+      - type: bugfix
+        title: Log errors from sshfs/sftp
+        body: >-
+          Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+          is now properly logged as errors.
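+# Sketch for the 2.4.10 default-port note above, as the setting might appear in
+# config.yml (3000 is just an example port):
+#
+#   intercept:
+#     defaultPort: 3000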
+      - type: bugfix
+        title: Don't use Windows path separators in workload pod template
+        body: >-
+          The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+          traffic-agent container spec when running on Windows.
+  - version: 2.4.9
+    date: "2021-12-09"
+    notes:
+      - type: bugfix
+        title: Helm upgrade nil pointer error
+        body: >-
+          A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+          telepresenceAPI value.
+        docs: install/helm#upgrading-the-traffic-manager
+  - version: 2.4.8
+    date: "2021-12-03"
+    notes:
+      - type: feature
+        title: VPN diagnostics tool
+        body: >-
+          There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+          See the VPN docs for more information on how to use it.
+        docs: reference/vpn
+        image: telepresence-2.4.8-vpn.png
+
+      - type: feature
+        title: RESTful API service
+        body: >-
+          A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+          help determine whether messages from a message queue should be consumed or not, based on the intercept
+          headers that are added to the messages.
+        docs: reference/restapi
+        image: telepresence-2.4.8-health-check.png
+
+      - type: change
+        title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+        body: >-
+          You could previously configure this value, but there was no reason to change it, so the value
+          was removed.
+
+      - type: bugfix
+        title: Tunneled network connections behave more like ordinary TCP connections.
+        body: >-
+          When using Telepresence with an external cloud provider for extensions, those tunneled
+          connections now behave more like TCP connections, especially when it comes to timeouts.
+          We've also added increased testing around these types of connections.
+  - version: 2.4.7
+    date: "2021-11-24"
+    notes:
+      - type: feature
+        title: Injector service-name annotation
+        body: >-
+          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+          This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+        docs: reference/cluster-config#service-name-annotation
+      - type: feature
+        title: Skip the Ingress Dialogue
+        body: >-
+          You can now skip the ingress dialogue by setting the ingress parameters in the corresponding flags.
+        docs: reference/intercepts#skipping-the-ingress-dialogue
+      - type: feature
+        title: Never proxy subnets
+        body: >-
+          The kubeconfig extensions now support a never-proxy argument,
+          analogous to also-proxy, that defines a set of subnets that
+          will never be proxied via telepresence.
+        docs: reference/config#neverproxy
+      - type: change
+        title: Daemon versions check
+        body: >-
+          Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+      - type: change
+        title: No explicit DNS flushes
+        body: >-
+          Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+        docs: reference/routing#dns-caching
+      - type: bugfix
+        title: Legacy flags now work with global flags
+        body: >-
+          Legacy flags such as --swap-deployment can now be used together with global flags.
+      - type: bugfix
+        title: Outbound connection closing
+        body: >-
+          Outbound connections are now properly closed when the peer closes.
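+# Hedged kubeconfig sketch for the never-proxy note above (2.4.7); the exact
+# extension layout is an assumption modeled on the also-proxy convention, and
+# the cluster name, server, and subnet are placeholders:
+#
+#   clusters:
+#     - name: example-cluster
+#       cluster:
+#         server: https://example.cluster:6443
+#         extensions:
+#           - name: telepresence.io
+#             extension:
+#               never-proxy:
+#                 - 10.0.0.0/16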
+      - type: bugfix
+        title: Prevent DNS recursion
+        body: >-
+          The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client).
+        docs: reference/routing#dns-recursion
+      - type: bugfix
+        title: Prevent network recursion
+        body: >-
+          The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+          cluster runs in a docker-container on the client).
+        docs: reference/routing#connect-recursion
+      - type: bugfix
+        title: Traffic Manager deadlock fix
+        body: >-
+          The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic agent arrives.
+      - type: bugfix
+        title: webhookRegistry config propagation
+        body: >-
+          The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+        docs: reference/config#images
+      - type: bugfix
+        title: Login refreshes expired tokens
+        body: >-
+          When a user's token has expired, telepresence login
+          will prompt the user to log in again to get a new token. Previously,
+          the user had to telepresence quit and telepresence logout
+          to get a new token.
+        docs: https://github.com/telepresenceio/telepresence/issues/2062
+  - version: 2.4.6
+    date: "2021-11-02"
+    notes:
+      - type: feature
+        title: Manually injecting Traffic Agent
+        body: >-
+          Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+          Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+        docs: reference/intercepts/manual-agent
+
+      - type: feature
+        title: Telepresence CLI released for Apple silicon
+        body: >-
+          Telepresence is now built and released for Apple silicon.
+        docs: install/?os=macos
+
+      - type: change
+        title: Telepresence help text now links to telepresence.io
+        body: >-
+          We now include a link to our documentation when you run telepresence --help. This will make it easier
+          for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+        image: telepresence-2.4.6-help-text.png
+
+      - type: bugfix
+        title: Fixed bug when API server is inside CIDR range of pods/services
+        body: >-
+          If the API server for your kubernetes cluster had an IP that fell within the
+          subnet generated from pods/services in a kubernetes cluster, it would proxy traffic
+          to the API server, which would result in hanging or a failed connection. We now ensure
+          that the API server is explicitly not proxied.
+  - version: 2.4.5
+    date: "2021-10-15"
+    notes:
+      - type: feature
+        title: Get pod yaml with gather-logs command
+        body: >-
+          Adding the flag --get-pod-yaml to your request will get the
+          pod yaml manifest for all kubernetes components you are getting logs for
+          (traffic-manager and/or pods containing a
+          traffic-agent container). This flag is set to false
+          by default.
+        docs: reference/client
+        image: telepresence-2.4.5-pod-yaml.png
+
+      - type: feature
+        title: Anonymize pod name + namespace when using gather-logs command
+        body: >-
+          Adding the flag --anonymize to your command will
+          anonymize your pod names + namespaces in the output file. We replace the
+          sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+          relationships between the objects without exposing the real names of your
+          objects. This flag is set to false by default.
+ docs: reference/client
+ image: telepresence-2.4.5-logs-anonymize.png
+
+ - type: feature
+ title: Support for intercepting headless services
+ body: >-
+ Intercepting headless services is now officially supported. You can request a
+ headless service on whatever port it exposes and get a response from the
+ intercept. This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence Kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent can fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a GitHub issue. For more
+ information on usage, run telepresence gather-logs --help.
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets.
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It will now error correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the cluster environment is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ it now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were not being written to log files. This has
+ been fixed, so logs for Windows should be at parity with the ones on macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many Kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls could
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that won't occur.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher-fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ We've now set the default for all components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) to use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the traffic-manager. Additionally, we
+ now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+ docs: install/helm
+
+ - type: change
+ title: Traffic Manager installed via helm
+ body: >-
+ The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+ This change is transparent to the user.
+ A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+ docs: reference/config#timeouts
+
+ - type: change
+ title: traffic-manager gets cluster ID itself instead of via environment variable
+ body: >-
+ The traffic-manager used to get the cluster ID as an environment variable when running
+ telepresence connect or via adding the value in the helm chart. This was
+ clunky, so now the traffic-manager gets the value itself as long as it has permissions
+ to "get" and "list" namespaces (this has been updated in the helm chart).
+ docs: install/helm
+
+ - type: bugfix
+ title: Telepresence now mounts all directories from /var/run/secrets
+ body: >-
+ In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+ We now mount *all* directories in /var/run/secrets, which, for example, includes
+ directories like eks.amazonaws.com used for IRSA tokens.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Max gRPC receive size correctly propagates to all gRPC servers
+ body: >-
+ This fixes a bug where the max gRPC receive size was only propagated to some of the
+ gRPC servers, causing failures when the message size was over the default.
+ docs: reference/config/#grpc
+
+ - type: bugfix
+ title: Updated our Homebrew packaging to run manually
+ body: >-
+ We made some updates to our script that packages Telepresence for Homebrew so that it
+ can be run manually. This will enable maintainers of Telepresence to run the script manually
+ should we ever need to roll back a release and have latest point to an older version.
+ docs: install/
+
+ - type: bugfix
+ title: Telepresence uses namespace from kubeconfig context on each call
+ body: >-
+ In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+ for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+ when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+ Telepresence will now do that and use the namespace designated by the context on each call.
+
+ - type: bugfix
+ title: Idle outbound TCP connections timeout increased to 7200 seconds
+ body: >-
+ Some users were noticing that their intercepts would start failing after 60 seconds.
+ This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+ now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+ - type: bugfix
+ title: Telepresence will automatically remove a socket upon ungraceful termination
+ body: >-
+ When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+ that the process has terminated ungracefully", implying that they should remove the socket. We've
+ now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+ - type: bugfix
+ title: Fixed user daemon deadlock
+ body: >-
+ Remedied a situation where the user daemon could hang when a user was logged in.
+
+ - type: bugfix
+ title: Fixed agentImage config setting
+ body: >-
+ The config setting images.agentImage is no longer required to contain the repository, and it
+ will use the value at images.repository.
+ docs: reference/config/#images
+
+ - version: 2.4.0
+ date: "2021-08-04"
+ notes:
+ - type: feature
+ title: Windows Client Developer Preview
+ body: >-
+ There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+ All the same features supported by the macOS and Linux clients are available on Windows.
+ image: telepresence-2.4.0-windows.png
+ docs: install
+
+ - type: feature
+ title: CLI raises helpful messages from Ambassador Cloud
+ body: >-
+ Telepresence can now receive messages from Ambassador Cloud and raise
+ them to the user when they perform certain commands. This enables us
+ to send you messages that may enhance your Telepresence experience when
+ using certain commands. Frequency of messages can be configured in your
+ config.yml.
+ image: telepresence-2.4.0-cloud-messages.png
+ docs: reference/config#cloud
+
+ - type: bugfix
+ title: Improved stability of systemd-resolved-based DNS
+ body: >-
+ When initializing the systemd-resolved-based DNS, the routing domain
+ is set to improve stability in non-standard configurations. This also enables the
+ overriding resolver to do a proper takeover once the DNS service ends.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Fixed an edge case when intercepting a container with multiple ports
+ body: >-
+ When specifying a port of a container to intercept, if there was a container in the
+ pod without ports, it was automatically selected. This has been fixed so we'll only
+ choose the container with "no ports" if there's no container that explicitly matches
+ the port used in your intercept.
+ docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+ - type: bugfix
+ title: $(NAME) references in agent's environments are now interpolated correctly.
+ body: >-
+ If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+ would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+ - type: bugfix
+ title: Telepresence no longer prints INFO message when there is no config.yml
+ body: >-
+ Fixed a regression that printed an INFO message to the terminal when there wasn't a
+ config.yml present. The config is optional, so this message has been
+ removed.
+ docs: reference/config
+
+ - type: bugfix
+ title: Telepresence no longer panics when using --http-match
+ body: >-
+ Fixed a bug where Telepresence would panic if the value passed to --http-match
+ didn't contain an equal sign. The correct syntax is shown in the --help
+ string and looks like --http-match=HTTP2_HEADER=REGEX.
+ docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+ - type: bugfix
+ title: Improved subnet updates
+ body: >-
+ The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+ the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+ client and the traffic-manager. This has been fixed so we only send updates when the subnets
+ themselves actually change.
+ docs: reference/routing/#subnets
+
+ - version: 2.3.7
+ date: "2021-07-23"
+ notes:
+ - type: feature
+ title: Also-proxy in telepresence status
+ body: >-
+ An also-proxy entry in the Kubernetes cluster config will
+ show up in the output of the telepresence status command.
+ docs: reference/config
+
+ - type: feature
+ title: Non-interactive telepresence login
+ body: >-
+ telepresence login now has an
+ --apikey=KEY flag that allows for
+ non-interactive logins. This is useful for headless
+ environments where launching a web-browser is impossible,
+ such as cloud shells, Docker containers, or CI.
+ image: telepresence-2.3.7-newkey.png
+ docs: reference/client/login/
+
+ - type: bugfix
+ title: Mutating webhook injector correctly hides named ports for probes.
+ body: >-
+ The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: telepresence current-cluster-id crash fixed
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+ to crash.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Better UX around intercepts with no local process running
+ body: >-
+ Requests would hang indefinitely when initiating an intercept before you
+ had a local process running. This has been fixed and will result in an
+ Empty reply from server until you start a local process.
+ docs: reference/intercepts
+
+ - type: bugfix
+ title: API keys no longer show as "no description"
+ body: >-
+ New API keys generated internally for communication with
+ Ambassador Cloud no longer show up as "no description" in
+ the Ambassador Cloud web UI. Existing API keys generated by
+ older versions of Telepresence will still show up this way.
+ image: telepresence-2.3.7-keydesc.png
+
+ - type: bugfix
+ title: Fix corruption of user-info.json
+ body: >-
+ Fixed a race condition where logging in and logging out
+ rapidly could cause memory corruption or corruption of the
+ user-info.json cache file used when
+ authenticating with Ambassador Cloud.
+
+ - type: bugfix
+ title: Improved DNS resolver for systemd-resolved
+ body:
+ Telepresence's systemd-resolved-based DNS resolver is now more
+ stable, and if it fails to initialize, the overriding resolver
+ will no longer cause general DNS lookup failures when telepresence defaults to
+ using it.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Faster telepresence list command
+ body:
+ The performance of telepresence list has been increased
+ significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client
+
+ - version: 2.3.6
+ date: "2021-07-20"
+ notes:
+ - type: bugfix
+ title: Fix subnet discovery
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the Traffic
+ Manager's RoleBinding did not correctly reference
+ the traffic-manager Role, preventing
+ subnet discovery from working correctly.
+ docs: reference/rbac/
+
+ - type: bugfix
+ title: Fix root-user configuration loading
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the root daemon
+ did not correctly read the configuration file, ignoring the
+ user's configured log levels and timeouts.
+ docs: reference/config/
+
+ - type: bugfix
+ title: Fix a user daemon crash
+ body: >-
+ Fixed an issue that could cause the user daemon to crash
+ during shutdown, as during shutdown it unconditionally
+ attempted to close a channel even though the channel might
+ already be closed.
+
+ - version: 2.3.5
+ date: "2021-07-15"
+ notes:
+ - type: feature
+ title: traffic-manager in multiple namespaces
+ body: >-
+ We now support installing multiple traffic managers in the same cluster.
+ This will allow operators to install deployments of Telepresence that are
+ limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+ docs: install/helm
+ - type: feature
+ title: No more dependence on kubectl
+ body: >-
+ Telepresence no longer depends on having an external
+ kubectl binary, which might not be present for
+ OpenShift users (who have oc instead of
+ kubectl).
+ - type: feature
+ title: Max gRPC receive size now configurable
+ body: >-
+ The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+ image: ./telepresence-2.3.5-grpc-max-receive-size.png
+ docs: reference/config/#grpc
+ - type: feature
+ title: CLI can be used in air-gapped environments
+ body: >-
+ While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+ we've added an option users can add to their config.yml to ensure the CLI acts like it
+ is in an air-gapped environment. Air-gapped environments require a manually installed
+ license.
+ docs: reference/cluster-config/#air-gapped-cluster
+ image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+ date: "2021-07-09"
+ notes:
+ - type: bugfix
+ title: Improved IP log statements
+ body: >-
+ Some log statements printed incorrect characters where they should have printed IP addresses.
+ This has been resolved to include more accurate and useful logging.
+ docs: reference/config/#log-levels
+ image: ./telepresence-2.3.4-ip-error.png
+ - type: bugfix
+ title: Improved messaging when multiple services match a workload
+ body: >-
+ If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which
+ service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png
+ docs: reference/intercepts
+ - type: bugfix
+ title: Traffic-manager creates services in its own namespace to determine subnet
+ body: >-
+ Telepresence will now determine the service subnet by creating a dummy-service in its own
+ namespace, instead of the default namespace, which was causing RBAC permissions issues in
+ some clusters.
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Telepresence connect respects pre-existing clusterrole
+ body: >-
+ When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+ cluster, Telepresence will no longer try to update the clusterrole.
+ docs: reference/rbac
+ - type: bugfix
+ title: Helm Chart fixed for clientRbac.namespaced
+ body: >-
+ The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+ docs: install/helm
+ - version: 2.3.3
+ date: "2021-07-07"
+ notes:
+ - type: feature
+ title: Traffic Manager Helm Chart
+ body: >-
+ Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the
+ server-side components of Telepresence separately from the CLI (which
+ in turn allows for better separation of permissions).
+ image: ./telepresence-2.3.3-helm.png
+ docs: install/helm/
+ - type: feature
+ title: Traffic-manager in custom namespace
+ body: >-
+ As the traffic-manager can now be installed in any
+ namespace via Helm, Telepresence can now be configured to look for the
+ Traffic Manager in a namespace other than ambassador.
+ This can be configured on a per-cluster basis.
+ image: ./telepresence-2.3.3-namespace-config.png
+ docs: reference/config
+ - type: feature
+ title: Intercept --to-pod
+ body: >-
+ telepresence intercept now supports a
+ --to-pod flag that can be used to port-forward sidecars'
+ ports from an intercepted pod.
+ image: ./telepresence-2.3.3-to-pod.png
+ docs: reference/intercepts
+ - type: change
+ title: Change in migration from edgectl
+ body: >-
+ Telepresence no longer automatically shuts down the old
+ api_version=1 edgectl daemon. If migrating
+ from such an old version of edgectl you must now manually
+ shut down the edgectl daemon before running Telepresence.
+ This was already the case when migrating from the newer
+ api_version=2 edgectl.
+ - type: bugfix
+ title: Fixed error during shutdown
+ body: >-
+ The root daemon no longer terminates when the user daemon disconnects
+ from its gRPC streams, and instead waits to be terminated by the CLI.
+ Previously, the early termination could cause problems with things not
+ being cleaned up correctly.
+ - type: bugfix
+ title: Intercepts will survive deletion of intercepted pod
+ body: >-
+ An intercept will survive deletion of the intercepted pod provided
+ that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+ date: "2021-06-18"
+ notes:
+ # Headliners
+ - type: feature
+ title: Service Port Annotation
+ body: >-
+ The mutator webhook for injecting traffic-agents now
+ recognizes a
+ telepresence.getambassador.io/inject-service-port
+ annotation to specify which port to intercept; bringing the
+ functionality of the --port flag to users who
+ use the mutator webhook in order to control Telepresence via
+ GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png
+ docs: reference/cluster-config#service-port-annotation
+ - type: feature
+ title: Outbound Connections
+ body: >-
+ Outbound connections are now routed through the intercepted
+ Pods, which means that the connections originate from that
+ Pod from the cluster's perspective. This allows service
+ meshes to correctly identify the traffic.
+ docs: reference/routing/#outbound
+ - type: change
+ title: Inbound Connections
+ body: >-
+ Inbound connections from an intercepted agent are now
+ tunneled to the manager over the existing gRPC connection,
+ instead of establishing a new connection to the manager for
+ each inbound connection. This avoids interference from
+ certain service mesh configurations.
+ docs: reference/routing/#inbound
+
+ # RBAC changes
+ - type: change
+ title: Traffic Manager needs new RBAC permissions
+ body: >-
+ The Traffic Manager requires RBAC
+ permissions to list Nodes, Pods, and to create a dummy
+ Service in the manager's namespace.
+ docs: reference/routing/#subnets
+ - type: change
+ title: Reduced developer RBAC requirements
+ body: >-
+ The on-laptop client no longer requires RBAC permissions to list the Nodes
+ in the cluster or to create Services, as that functionality
+ has been moved to the Traffic Manager.
+
+ # Bugfixes
+ - type: bugfix
+ title: Able to detect subnets
+ body: >-
+ Telepresence will now detect the Pod CIDR ranges even if
+ they are not listed in the Nodes.
+ image: ./telepresence-2.3.2-subnets.png
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Dynamic IP ranges
+ body: >-
+ The list of cluster subnets that the virtual network
+ interface will route is now configured dynamically and will
+ follow changes in the cluster.
+ - type: bugfix
+ title: No duplicate subnets
+ body: >-
+ Subnets fully covered by other subnets are now pruned
+ internally and thus never superfluously added to the
+ laptop's routing table.
+ docs: reference/routing/#subnets
+ - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+ title: Change in default timeout
+ body: >-
+ The trafficManagerAPI timeout default has
+ changed from 5 seconds to 15 seconds, in order to facilitate
+ the extended time it takes for the traffic-manager to do its
+ initial discovery of cluster info as a result of the above
+ bugfixes.
+ - type: bugfix
+ title: Removal of DNS config files on macOS
+ body: >-
+ On macOS, files generated under
+ /etc/resolver/ as the result of using
+ include-suffixes in the cluster config are now
+ properly removed on quit.
+ docs: reference/routing/#macos-resolver
+
+ - type: bugfix
+ title: Large file transfers
+ body: >-
+ Telepresence no longer erroneously terminates connections
+ early when sending a large HTTP response from an intercepted
+ service.
+ - type: bugfix
+ title: Race condition in shutdown
+ body: >-
+ When shutting down the user-daemon or root-daemon on the
+ laptop, telepresence quit and related commands
+ no longer return early before everything is fully shut down.
+ It can now be counted on that, by the time the command has
+ returned, all of the side effects on the laptop have
+ been cleaned up.
+ - version: 2.3.1
+ date: "2021-06-14"
+ notes:
+ - title: DNS Resolver Configuration
+ body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local and remote resolvers to use and which suffixes should be ignored or included. These can be configured on a per-cluster basis."
+ image: ./telepresence-2.3.1-dns.png
+ docs: reference/config
+ type: feature
+ - title: AlsoProxy Configuration
+ body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+ image: ./telepresence-2.3.1-alsoProxy.png
+ docs: reference/config
+ type: feature
+ - title: Mutating Webhook for Injecting Traffic Agents
+ body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher level kubernetes objects matching what is stored in git.
For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+ image: ./telepresence-2.3.1-inject.png
+ docs: reference/rbac
+ type: feature
+ - title: Traffic Manager Connect Timeout
+ body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+ image: ./telepresence-2.3.1-trafficmanagerconnect.png
+ docs: reference/config
+ type: change
+ - title: Fix for large file transfers
+ body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+ image: ./telepresence-2.3.1-large-file-transfer.png
+ docs: reference/tun-device
+ type: bugfix
+ - title: Brew Formula Changed
+ body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+ image: ./telepresence-2.3.1-brew.png
+ docs: install/
+ type: change
+ - version: 2.3.0
+ date: "2021-06-01"
+ notes:
+ - title: Brew install Telepresence
+ body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+ image: ./telepresence-2.3.0-homebrew.png
+ docs: install/
+ type: feature
+ - title: TCP and UDP routing via Virtual Network Interface
+ body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+ image: ./tunnel.jpg
+ docs: reference/tun-device
+ type: feature
+ - title: SSH is no longer used
+ body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has sshd installed. Volume mounts are still established using sshfs but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+ image: ./no-ssh.png
+ docs: reference/tun-device/#no-ssh-required
+ type: change
+ - title: Running in a Docker container
+ body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+ image: ./run-tp-in-docker.png
+ docs: reference/inside-container
+ type: feature
+ - title: Configurable Log Levels
+ body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png
+ docs: reference/config/#log-levels
+ type: feature
+ - version: 2.2.2
+ date: "2021-05-17"
+ notes:
+ - title: Legacy Telepresence subcommands
+ body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+ image: ./telepresence-2.2.png
+ docs: install/migrate-from-legacy/
+ type: feature
diff --git a/docs/telepresence-oss/latest/troubleshooting/index.md b/docs/telepresence-oss/latest/troubleshooting/index.md
new file mode 100644
index 000000000..db370a400
--- /dev/null
+++ b/docs/telepresence-oss/latest/troubleshooting/index.md
@@ -0,0 +1,255 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+## Connecting to a cluster via VPN doesn't work.
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more on how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted vm](../howtos/cluster-in-vm) page for more details.
+
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer.
+
+## Volume mounts are not working on Linux
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts.
+
+After you've installed `sshfs`, if mounts still aren't working:
+1. Uncomment `user_allow_other` in `/etc/fuse.conf`
+2. Add your user to the "fuse" group with: `sudo usermod -a -G fuse <your-user>`
+3. Restart your computer after uncommenting `user_allow_other`
+
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+
In order to facilitate such investigations, telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug and then run `gather-traces` immediately after:

+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g. Jaeger or Zipkin specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export.
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice that the pod still shows 1/1 containers ready: no traffic-agent sidecar was injected.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
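+
+As a minimal sketch, moving the injector to another port via Helm values might look like the following (the port number `8443` here is only an illustration; pick one that your firewall rules actually allow):
+
+```yaml
+# values.yaml -- sketch assuming port 8443 is reachable from the master nodes
+agentInjector:
+  webhook:
+    port: 8443
+```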
+
Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for information to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., containerPort: http becomes containerPort: tm-http).
+2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
+
+## Error connecting to GKE or EKS cluster
+
+GKE and EKS require a plugin that utilizes their respective IAM providers.
+You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins
+for Telepresence to connect to your cluster.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a `too many files open` message in the logs, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
+
+## Connected to cluster via VPN but IPs don't resolve
+
+If `telepresence connect` succeeds, but you find yourself unable to reach services on your cluster, a routing conflict may be to blame. This frequently happens when connecting to a VPN at the same time as Telepresence,
+as VPN clients often add routes that conflict with those added by Telepresence. To debug this, pick an IP address in the cluster and get its route information. In this case, we'll get the route for `100.124.150.45`, and discover
+that it's running through a `tailscale` device.
+
+
+
+```console
+$ route -n get 100.124.150.45
+   route to: 100.64.2.3
+destination: 100.64.0.0
+       mask: 255.192.0.0
+  interface: utun4
+      flags:
+ recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
+       0         0         0         0         0         0      1280         0
+```
+
+Note that on macOS it's difficult to determine what software the name of a virtual interface corresponds to -- `utun4` doesn't indicate that it was created by Tailscale.
+One option is to look at the output of `ifconfig` before and after connecting to your VPN to see if the interface in question is being added upon connection.
+
+
+
+```console
+$ ip route get 100.124.150.45
+100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0
+```
+
+
+
+```console
+$ Find-NetRoute -RemoteIPAddress 100.124.150.45
+
+IPAddress         : 100.102.111.26
+InterfaceIndex    : 29
+InterfaceAlias    : Tailscale
+AddressFamily     : IPv4
+Type              : Unicast
+PrefixLength      : 32
+PrefixOrigin      : Manual
+SuffixOrigin      : Manual
+AddressState      : Preferred
+ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
+PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
+SkipAsSource      : False
+PolicyStore       : ActiveStore
+
+
+Caption           :
+Description       :
+ElementName       :
+InstanceID        : ;::8;;;8
+```
+
+This will tell you which device the traffic is being routed through. As a rule, if the traffic is not being routed by the telepresence device,
+your VPN may need to be reconfigured, as its routing configuration is conflicting with telepresence. One way to determine if this is the case
+is to run `telepresence quit -s`, check the route for an IP in the cluster (see commands above), run `telepresence connect`, and re-run the commands to see if the output changes.
+If it doesn't change, that means telepresence is unable to override your VPN routes, and your VPN may need to be reconfigured. Talk to your network admins
+to configure it such that clients do not add routes that conflict with the pod and service CIDRs of the clusters. How this is done will
+vary depending on the VPN provider.
+Future versions of Telepresence will be smarter about informing you of such conflicts upon connection.
diff --git a/docs/telepresence-oss/latest/versions.yml b/docs/telepresence-oss/latest/versions.yml
new file mode 100644
index 000000000..6436f95da
--- /dev/null
+++ b/docs/telepresence-oss/latest/versions.yml
@@ -0,0 +1,5 @@
+version: "2.15.1"
+dlVersion: "v2.15.1"
+docsVersion: "2.15"
+branch: release/v2
+productName: "Telepresence OSS"
diff --git a/docs/telepresence/2.0 b/docs/telepresence/2.0
deleted file mode 120000
index 979d2e8b1..000000000
--- a/docs/telepresence/2.0
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.0
\ No newline at end of file
diff --git a/docs/telepresence/2.0/concepts/context-prop.md b/docs/telepresence/2.0/concepts/context-prop.md
new file mode 100644
index 000000000..86cbe2951
--- /dev/null
+++ b/docs/telepresence/2.0/concepts/context-prop.md
@@ -0,0 +1,24 @@
+---
+description: "Telepresence uses context propagation to intelligently route requests, transferring request metadata across the components of a distributed system."
+---
+
+# Context Propagation
+
+Telepresence uses *context propagation* to intelligently route requests to the appropriate destination. Context propagation is the transfer of request metadata across the services and remote processes of a distributed system.
+
+This metadata is the *context* that is transferred across the system services. It commonly takes the form of HTTP headers, such that context propagation is usually referred to as header propagation.
A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+The metadata *propagation* refers to any service or other middleware not stripping away the headers. Propagation facilitates the movement of the injected contexts between other downstream services and processes.
+
+A common application for context propagation is *distributed tracing*. This is a technique for troubleshooting and profiling distributed microservices applications. In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on routes the requests follow and performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+Similarly, Telepresence also uses custom headers and header propagation. Our use case, however, is controllable intercepts and preview URLs instead of tracing. The headers facilitate the smart routing of requests either to live services in the cluster or services running on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other info about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+
+
+
+
diff --git a/docs/telepresence/2.0/concepts/devloop.md b/docs/telepresence/2.0/concepts/devloop.md
new file mode 100644
index 000000000..292ed13a5
--- /dev/null
+++ b/docs/telepresence/2.0/concepts/devloop.md
@@ -0,0 +1,21 @@
+---
+description: "Inner and outer dev loops describe the processes developers repeat to iterate on code. As these loops get more complex, productivity decreases."
+---
+
+# Inner and Outer Dev Loops
+
+Cloud native technologies also fundamentally altered the developer experience. Not only are engineers now expected to design and build distributed service-based applications, but their entire development loop has been disrupted. No longer can developers rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner developer loop. They now have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it has a large impact on development time.
+
+If a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes -- 3 coding, 1 building i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code -- they can expect to make ~70 iterations of their code per day.
Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![Traditional inner dev loop](../../images/trad-inner-dev-loop.png)
+
+If the build time is increased to 5 minutes -- not atypical with a standard container build, registry upload, and deploy -- then each iteration takes roughly 9 minutes (3 coding, 5 building, 1 testing/inspecting), and the number of possible development iterations per day drops to ~40. At the extreme that’s roughly a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+![Container inner dev loop](../../images/container-inner-dev-loop.png)
+
+Many development teams began using custom proxies to either automatically and continually sync their local development code base with a remote surrogate (enabling “live reload” in a remote cluster), or route all remote service traffic to their local services for testing. The former approach had limited value for compiled languages, and the latter often did not support collaboration within teams where multiple users want to work on the same services.
+
+In addition to the challenges with the inner development loop, the changing outer development loop also caused issues. Over the past 20 years, end users and customers have become more demanding, but also less sure of their requirements. Following the lead of disruptive organizations like Netflix, Spotify, and Google, software delivery teams now need to be capable of rapidly delivering experiments into production. Unit, integration, and component testing is still vitally important, but modern application platforms must also support the incremental release of functionality and applications to end users in order to allow testing in production.
+
+The traditional outer development loop for software engineers of code merge, code review, build artifact, test execution, and deploy has now evolved. A typical modern outer loop now consists of code merge, automated code review, build artifact and container, test execution, deployment, controlled (canary) release, and observation of results. If a developer doesn’t have access to self-service configuration of the release, the time taken for this outer loop increases by at least an order of magnitude, e.g. 1 minute to deploy an updated canary release routing configuration versus 10 minutes to raise a ticket for a route to be modified via the platform team.
diff --git a/docs/telepresence/2.0/doc-links.yml b/docs/telepresence/2.0/doc-links.yml
new file mode 100644
index 000000000..bea547771
--- /dev/null
+++ b/docs/telepresence/2.0/doc-links.yml
@@ -0,0 +1,36 @@
+ - title: Quick Start
+ link: quick-start
+ - title: How-to Guides
+ items:
+ - title: Intercept a Service
+ link: howtos/intercepts
+ - title: Collaborating with Preview URLs
+ link: howtos/preview-urls
+ - title: Create a Demo Cluster
+ link: howtos/democluster
+ - title: Outbound Sessions
+ link: howtos/outbound
+ - title: Upgrading and Previous Versions
+ link: howtos/upgrading
+ - title: Core Concepts
+ items:
+ - title: Inner and Outer Dev Loop
+ link: concepts/devloop
+ - title: Context Propagation
+ link: concepts/context-prop
+ - title: Technical Reference
+ items:
+ - title: Architecture
+ link: reference/architecture
+ - title: Client Reference
+ link: reference/client
+ - title: Environment Variables
+ link: reference/environment
+ - title: Volume Mounts
+ link: reference/volume
+ - title: DNS Resolution
+ link: reference/dns
+ - title: FAQs
+ link: faqs
+ - title: Troubleshooting
+ link: troubleshooting
diff --git a/docs/telepresence/2.0/faqs.md b/docs/telepresence/2.0/faqs.md
new file mode 100644
index 000000000..412640463
--- /dev/null
+++ b/docs/telepresence/2.0/faqs.md
@@ -0,0 +1,104 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes deployment, and code and debug your associated service locally using your favorite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](../../../../feedback) to request it.
+ +**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, please see [this document](../reference/environment/) for more information. + +**When using Telepresence to intercept a pod, are the associated pod volume mounts also proxied and shared with my local machine?** + +Yes, please see [this doc on using volume mounts](../reference/volume/). + +**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL, and RabbitMQ, via their service name. + +**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database. + +**What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +On first use, Telepresence will attempt to discover your ingress configuration, present its best guess, and ask you to confirm or update it. + +**Will Telepresence be able to intercept deployments running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to be able to download the Traffic Manager and Traffic Agent containers that are deployed when connecting with Telepresence. + +**Why does running Telepresence require sudo access for the local daemon?** + +The local daemon needs sudo to create iptables rules. Telepresence uses this to create outbound access from the laptop to the cluster. + +On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access. + +**What components get installed in the cluster when running Telepresence?** + +A single Traffic Manager service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a deployment is intercepted, all pods associated with this deployment will be restarted with the Traffic Agent automatically injected.
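One way to see these components for yourself, sketched under the assumption that the defaults described above apply and that the injected container is literally named `traffic-agent`:

```
# The Traffic Manager lives in the ambassador namespace:
kubectl get svc,deploy traffic-manager -n ambassador

# List each pod with its containers; intercepted pods show an extra traffic-agent:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}' | grep traffic-agent
```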
+ +**How can I remove all of the Telepresence components installed within my cluster?** + +You can run the command `telepresence uninstall --everything` to remove the Traffic Manager service installed in the cluster and the Traffic Agent containers injected into each pod being intercepted. + +Running this command will also stop the local daemon. + +**What language is Telepresence written in?** + +All Telepresence components, both the local application and the cluster-side components, are written in Go. + +**How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established via the standard `kubectl` mechanisms and SSH tunneling. + +**What identity providers are supported for authenticating to view a preview URL?** + +Currently, GitHub is used to authenticate a user of Telepresence (triggered via the `telepresence login` command) and any viewers of a preview URL. + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](../../../../feedback) which providers are the most important to you and your team in order for us to prioritize those. + +**Is Telepresence open source?** + +Telepresence will be open source soon; in the meantime, it is free to download. We prioritized releasing the binary as soon as possible for community feedback, but are actively working on the open-sourcing logistics. + +**How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](../../../../feedback), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.0/howtos/democluster.md b/docs/telepresence/2.0/howtos/democluster.md new file mode 100644 index 000000000..7e56aa420 --- /dev/null +++ b/docs/telepresence/2.0/howtos/democluster.md @@ -0,0 +1,17 @@ +# Create a Demo Cluster + +Ambassador has free demo Kubernetes clusters for you to use to test out Telepresence. + +The cluster creation process will provide you with a `config` file to use with `kubectl`. If you need to install `kubectl`, please see [the Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-kubectl/). + +After creation, the cluster will remain available for three hours, plenty of time for you to finish one of our Telepresence [quick start guides](../../quick-start/). + +## Creating a Cluster + +1. Log in to [Ambassador Cloud](http://app.getambassador.io/cloud/) using your GitHub account. + +1. Click the option for **Use Our Demo Cluster**. + +1. Click **Generate Demo Cluster** in step 1 and follow the instructions to configure your `kubectl`. + +1. Begin the [quick start guide](../../quick-start/qs-node/). diff --git a/docs/telepresence/2.0/howtos/intercepts.md b/docs/telepresence/2.0/howtos/intercepts.md new file mode 100644 index 000000000..2556854d2 --- /dev/null +++ b/docs/telepresence/2.0/howtos/intercepts.md @@ -0,0 +1,178 @@ +--- +description: "Telepresence helps you develop Kubernetes services locally without running dependent services or redeploying code updates to your cluster on every change." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; + +# Intercept a Service + +Intercepts enable you to test and debug services locally without needing to run dependent services or redeploy code updates to your cluster on every change.
A typical workflow would be to run the service you wish to work on locally, then start an intercept. Changes to the local code can then be tested immediately alongside other services running in the cluster. + +When starting an intercept, Telepresence will create a preview URL. When visiting the preview URL, your request is proxied to your ingress with a special header set. When traffic within the cluster requests the service you are intercepting, the [Traffic Manager](../../reference/architecture) will proxy that traffic to your laptop. Other traffic entering your ingress will use the service running in the cluster as normal. + +Preview URLs are all managed through Ambassador Cloud. You must run `telepresence login` to access Ambassador Cloud and the preview URL dashboard. From the dashboard you can see all your active intercepts, delete active intercepts, and change them between private and public for collaboration. Private preview URLs can be accessed by anyone else in the GitHub organization you select when logging in. Public URLs can be accessed by anyone who has the link. + +While preview URLs selectively proxy traffic to your laptop, you can also run an [intercept without creating a preview URL](#creating-an-intercept-without-a-preview-url), which will proxy all traffic to the service. + +For a detailed walkthrough on creating intercepts using our sample app, follow the quick start guide. + +## Creating an Intercept + +The following quick overview on creating an intercept assumes you have a deployment and service accessible publicly by an ingress controller and that you can run a copy of that service on your laptop. + +1. Install Telepresence if needed. + + + + + ```shell + # 1. Download the latest binary (~60 MB): + sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + + # 2. Make the binary executable: + sudo chmod a+x /usr/local/bin/telepresence + ``` + + + + + ```shell + # 1. Download the latest binary (~50 MB): + sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + + # 2. Make the binary executable: + sudo chmod a+x /usr/local/bin/telepresence + ``` + + + + +2. In your terminal run `telepresence login`. This logs you in to Ambassador Cloud, which will track your intercepts and let you share them with colleagues. + + If you are logged in and close the dashboard browser tab, you can quickly reopen it by running `telepresence dashboard`. + +3. Return to your terminal and run `telepresence list`. This will connect to your cluster, install the [Traffic Manager](../../reference/architecture) to proxy the traffic, and return a list of services that Telepresence is able to intercept. + +4. Start the service on your laptop and make a change to the code that will be apparent in the browser when the service runs, such as a text or other UI change. + +5. In a new terminal window, start the intercept. This will proxy requests to the cluster service to your laptop. It will also generate a preview URL, which will let you access your service from the ingress but with requests to the intercepted service proxied to your laptop. + + The intercept requires you to specify the name of the deployment to be intercepted and the port on your laptop to proxy to.
+ + ``` + telepresence intercept ${base_name_of_intercept} --port=${local_TCP_port} + ``` + + The name of the Deployment to be intercepted will default to the base name of the intercept that you give, but you can specify a different deployment name using the `--deployment` flag: + + ``` + telepresence intercept ${base_name_of_intercept} --deployment=${name_of_deployment} --port=${local_TCP_port} + ``` + + Because you're logged in (from `telepresence login` in step 2), it will default to `--preview-url=true`, which will use Ambassador Cloud to create a sharable preview URL for this intercept; if you hadn't been logged in it would have defaulted to `--preview-url=false`. In order to do this, it will prompt you for three options. For the first, `Ingress`, Telepresence tries to intelligently determine the ingress controller deployment and namespace for you. If they are correct, you can hit `enter` to accept the defaults. Set the next two options, `TLS` and `Port`, appropriately based on your ingress service. + + Also because you're logged in, it will default to `--mechanism=http --http-match=auto` (or just `--http-match=auto`; `--http-match` implies `--mechanism=http`); if you hadn't been logged in it would have defaulted to `--mechanism=tcp`. This tells it to do smart intercepts and only intercept a subset of HTTP requests, rather than just intercepting the entirety of all TCP connections. This is important for working in a shared cluster with teammates, and is important for the preview URL functionality. See `telepresence intercept --help` for information on using `--http-match` to customize which requests it intercepts. + +6. Open the preview URL in your browser. The page that loads will proxy requests to the intercepted service to your laptop. You will also see a banner at the bottom of the page informing you that you are viewing a preview URL, along with your name and org name. + +7. Switch back in your browser to the Ambassador Cloud dashboard page and refresh it to see your preview URL listed. Click the box to expand out options where you can disable authentication or remove the preview. + +8. Stop the intercept with the `leave` command and `quit` to stop the daemon. Finally, use `uninstall --everything` to remove the Traffic Manager and Agents from your cluster. + + ``` + telepresence leave ${full_name_of_intercept} + telepresence quit + telepresence uninstall --everything + ``` + + The resulting intercept might have a full name that is different from the base name that you gave to `telepresence intercept` in step 5; see the section [Specifying a namespace for an intercept](#specifying-a-namespace-for-an-intercept) for more information. + +## Specifying a namespace for an intercept + +The namespace of the intercepted deployment is specified using the `--namespace` option. When this option is used, and `--deployment` is not used, then the given name is interpreted as the name of the deployment and the name of the intercept will be constructed from that name and the namespace. + + ``` + telepresence intercept hello --namespace myns --port 9000 + ``` + +This will intercept a Deployment named "hello" and name the intercept "hello-myns". In order to remove the intercept, you will need to run `telepresence leave hello-myns` instead of just `telepresence leave hello`. + +The name of the intercept will be left unchanged if the deployment is specified.
+ + ``` + telepresence intercept myhello --namespace myns --deployment hello --port 9000 + ``` +This will intercept a deployment named "hello" and name the intercept "myhello". + +## Importing Environment Variables + +Telepresence can import the environment variables from the pod that is being intercepted; see [this doc](../../reference/environment/) for more details. + +## Creating an Intercept Without a Preview URL + +If you *are not* logged into Ambassador Cloud, the following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments. + +``` +telepresence intercept ${base_name_of_intercept} --port=${local_TCP_port} +``` + +If you *are* logged into Ambassador Cloud, you must explicitly set the `preview-url` flag to `false`. + +``` +telepresence intercept ${base_name_of_intercept} --port=${local_TCP_port} --preview-url=false +``` + +This will output a header that you can set on your request for that traffic to be intercepted (placeholders in angle brackets stand in for your intercept's actual values): + +``` +$ telepresence intercept <base_name_of_intercept> --port=<local_TCP_port> --preview-url=false +Using deployment <name_of_deployment> +intercepted + Intercept name: <full_name_of_intercept> + State : ACTIVE + Destination : 127.0.0.1:<local_TCP_port> + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("<intercept_id>:<full_name_of_intercept>") +``` + +Run `telepresence status` to see the list of active intercepts. + +``` +$ telepresence status +Connected + Context: default (https://) + Proxy: ON (networking to the cluster is enabled) + Intercepts: 1 total + dataprocessingnodeservice: +``` + +Finally, run `telepresence leave [name of intercept]` to stop the intercept. diff --git a/docs/telepresence/2.0/howtos/outbound.md b/docs/telepresence/2.0/howtos/outbound.md new file mode 100644 index 000000000..3c227705f --- /dev/null +++ b/docs/telepresence/2.0/howtos/outbound.md @@ -0,0 +1,78 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- + +# Outbound Sessions + +While preview URLs are a powerful feature, there are other options to use Telepresence for proxying traffic between your laptop and the cluster. + +## Prerequisites + +It is assumed that you have the demo web app from the [tutorial](../../tutorial/) running in your cluster, but the deployment names used below can be substituted for any other running deployment. + +## Proxying Outbound Traffic + +Connecting to the cluster instead of running an intercept will allow you to access cluster deployments as if your laptop was another pod in the cluster. You will be able to access other Kubernetes services using `<service name>.<namespace>`, for example by curling a service from your terminal. A service running on your laptop will also be able to interact with other services on the cluster by name. + +Connecting to the cluster starts the background daemon on your machine and installs the [Traffic Manager pod](../../reference/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect`. You will be prompted for your password, as the daemon requires root privileges. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.0.0 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground" + Password: + Connecting to traffic manager... + Connected to context default (https://) + ``` + +1.
Run `telepresence status` to confirm that you are connected to your cluster and are proxying traffic to it. + + ``` + $ telepresence status + Connected + Context: default (https://) + Proxy: ON (networking to the cluster is enabled) + Intercepts: 0 total + ``` + +1. Now try to access your service by name with `curl verylargejavaservice.default:8080`. Telepresence will route the request to the cluster, as if your laptop were actually running in the cluster. + + ``` + $ curl verylargejavaservice.default:8080 + + + + Welcome to the EdgyCorp WebApp + ... + ``` + +1. Terminate the client with `telepresence quit` and try to access the service again; it will fail because traffic is no longer being proxied from your laptop. + + ``` + $ telepresence quit + Telepresence Daemon quitting...done + ``` + +## Controlling Outbound Connectivity + +By default, Telepresence will provide access to all Services found in all namespaces in the connected cluster. This might lead to problems if the user does not have access permissions to all namespaces via RBAC. The `--mapped-namespaces <comma-separated list of namespaces>` flag was added to give the user control over exactly which namespaces will be accessible. + +When using this option, it is important to include all namespaces containing services to be accessed and also all namespaces that contain services that those intercepted services might use. + +### Using local-only intercepts + +An intercept with the flag `--local-only` can be used to control outbound connectivity to specific namespaces. + +When developing services that have not yet been deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service is intended to be deployed so that it can access other services in that namespace without using qualified names. + + ``` + $ telepresence intercept [name of intercept] --namespace [name of namespace] --local-only + ``` +The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. The intercept is deactivated just like any other intercept. + + ``` + $ telepresence leave [name of intercept] + ``` +The unqualified name access is now removed, provided that no other active intercept is using the same namespace. diff --git a/docs/telepresence/2.0/howtos/preview-urls.md b/docs/telepresence/2.0/howtos/preview-urls.md new file mode 100644 index 000000000..85d34b03d --- /dev/null +++ b/docs/telepresence/2.0/howtos/preview-urls.md @@ -0,0 +1,27 @@ +--- +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- + +# Collaboration with Preview URLs + +For collaborating on development work, Telepresence generates preview URLs that you can share with your teammates or with collaborators outside of your organization. This opens up new possibilities for real-time development, debugging, and pair programming among increasingly distributed teams. + +Preview URLs are protected behind authentication via Ambassador Cloud, ensuring that only users in your organization can view them. A preview URL can also be set to allow public access, for sharing with outside collaborators. + +## Prerequisites + +You must have an active intercept running to your cluster with the intercepted service running on your laptop. + +Sharing a preview URL with a teammate requires that you both be members of the same GitHub organization. + +> More methods of authentication will be available in future Telepresence releases, allowing for collaboration via other service organizations.
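Putting those prerequisites together, a minimal flow for obtaining a shareable URL looks roughly like this; the service name and port are illustrative:

```
# Authenticate to Ambassador Cloud (GitHub is the current identity provider):
telepresence login

# Intercepting while logged in creates a preview URL by default; the command
# prints a URL of the form https://<generated-subdomain>.preview.edgestack.me
telepresence intercept example-service --port 8080
```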
+ +## Sharing a Preview URL (With Teammates) + +You can collaborate with teammates by sending them your preview URL via Slack or however you communicate. They will be asked to authenticate via GitHub if they are not already logged into Ambassador Cloud. When they visit the preview URL, they will see the intercepted service running on your laptop. Your laptop must be online and running the service for them to see the live intercept. + +## Sharing a Preview URL (With Outside Collaborators) + +To collaborate with someone outside of your GitHub organization, you must go to the Ambassador Cloud dashboard (run `telepresence dashboard` to reopen it), select the preview URL, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop. Your laptop must be online and running the service for them to see the live intercept. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the intercept, either from the dashboard or by running `telepresence leave <name of intercept>`, also removes all access to the preview URL. diff --git a/docs/telepresence/2.0/howtos/upgrading.md b/docs/telepresence/2.0/howtos/upgrading.md new file mode 100644 index 000000000..527efa3af --- /dev/null +++ b/docs/telepresence/2.0/howtos/upgrading.md @@ -0,0 +1,49 @@ +--- +description: "How to upgrade your installation of Telepresence and install previous versions." +--- + +# Upgrading Telepresence + +The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. [Running the same commands used for installation](../../quick-start/) will replace your current binary with the latest version. + +### macOS + +``` +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence \ +-o /usr/local/bin/telepresence && \ +sudo chmod a+x /usr/local/bin/telepresence && \ +telepresence version +``` + +### Linux + +``` +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence \ +-o /usr/local/bin/telepresence && \ +sudo chmod a+x /usr/local/bin/telepresence && \ +telepresence version +``` + +### Upgrading from Telepresence 2.0.1 or older + +The traffic-manager must be uninstalled manually. This can be done using `telepresence uninstall --everything` _before_ the upgrade, or by using `kubectl delete svc,deploy traffic-manager`. + +## Installing Older Versions of Telepresence + +Use the following URLs to install an older version, replacing `x.x.x` with the version you want. + +### macOS +`https://app.getambassador.io/download/tel2/darwin/amd64/x.x.x/telepresence` + +### Linux +`https://app.getambassador.io/download/tel2/linux/amd64/x.x.x/telepresence` +
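For example, pinning a macOS workstation to a specific older release follows the same install pattern as above; the version number below is purely illustrative:

```
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/2.0.2/telepresence \
-o /usr/local/bin/telepresence && \
sudo chmod a+x /usr/local/bin/telepresence && \
telepresence version   # confirm the pinned version is now installed
```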
+ +Use the following URLs to find the current latest version number. + +### macOS +`https://app.getambassador.io/download/tel2/darwin/amd64/stable.txt` + +### Linux +`https://app.getambassador.io/download/tel2/linux/amd64/stable.txt` diff --git a/docs/telepresence/2.0/images/apple.png b/docs/telepresence/2.0/images/apple.png new file mode 100644 index 000000000..8b8277f16 Binary files /dev/null and b/docs/telepresence/2.0/images/apple.png differ diff --git a/docs/telepresence/2.0/images/github-login.png b/docs/telepresence/2.0/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.0/images/github-login.png differ diff --git a/docs/telepresence/2.0/images/linux.png b/docs/telepresence/2.0/images/linux.png new file mode 100644 index 000000000..1832c5940 Binary files /dev/null and b/docs/telepresence/2.0/images/linux.png differ diff --git a/docs/telepresence/2.0/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.0/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..3e87c3ad6 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,129 @@ +import React from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */ +const Box = ({ children, color = 'blue', withConnector = false }) => ( + <> + {withConnector && (
+ +
+ )} +
{children}
+ +); + +const TelepresenceQuickStartLanding = () => ( +
+

+ Telepresence +

+

+ Explore the use cases of Telepresence with a free remote Kubernetes + cluster, or dive right in using your own. +

+ +
+
+
+

+ Use Our Free Demo Cluster +

+

+ See how Telepresence works without having to mess with your + production environments. +

+
+ +

6 minutes

+

Integration Testing

+

+ See how changes to a single service impact your entire application + without having to run your entire app locally. +

+ + GET STARTED{' '} + + +
+ +

5 minutes

+

Fast code changes

+

+ Make changes to your service locally and see the results instantly, + without waiting for containers to build. +

+ + GET STARTED{' '} + + +
+
+
+
+

+ Use Your Cluster +

+

+ Understand how Telepresence fits into your Kubernetes development + workflow. +

+
+ +

10 minutes

+

Intercept your service in your cluster

+

+ Query services only exposed in your cluster's network. Make changes + and see them instantly in your K8s environment. +

+ + GET STARTED{' '} + + +
+
+
+ +
+

Watch the Demo

+
+
+

+ See Telepresence in action in our 3-minute demo + video that you can share with your teammates. +

+
    +
  • Instant feedback loops
  • +
  • Infinite-scale development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+
+
+ +
+
+
+
+); + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.0/quick-start/demo-node.md b/docs/telepresence/2.0/quick-start/demo-node.md new file mode 100644 index 000000000..8c936cc7b --- /dev/null +++ b/docs/telepresence/2.0/quick-start/demo-node.md @@ -0,0 +1,289 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards from './qs-cards' + +# Telepresence Quick Start + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Check out the sample application](#3-check-out-the-sample-application) +* [4. Run a service on your laptop](#4-run-a-service-on-your-laptop) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + Already have a cluster? Switch over to a version of this guide that takes you through the same steps using your own cluster. + + +## 1. Download the demo cluster archive + +1. {window.open('https://app.getambassador.io/cloud/demo-cluster-download-popup', 'ambassador-cloud-demo-cluster', 'menubar=no,location=no,resizable=yes,scrollbars=yes,status=no,width=550,height=750'); e.preventDefault(); }} target="_blank">Sign in to Ambassador Cloud to download your demo cluster archive. The archive contains all the tools and configurations you need to complete this guide. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + ``` + +3. The demo cluster we provided already has a demo app running. List the app's services: + `kubectl get services` + + ``` + $ kubectl get services + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.43.0.1 443/TCP 14h + dataprocessingservice ClusterIP 10.43.159.239 3000/TCP 14h + verylargejavaservice ClusterIP 10.43.223.61 8080/TCP 14h + verylargedatastore ClusterIP 10.43.203.19 8080/TCP 14h + ``` + +4. Confirm that the Telepresence CLI is now installed; we expect to see that the daemons are not yet running: +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal. + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires root privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3.
Check out the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + +We'll use a sample app that is already installed in your demo cluster. Let's take a quick look at its architecture before continuing. + +1. Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +2. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at http://verylargejavaservice.default:8080. + +3. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Run a service on your laptop + +Now start up the DataProcessingService service on your laptop. This version of the code has the UI color set to blue instead of green. + +1. **In a new terminal window**, go to the demo application directory in the extracted archive folder: + `cd edgey-corp-nodejs/DataProcessingService` + +2. Start the application: + `npm start` + + ``` + $ npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + +3. **Back in your previous terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + + Didn't work? Make sure you are working in the terminal window where you ran the script, because it sets environment variables to access the demo cluster. Those variables will only apply to that terminal session. + + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + ... + ``` + +2. Go to the frontend service again in your browser at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2.
Now visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. The frontend `verylargejavaservice` is still running on the cluster, but its request to the `DataProcessingService` to retrieve the color to show is being proxied by Telepresence to your laptop. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; log in with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then, when asked for the port, type `8080`; for "use TLS", type `n`. The default for the fourth value is correct, so hit enter to accept it. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: n + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/go.md b/docs/telepresence/2.0/quick-start/go.md new file mode 100644 index 000000000..5be151704 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/go.md @@ -0,0 +1,322 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start; quickly set one up! + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://github.com/pilu/fresh) to support auto reloading of the Go server, which we'll use later. Install it by running: + `go get github.com/pilu/fresh` + Then start the Go server: + `$GOPATH/bin/fresh` + + ``` + $ go get github.com/pilu/fresh + + $ $GOPATH/bin/fresh + + ... + 10:23:41 app | Welcome to the DataProcessingGoService! + ``` + + + Install Go from here and set your GOPATH if needed. + + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Go server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: all connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; log in with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000 --mount=false` + You will be asked for your ingress; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`. + Finally, type `n` for “Use TLS”. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 --mount=false + + Confirm the ingress to use for preview URL access + Ingress service.namespace ? verylargejavaservice.default + Port ? 8080 + Use TLS y/n ? n + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/qs-cards.js b/docs/telepresence/2.0/quick-start/qs-cards.js new file mode 100644 index 000000000..31582355b --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-cards.js @@ -0,0 +1,70 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return (
+ + + + + + Collaborating + + + + Use preview URLs to collaborate with your colleagues and others + outside of your organization. + + + + + + + + Outbound Sessions + + + + While connected to the cluster, your laptop can interact with + services as if it were another pod in the cluster. + + + + + + + + FAQs + + + + Learn more about use cases and the technical implementation of + Telepresence. + + + + +
+ ); +} diff --git a/docs/telepresence/2.0/quick-start/qs-go.md b/docs/telepresence/2.0/quick-start/qs-go.md new file mode 100644 index 000000000..5be151704 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-go.md @@ -0,0 +1,322 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start; quickly set one up! + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server when you change its code later in this guide. Install it by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+   Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using deployment dataprocessingservice
+   intercepted
+   State : ACTIVE
+   Destination : 127.0.0.1:3000
+   Intercepting: all connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend's request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, a push to the registry, and a redeploy. +
+ With Telepresence, these changes happen instantly. +
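+
+If you'd like to confirm the change from the terminal as well, curl the color endpoint again. This is just a quick sanity check, assuming Fresh has finished reloading the server:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```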
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000 --mount=false` + You will be asked for your ingress; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`. + Finally, type `n` for “Use TLS”. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 --mount=false + + Confirm the ingress to use for preview URL access + Ingress service.namespace ? verylargejavaservice.default + Port ? 8080 + Use TLS y/n ? n + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/qs-java.md b/docs/telepresence/2.0/quick-start/qs-java.md new file mode 100644 index 000000000..b7dad8042 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-java.md @@ -0,0 +1,316 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start, quickly set one up!. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+
+
+   ```
+   $ mvn spring-boot:run
+
+   ...
+   g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684)
+
+   ```
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Java server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using deployment dataprocessingservice
+   intercepted
+   State : ACTIVE
+   Destination : 127.0.0.1:3000
+   Intercepting: all connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend's request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop!
+
+
+## 6. Make a code change
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file, then stop and restart your Java server.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, a push to the registry, and a redeploy. +
+ With Telepresence, these changes happen instantly. +
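+
+If you want to double-check the plumbing at this point, the Telepresence CLI can report it. A quick sanity check; the exact output format may vary between versions:
+
+```
+$ telepresence status
+$ telepresence list
+```
+
+`status` shows the daemon's connectivity, and `list` should show the active intercept on dataprocessingservice.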
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000 --mount=false` + You will be asked for your ingress; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`. + Finally, type `n` for “Use TLS”. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 --mount=false + + Confirm the ingress to use for preview URL access + Ingress service.namespace ? verylargejavaservice.default + Port ? 8080 + Use TLS y/n ? n + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/qs-node.md b/docs/telepresence/2.0/quick-start/qs-node.md new file mode 100644 index 000000000..bf5a3fd33 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-node.md @@ -0,0 +1,330 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start, quickly set one up!. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, a push to the registry, and a redeploy. +
+ With Telepresence, these changes happen instantly. +
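+
+Curious what the intercept actually did in the cluster? Telepresence injected a Traffic Agent sidecar container into the service's pod. One way to see it, substituting the real pod name reported by `kubectl get pods`:
+
+```
+$ kubectl get pods
+$ kubectl describe pod <pod-name>
+```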
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/qs-python-fastapi.md b/docs/telepresence/2.0/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..3358aa6bf --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-python-fastapi.md @@ -0,0 +1,307 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start, quickly set one up!. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+`pip install fastapi uvicorn requests && python app.py`
+
+   If `python` and `pip` on your machine point to Python 2, use `pip3 install fastapi uvicorn requests && python3 app.py` instead; FastAPI requires Python 3.
+
+   ```
+   $ pip install fastapi uvicorn requests && python app.py
+
+   Collecting fastapi
+   ...
+   Application startup complete.
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local service is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using deployment dataprocessingservice
+   intercepted
+   State : ACTIVE
+   Destination : 127.0.0.1:3000
+   Intercepting: all connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend's request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, a push to the registry, and a redeploy. +
+ With Telepresence, these changes happen instantly. +
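+
+You don't have to use the browser for this check, either. Because Telepresence routes your outbound traffic to the cluster, the frontend can also be fetched from a terminal by its namespace-qualified name (assuming the sample app was installed in the default namespace):
+
+```
+$ curl verylargejavaservice.default:8080
+```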
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000 --mount=false` + You will be asked for your ingress; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`. + Finally, type `n` for “Use TLS”. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 --mount=false + + Confirm the ingress to use for preview URL access + Ingress service.namespace ? verylargejavaservice.default + Port ? 8080 + Use TLS y/n ? n + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.0/quick-start/qs-python.md b/docs/telepresence/2.0/quick-start/qs-python.md new file mode 100644 index 000000000..952cd4421 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/qs-python.md @@ -0,0 +1,318 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters to use with this quick start, quickly set one up!. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+   ```
+   $ pip install flask requests && python app.py
+
+   Collecting flask
+   ...
+   Welcome to the DataServiceProcessingPythonService!
+   ...
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using deployment dataprocessingservice
+   intercepted
+   State : ACTIVE
+   Destination : 127.0.0.1:3000
+   Intercepting: all connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend's request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, a push to the registry, and a redeploy. +
+ With Telepresence, these changes happen instantly. +
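+
+As a final sanity check, the color endpoint should now report the new value locally, and, because the intercept reroutes all connections for the in-cluster service to your laptop, requests reaching the service inside the cluster should get the same answer:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```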
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your GitHub account and choose your org. + + ``` + $ telepresence login + + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000 --mount=false` + You will be asked for your ingress; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`. + Finally, type `n` for “Use TLS”. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 --mount=false + + Confirm the ingress to use for preview URL access + Ingress service.namespace ? verylargejavaservice.default + Port ? 8080 + Use TLS y/n ? n + Using deployment dataprocessingservice + intercepted + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting: HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.0/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.0/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/2.0/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 
100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/2.0/redirects.yml b/docs/telepresence/2.0/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.0/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.0/reference/architecture.md b/docs/telepresence/2.0/reference/architecture.md new file mode 100644 index 000000000..477399a51 --- /dev/null +++ b/docs/telepresence/2.0/reference/architecture.md @@ -0,0 +1,63 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence Daemon, installs the Traffic Manager in your cluster, authenticates against Ambassador Cloud, and configures all of those elements to communicate with one another.
+
+## Telepresence Daemon
+
+The Telepresence Daemon runs on a developer's workstation and is its main point of communication with the cluster's network. All requests from and to the cluster go through the Daemon, which communicates with the Traffic Manager.
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When Telepresence is run with the `connect`, `intercept`, or `list` command, the Telepresence CLI first checks the cluster for the Traffic Manager deployment and creates it if it is missing.
+
+When an intercept is created with a Preview URL, the Traffic Manager establishes a connection with Ambassador Cloud so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview URL, it forwards the request to the ingress service specified at Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent container is injected into the deployment's pod(s). You can see the Traffic Agent's status by running `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the Traffic Manager so that it gets routed to a developer's workstation, or pass it along to the container in the pod that usually handles requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have accessed them, and deleting them.
+
+# Changes from Service Preview
+
+With Ambassador's previous offering, Service Preview, the Traffic Agent had to be added to a pod manually via an annotation. This is no longer required: the Traffic Agent is injected automatically when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept, as this functionality has moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, the Kubernetes ClusterRoles and ClusterRoleBindings that Service Preview required are no longer needed. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.
diff --git a/docs/telepresence/2.0/reference/client.md b/docs/telepresence/2.0/reference/client.md
new file mode 100644
index 000000000..5ff8e389e
--- /dev/null
+++ b/docs/telepresence/2.0/reference/client.md
@@ -0,0 +1,25 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client Reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
+| `login` | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `preview` | Creates or removes preview domains for existing intercepts |
+| `status` | Shows the current connectivity status |
+| `quit` | Quits the local daemon, stopping all intercepts and outbound traffic to the cluster |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it with the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000` |
+| `leave` | Stops an active intercept, for example: `telepresence leave hello` |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific deployment, the `--all-agents` flag to remove all Traffic Agents from all deployments, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager |
diff --git a/docs/telepresence/2.0/reference/dns.md b/docs/telepresence/2.0/reference/dns.md
new file mode 100644
index 000000000..4f0482c1a
--- /dev/null
+++ b/docs/telepresence/2.0/reference/dns.md
@@ -0,0 +1,66 @@
+# DNS Resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name alone.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is reachable by its short name alone. Without an active intercept, the namespace-qualified DNS name must be used (in the form `.`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://)
+
+$ telepresence list
+
+  verylargejavaservice : ready to intercept (traffic-agent not yet installed)
+  dataprocessingservice: ready to intercept (traffic-agent not yet installed)
+  verylargedatastore   : ready to intercept (traffic-agent not yet installed)
+
+$ curl verylargejavaservice:8080
+
+  curl: (6) Could not resolve host: verylargejavaservice
+
+```
+
+This is expected: without an active intercept in that namespace, Telepresence cannot yet reach the service by its short name.
+
+```
+$ curl verylargejavaservice.default:8080
+
+
+
+
+  Welcome to the EdgyCorp WebApp
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept dataprocessingservice --port 3000
+
+  Using deployment dataprocessingservice
+  intercepted
+    State       : ACTIVE
+    Destination : 127.0.0.1:3000
+    Intercepting: all connections
+
+$ curl verylargejavaservice:8080
+
+
+
+
+  Welcome to the EdgyCorp WebApp
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `.` regardless of intercepts.
diff --git a/docs/telepresence/2.0/reference/environment.md b/docs/telepresence/2.0/reference/environment.md
new file mode 100644
index 000000000..08fa18861
--- /dev/null
+++ b/docs/telepresence/2.0/reference/environment.md
@@ -0,0 +1,28 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment Variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code for the intercepted service running on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept --port --env-file=`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept --port --env-json=`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept --port -- `
+
+   This will run a command locally with the Pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave ` was run). This can be used in conjunction with a local server command, such as `python ` or `node `, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept --port -- /bin/bash`
+
+   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.
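+
+As a quick end-to-end sketch of the first option (the `hello` deployment, port `8000`, and `hello-image:dev` image below are hypothetical placeholders, not part of the sample app):
+
+```
+# Intercept the hello deployment and write its pod environment to a file:
+telepresence intercept hello --port 8000 --env-file=hello.env
+
+# Run a local copy of the service in Docker with that same environment:
+docker run --rm --env-file hello.env -p 8000:8000 hello-image:dev
+
+# When finished, stop the intercept:
+telepresence leave hello
+```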
diff --git a/docs/telepresence/2.0/reference/volume.md b/docs/telepresence/2.0/reference/volume.md
new file mode 100644
index 000000000..4f22ca50e
--- /dev/null
+++ b/docs/telepresence/2.0/reference/volume.md
@@ -0,0 +1,33 @@
+# Volume Mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept --port --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point using the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept --port --mount=true -- /bin/bash
+Using deployment 
+intercepted
+    State       : ACTIVE
+    Destination : 127.0.0.1:
+    Intercepting: all connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/yh/42y5h_7s5992f80sjlv3wlgc0000gn/T/telfs-427288831
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, any paths that the code you run locally (whether from the subshell or from the intercept command) uses to reach the mounted volumes will need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets`. Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets`.
+
+If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable.
diff --git a/docs/telepresence/2.0/troubleshooting/index.md b/docs/telepresence/2.0/troubleshooting/index.md
new file mode 100644
index 000000000..e1ec85d65
--- /dev/null
+++ b/docs/telepresence/2.0/troubleshooting/index.md
@@ -0,0 +1,41 @@
+---
+description: "Troubleshooting issues related to Telepresence."
+---
+# Troubleshooting
+
+## Creating an Intercept Did Not Generate a Preview URL
+
+Preview URLs are only generated when you are logged into Ambassador Cloud, where you can manage all your preview URLs. When not logged in, the intercept will not generate a preview URL and will proxy all traffic. Remove the intercept with `telepresence leave [deployment name]`, run `telepresence login` to log in to Ambassador Cloud, then recreate the intercept. See the [intercepts how-to doc](../howtos/intercepts) for more details.
+
+## Error on Accessing Preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on Accessing Preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
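+
+For example, the fix might look like this (using a hypothetical `web-app` deployment; substitute your own deployment name and port):
+
+```
+# Remove the broken intercept:
+telepresence leave web-app
+
+# Recreate it, this time answering "y" when asked about TLS:
+telepresence intercept web-app --port 8080
+#   ...
+#   Use TLS y/n [n] ? y
+```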
+
+## Your GitHub Organization Isn't Listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a third-party OAuth app. If an org isn't listed during login then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** → **Settings** → **Applications** → **Authorized OAuth Apps** → **Ambassador Labs**. An org owner will see a **Grant** button; anyone who is not an owner will see a **Request** button, which sends an email to the owner. If an access request has been denied in the past, the user will not see the **Request** button; they will have to reach out to the owner directly.
+
+Once access is granted, log out of Ambassador Cloud and log back in; you should then see the GitHub org listed.
+
+The org owner can go to the **GitHub menu** → **Your organizations** → **[org name]** → **Settings** → **Third-party access** to see if Ambassador Labs already has access or to authorize a request for access (only owners will see **Settings** on the org page). Clicking the pencil icon will show the permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or Requesting Access on Initial Login
+
+The first time you log in to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to access your orgs and certain user data.
+
+
+
+Any listed org with a green check has already granted access to Ambassador Labs (you still need to authorize to allow Ambassador Labs to read your user data and org membership).
+
+Any org with a red X requires access to be granted to Ambassador Labs. Owners of the org will see a **Grant** button. Anyone who is not an owner will see a **Request** button. This will send an email to the org owner requesting approval to access the org. If an access request has been denied in the past, the user will not see the **Request** button; they will have to reach out to the owner directly.
+
+Once approval is granted, you will have to log out of Ambassador Cloud then back in to select the org.
+
diff --git a/docs/telepresence/2.0/tutorial.md b/docs/telepresence/2.0/tutorial.md
new file mode 100644
index 000000000..36135738b
--- /dev/null
+++ b/docs/telepresence/2.0/tutorial.md
@@ -0,0 +1,171 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence Quick Start
+
+In this guide you will explore some of the key features of Telepresence. First, you will install the Telepresence CLI and set up a test cluster with a demo web app. Then, you will run one of the app's services on your laptop, using Telepresence to intercept requests to the service on the cluster and see your changes live via a preview URL.
+
+## Prerequisites
+
+It is recommended to use an empty development cluster for this guide. You must have access via RBAC to create and update deployments and services in the cluster. You must also have [Node.js installed](https://nodejs.org/en/download/package-manager/) on your laptop to run the demo app code.
+
+Finally, you will need the Telepresence CLI.
Run the commands for your OS to install it and log in to Ambassador Cloud in your browser. Follow the prompts to log in with GitHub, then select your organization. You will be redirected to the dashboard; later you will manage your preview URLs here.
+
+### macOS
+
+```
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence \
+-o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# 3. Login with the CLI:
+telepresence login
+```
+If you receive an error saying the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the `telepresence login` command.
+
+### Linux
+
+```
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence \
+-o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# 3. Login with the CLI:
+telepresence login
+```
+
+## Cluster Setup
+
+1. You will use a sample Java app for this guide. Later, after deploying the app into your cluster, we will review its architecture. Start by cloning the repo:
+
+   ```
+   git clone https://github.com/datawire/amb-code-quickstart-app.git
+   ```
+
+2. Install [Edge Stack](../../../../../../products/edge-stack/) to use as an ingress controller for your cluster. We need an ingress controller to allow access to the web app from the internet.
+
+   Change into the repo directory, then into `k8s-config`, and apply the YAML files to deploy Edge Stack.
+
+   ```
+   cd amb-code-quickstart-app/k8s-config
+   kubectl apply -f 1-aes-crds.yml && kubectl wait --for condition=established --timeout=90s crd -lproduct=aes
+   kubectl apply -f 2-aes.yml && kubectl wait -n ambassador deploy -lproduct=aes --for condition=available --timeout=90s
+   ```
+
+3. Install the web app by applying its manifest:
+
+   ```
+   kubectl apply -f edgy-corp-web-app.yaml
+   ```
+
+4. Wait a few moments for the external load balancer to become available, then retrieve its IP address:
+
+   ```
+   kubectl get service -n ambassador ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+   ```
+
+5. Wait until all the pods start, then access the Edgy Corp web app in your browser at `http://<load-balancer-ip>/`. Be sure you use http, not https!
+
+   You should see the landing page for the web app with an architecture diagram. The web app is composed of three services, with the frontend `VeryLargeJavaService` dependent on the two backend services.
+
+## Developing with Telepresence
+
+Now that your app is all wired up, you're ready to start doing development work with Telepresence. Imagine you are a Java developer and first on your to-do list for the day is a change to the `DataProcessingNodeService`. One thing this service does is set the color of the title and a pod in the diagram. The production version of the app on the cluster uses green elements, but you want to see a version with these elements set to blue.
+
+The `DataProcessingNodeService` service is dependent on the `VeryLargeJavaService` and `VeryLargeDataStore` services to run. Local development would require one of the two following setups, neither of which is ideal.
+
+First, you could run the two dependent services on your laptop. However, as their names suggest, they are too large to run locally. This option also doesn't scale well: two services aren't a lot to manage, but running the many dependencies of a more complex app on your laptop is not feasible.
+
+Second, you could run everything in a development cluster. However, the cycle of writing code then waiting on containers to build and deploy is incredibly disruptive. The lengthening of the [inner dev loop](../concepts/devloop) in this way can have a significant impact on developer productivity.
+
+## Intercepting a Service
+
+Alternatively, you can use Telepresence's `intercept` command to proxy traffic bound for a service to your laptop. This lets you test and debug services with code running locally, without needing to run dependent services or redeploy code updates to your cluster on every change. It also generates a preview URL, which loads your web app from the cluster ingress but with requests to the intercepted service proxied to your laptop.
+
+1. You started this guide by installing the Telepresence CLI and logging into Ambassador Cloud. The Cloud dashboard is used to manage your intercepts and share them with colleagues. You must be logged in to create selective intercepts as we are going to do here.
+
+   Run `telepresence dashboard` if you are already logged in and just need to reopen the dashboard.
+
+2. In your terminal, run `telepresence list`. This will connect to your cluster, install the [Traffic Manager](../reference/#architecture) to proxy the traffic, and return a list of services that Telepresence is able to intercept.
+
+3. Navigate up one directory to the root of the repo, then into `DataProcessingNodeService`. Install the Node.js dependencies and start the app, passing the `blue` argument, which is used by the app to set the title and pod color in the diagram you saw earlier.
+
+   ```
+   cd ../DataProcessingNodeService
+   npm install
+   node app -c blue
+   ```
+
+4. In a new terminal window, start the intercept with the command below. This will proxy requests to the `DataProcessingNodeService` service to your laptop. It will also generate a preview URL, which will let you view the app with the intercepted service in your browser.
+
+   The intercept requires you to specify the name of the deployment to be intercepted and the port to proxy.
+
+   ```
+   telepresence intercept dataprocessingnodeservice --port 3000
+   ```
+
+   You will be prompted with a few options. Telepresence tries to intelligently determine the deployment and namespace of your ingress controller. Hit `enter` to accept the default value of `ambassador.ambassador` for `Ingress`.
For simplicity's sake, our app uses 80 for the port and does *not* use TLS, so use those options when prompted for the `port` and `TLS` settings. Your output should be similar to this:
+
+   ```
+   $ telepresence intercept dataprocessingnodeservice --port 3000
+   Confirm the ingress to use for preview URL access
+   Ingress service.namespace [ambassador.ambassador] ?
+   Port [443] ? 80
+   Use TLS y/n [y] ? n
+   Using deployment dataprocessingnodeservice
+   intercepted
+       State       : ACTIVE
+       Destination : 127.0.0.1:3000
+       Intercepting: HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("76a1e848-1829-74x-1138-e3294c1e9119:dataprocessingnodeservice")
+   Preview URL : https://[random-subdomain].preview.edgestack.me
+   ```
+
+5. Open the preview URL in your browser to see the intercepted version of the app. The Node server on your laptop replies back to the cluster with the blue option enabled; you will see a blue title and blue pod in the diagram. Remember that previously these elements were green.
+
+   You will also see a banner at the bottom of the page informing you that you are viewing a preview URL, with your name and org name.
+6. Switch back in your browser to the dashboard page and refresh it to see your preview URL listed. Click the box to expand the options, where you can disable authentication or remove the preview.
+
+   If there were other developers in your organization also creating preview URLs, you would see them here as well.
+
+This diagram demonstrates the flow of requests using the intercept. The laptop on the left visits the preview URL, the request is redirected to the cluster ingress, and requests to and from the `DataProcessingNodeService` by other pods are proxied to the developer laptop running Telepresence.
+
+![Intercept Architecture](../../images/tp-tutorial-4.png)
+
+7. Clean up your environment by first typing `Ctrl+C` in the terminal running Node. Then stop the intercept with the `leave` command and `quit` to stop the daemon. Finally, use `uninstall --everything` to remove the Traffic Manager and Agents from your cluster.
+
+   ```
+   telepresence leave dataprocessingnodeservice
+   telepresence quit
+   telepresence uninstall --everything
+   ```
+
+8. Refresh the dashboard page again and you will see the intercept was removed after running the `leave` command. Refresh the browser tab with the preview URL and you will see that it has been disabled.
+
+## What's Next?
+
+Telepresence and preview URLs open up powerful possibilities for [collaborating](../howtos/preview-urls) with your colleagues and others outside of your organization.
+
+Learn more about how Telepresence handles [outbound sessions](../howtos/outbound), allowing locally running services to interact with cluster services without an intercept.
+
+Read the [FAQs](../faqs) to learn more about use cases and the technical implementation of Telepresence.
diff --git a/docs/telepresence/2.0/versions.yml b/docs/telepresence/2.0/versions.yml
new file mode 100644
index 000000000..67a427dca
--- /dev/null
+++ b/docs/telepresence/2.0/versions.yml
@@ -0,0 +1,4 @@
+version: "2.0.3"
+dlVersion: "2.0.3"
+docsVersion: "2.0"
+productName: "Telepresence"
diff --git a/docs/telepresence/2.1 b/docs/telepresence/2.1
deleted file mode 120000
index a0529ff8c..000000000
--- a/docs/telepresence/2.1
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.1
\ No newline at end of file
diff --git a/docs/telepresence/2.1/community.md b/docs/telepresence/2.1/community.md
new file mode 100644
index 000000000..aa0b6f0e2
--- /dev/null
+++ b/docs/telepresence/2.1/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's Guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/release/v2/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.1/concepts/context-prop.md b/docs/telepresence/2.1/concepts/context-prop.md
new file mode 100644
index 000000000..4ec09396f
--- /dev/null
+++ b/docs/telepresence/2.1/concepts/context-prop.md
@@ -0,0 +1,25 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation.
A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation means that services and other middleware do not strip away the headers, so the injected context moves with requests through downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation, but for controllable intercepts and preview URLs rather than for tracing. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.1/concepts/devloop.md b/docs/telepresence/2.1/concepts/devloop.md
new file mode 100644
index 000000000..fd58950ed
--- /dev/null
+++ b/docs/telepresence/2.1/concepts/devloop.md
@@ -0,0 +1,50 @@
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release.
This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](../../../../argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](../../../../argo/latest/concepts/gitops/#what-is-gitops) and a progressive delivery strategy relying on automated canaries, all aimed at making the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day with a local iterative development loop of 5 minutes (3 minutes coding, 1 building, i.e. compiling/deploying/reloading, 1 testing and inspecting, plus 10-20 seconds for committing code), they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers repeat this process frequently.
If the build time is incremented to 5 minutes — not atypical with a standard container build, registry upload, and deploy — the loop grows to roughly 9 minutes and the number of possible development iterations per day drops to ~40. At the extreme, that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.1/concepts/devworkflow.md b/docs/telepresence/2.1/concepts/devworkflow.md
new file mode 100644
index 000000000..b09f186d0
--- /dev/null
+++ b/docs/telepresence/2.1/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.1/concepts/faster.md b/docs/telepresence/2.1/concepts/faster.md
new file mode 100644
index 000000000..7aa74ad1a
--- /dev/null
+++ b/docs/telepresence/2.1/concepts/faster.md
@@ -0,0 +1,25 @@
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and to reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all others remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process.
+This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
diff --git a/docs/telepresence/2.1/doc-links.yml b/docs/telepresence/2.1/doc-links.yml
new file mode 100644
index 000000000..95e6c4bdc
--- /dev/null
+++ b/docs/telepresence/2.1/doc-links.yml
@@ -0,0 +1,52 @@
+- title: Quick Start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+- title: Core Concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: "Making the remote local: Faster feedback, collaboration and debugging"
+      link: concepts/faster
+    - title: Context Propagation
+      link: concepts/context-prop
+- title: How Do I...
+  items:
+    - title: Intercept a Service
+      link: howtos/intercepts
+    - title: Share Dev Environments with Preview URLs
+      link: howtos/preview-urls
+    - title: Proxy Outbound Traffic to My Cluster
+      link: howtos/outbound
+- title: Technical Reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client Reference
+      link: reference/client
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Environment Variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts
+    - title: Volume Mounts
+      link: reference/volume
+    - title: DNS Resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
diff --git a/docs/telepresence/2.1/faqs.md b/docs/telepresence/2.1/faqs.md
new file mode 100644
index 000000000..e2a86805d
--- /dev/null
+++ b/docs/telepresence/2.1/faqs.md
@@ -0,0 +1,108 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult to impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger.
You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](../../../../feedback) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl .:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you `telepresence intercept -n `, you will be able to resolve just the `` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+Telepresence will prompt for this information during first use, make its best guess at the correct values, and ask you to confirm or update them.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the Traffic Manager and Traffic Agent containers that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single Traffic Manager service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the Traffic Manager service installed in the cluster and the Traffic Agent containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All of the local and cluster components of Telepresence are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established via the standard `kubectl` mechanisms and SSH tunnelling.
+
+
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](../../../../feedback) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Telepresence will be open source soon; in the meantime it is free to download. We prioritized releasing the binary as soon as possible for community feedback, but are actively working on the open sourcing logistics.
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](../../../../feedback), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.1/howtos/intercepts.md b/docs/telepresence/2.1/howtos/intercepts.md
new file mode 100644
index 000000000..9be2ff2c0
--- /dev/null
+++ b/docs/telepresence/2.1/howtos/intercepts.md
@@ -0,0 +1,280 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a Service in Your Own Environment
+
+<div class="docs-article-toc">
+<h3>Contents</h3>
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Intercept your service](#3-intercept-your-service)
+* [4. Create a Preview URL to only intercept certain requests to your service](#4-create-a-preview-url-to-only-intercept-certain-requests-to-your-service)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+
+For a detailed walk-through on creating intercepts using our sample app, follow the quick start guide.
+
+## Prerequisites
+You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and [set up](https://kubernetes.io/docs/tasks/tools/install-kubectl/#verifying-kubectl-configuration) to use a Kubernetes cluster, preferably an empty test cluster.
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly via an ingress controller, and that you can run a copy of that service on your laptop.
+
+## 1. Install the Telepresence CLI
+
+### macOS
+
+```shell
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+### Linux
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster:
+   `telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+
+Click Open Anyway at the bottom to bypass the security block. Then retry the `telepresence connect` command.
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+   `curl -ik https://kubernetes.default`
+
+   Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with `telepresence version` and upgrade here if needed.
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   The 401 response is expected. What's important is that you were able to contact the API.
+
+   Congratulations! You’ve just accessed your remote Kubernetes API server as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+## 3. Intercept your service
+
+In this section, we will go through the steps required for you to intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. List the services that you can intercept with `telepresence list` and make sure the one you want to intercept is listed.
+
+   For example, this would confirm that `example-service` can be intercepted by Telepresence:
+   ```
+   $ telepresence list
+
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+2. Get the name of the port you want to intercept on your service:
+   `kubectl get service --output yaml`.
+
+   For example, this would show that the port `80` is named `http` in the `example-service`:
+
+   ```
+   $ kubectl get service example-service --output yaml
+
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+3. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept --port [:] --env-file `.
+
+   - For the `--port` argument, specify the port on which your local instance of your service will be running.
+     - If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon.
+   - For the `--env-file` argument, specify the path to a file on which Telepresence should write the environment variables that your service is currently running with. This is going to be useful as we start our service.
+
+   For the example below, Telepresence will intercept traffic going to service `example-service` so that requests reaching it on port `http` in the cluster get routed to `8080` on the workstation, and will write the environment variables of the service to `~/example-service-intercept.env`.
+
+   ```
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+4. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are a few options to pass the environment variables to your local process:
+   - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file)
+   - with JetBrains IDEs (IntelliJ, WebStorm, PyCharm, GoLand, etc.), use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile)
+   - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration
+
+5. Query the environment in which you intercepted the service as you usually would, and see your local instance being invoked.
+
+   Didn't work? Make sure the port you're listening on matches the one specified when creating your intercept.
+
+   Congratulations! All the traffic usually going to your Kubernetes Service is now being routed to your local environment!
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+## 4. Create a Preview URL to only intercept certain requests to your service
+
+When working in a development environment with multiple engineers, you don't want your intercepts to impact your
+teammates. Ambassador Cloud automatically generates a Preview URL when creating an intercept if you are logged in. By
+doing so, Telepresence can route only the requests coming from that Preview URL to your local environment; the rest will
+be routed to your cluster as usual.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave `
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   ```
+   $ telepresence login
+
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept --port [:] --env-file `
+
+   You will be asked for the following information:
+   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `.namespace`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
+   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
+   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
+   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), this is the value you would need to enter here.
+
+   Telepresence supports any ingress controller, not just Ambassador Edge Stack.
+
+   For the example below, you will create a preview URL that will send traffic to the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and setting the `Host` HTTP header to `dev-environment.edgestack.me`:
+
+   ```
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+        [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+        [default: -]: 443
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+        [default: n]: y
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
     [default: ambassador.ambassador]: dev-environment.edgestack.me

   Using Deployment example-service
   intercepted
   Intercept name         : example-service
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:8080
   Service Port Identifier: http
   Intercepting           : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
   Preview URL            : https://<random domain name>.preview.edgestack.me
   Layer 5 Hostname       : dev-environment.edgestack.me
   ```

4. Start your local service as in the previous step.

5. Go to the preview URL printed after doing the intercept and see that your local service is processing the request.

   
   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
   

6. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal.

   
   Congratulations! You have now only intercepted traffic coming from your Preview URL, without impacting your teammates.
   

You can now:
- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
- Query services only exposed in your cluster's network.
- Set breakpoints in your IDE to investigate bugs.

...and all of this without impacting your teammates!

## What's Next?


diff --git a/docs/telepresence/2.1/howtos/outbound.md b/docs/telepresence/2.1/howtos/outbound.md new file mode 100644 index 000000000..6405ff49a --- /dev/null +++ b/docs/telepresence/2.1/howtos/outbound.md @@ -0,0 +1,97 @@
---
description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
---

import Alert from '@material-ui/lab/Alert';

# Proxy Outbound Traffic to My Cluster

While preview URLs are a powerful feature, there are other options to use Telepresence for proxying traffic between your laptop and the cluster.

 We'll assume below that you have the quick start sample web app running in your cluster so that we can test accessing the verylargejavaservice service. However, you can substitute any service you are running for that one.

## Proxying Outbound Traffic

Connecting to the cluster instead of running an intercept will allow you to access cluster workloads as if your laptop was another pod in the cluster. You will be able to access other Kubernetes services using `<service name>.<namespace>`, for example by curling a service from your terminal. A service running on your laptop will also be able to interact with other services on the cluster by name.

Connecting to the cluster starts the background daemon on your machine and installs the [Traffic Manager pod](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.

1. Run `telepresence connect`; you will be prompted for your password to run the daemon.

   ```
   $ telepresence connect
   Launching Telepresence Daemon v2.1.4 (api v3)
   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
   [sudo] password:
   Connecting to traffic manager...
   Connected to context default (https://<cluster public IP>)
   ```
1. Run `telepresence status` to confirm that you are connected to your cluster and are proxying traffic to it.

   ```
   $ telepresence status
   Root Daemon: Running
     Version     : v2.1.4 (api 3)
     Primary DNS : ""
     Fallback DNS: ""
   User Daemon: Running
     Version           : v2.1.4 (api 3)
     Ambassador Cloud  : Logged out
     Status            : Connected
     Kubernetes server : https://<cluster public IP>
     Kubernetes context: default
     Telepresence proxy: ON (networking to the cluster is enabled)
     Intercepts        : 0 total
   ```

1. Now try to access your service by name with `curl verylargejavaservice.default:8080`. Telepresence will route the request to the cluster, as if your laptop is actually running in the cluster.

   ```
   $ curl verylargejavaservice.default:8080



   Welcome to the EdgyCorp WebApp
   ...
   ```

1. Terminate the client with `telepresence quit` and try to access the service again; it will fail because traffic is no longer being proxied from your laptop.

   ```
   $ telepresence quit
   Telepresence Daemon quitting...done
   ```

When using Telepresence in this way, services must be accessed with the namespace qualified DNS name (<service name>.<namespace>) before starting an intercept. After starting an intercept, only <service name> is required. Read more about these differences in DNS resolution here.

## Controlling Outbound Connectivity

By default, Telepresence will provide access to all Services found in all namespaces in the connected cluster. This might lead to problems if the user does not have access permissions to all namespaces via RBAC. The `--mapped-namespaces <comma separated list of namespaces>` flag was added to give the user control over exactly which namespaces will be accessible.

When using this option, it is important to include all namespaces containing services to be accessed and also all namespaces that contain services that those intercepted services might use.

### Using local-only intercepts

An intercept with the flag `--local-only` can be used to control outbound connectivity to specific namespaces.

When developing services that have not yet been deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service is intended to be deployed so that it can access other services in that namespace without using qualified names.

   ```
   $ telepresence intercept <deployment name> --namespace <namespace> --local-only
   ```
The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. The intercept is deactivated just like any other intercept.

   ```
   $ telepresence leave <deployment name>
   ```
The unqualified name access is now removed provided that no other intercept is active and using the same namespace.

### External dependencies (formerly --also-proxy)
If you have a resource outside of the cluster that you need access to, you can leverage Headless Services to provide access. This will give you a Kubernetes service formatted like all other services (`my-service.prod.svc.cluster.local`) that resolves to your resource.

If the outside service has a DNS name, you can use the [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) service type, which will create a service that can be used from within your cluster and from your local machine when connected with telepresence.

If the outside service is an IP address, create a [service without selectors](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors) and then create an endpoint of the same name.
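
A minimal sketch of both options (the names `my-external-service` and `database.example.com`, the IP `10.0.0.8`, and port `5432` are all placeholders):

```
# Option 1: the external resource has a DNS name
kubectl create service externalname my-external-service --external-name database.example.com

# Option 2: the external resource is an IP address; create a selector-less
# Service and an Endpoints object of the same name pointing at it
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  ports:
    - port: 5432
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service
subsets:
  - addresses:
      - ip: 10.0.0.8
    ports:
      - port: 5432
EOF
```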
In both scenarios, Kubernetes will create a service that can be used from within your cluster and from your local machine when connected with telepresence.

diff --git a/docs/telepresence/2.1/howtos/preview-urls.md b/docs/telepresence/2.1/howtos/preview-urls.md new file mode 100644 index 000000000..b88fb20a2 --- /dev/null +++ b/docs/telepresence/2.1/howtos/preview-urls.md @@ -0,0 +1,131 @@
---
description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
---

import Alert from '@material-ui/lab/Alert';

# Share Dev Environments with Preview URLs

Telepresence can generate sharable preview URLs, allowing you to work on a copy of your service locally and share that environment directly with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment; requests to the ingress will be routed to your cluster as usual.

Preview URLs are protected behind authentication via Ambassador Cloud, ensuring that only users in your organization can view them. A preview URL can also be set to allow public access for sharing with outside collaborators.

## Prerequisites

* You should have the Telepresence CLI [installed](../../install/) on your laptop.

* If you have Telepresence already installed and have used it previously, please first reset it with `telepresence uninstall --everything`.

* You will need a service running in your cluster that you would like to intercept.


Need a sample app to try with preview URLs? Check out the quick start. It has a multi-service app to install in your cluster with instructions to create a preview URL for that app.


## Creating a Preview URL

1. List the services that you can intercept with `telepresence list` and make sure the one you want is listed.

   If it isn't:

   * Only Deployments, ReplicaSets, or StatefulSets are supported, and each of those requires a label matching a Service

   * If the service is in a different namespace, specify it with the `--namespace` flag

2. Log in to Ambassador Cloud, where you can manage and share preview URLs:
`telepresence login`

   ```
   $ telepresence login

   Launching browser authentication flow...

   Login successful.
   ```

3. Start the intercept:
`telepresence intercept <service name> --port <local port> --env-file <path to env file>`

   For `--port`, specify the port on which your local instance of your service will be running. If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon.

   For `--env-file`, specify a file path where Telepresence will write the environment variables that are set in the Pod. This is going to be useful as we start our service locally.

   You will be asked for the following information:
   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `<service name>.namespace`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+

   For the example below, you will create a preview URL for `example-service`, which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:

   ```
   $ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Confirm the ingress to use.

   1/4: What's your ingress' layer 3 (IP) address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

     [default: -]: ambassador.ambassador

   2/4: What's your ingress' layer 4 address (TCP port number)?

     [default: -]: 443

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

     [default: n]: y

   4/4: If required by your ingress, specify a different layer 5 hostname
        (TLS-SNI, HTTP "Host" header) to access this service.

     [default: ambassador.ambassador]: dev-environment.edgestack.me

   Using deployment example-service
   intercepted
   Intercept name         : example-service
   State                  : ACTIVE
   Destination            : 127.0.0.1:8080
   Service Port Identifier: http
   Intercepting           : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
   Preview URL            : https://<random domain name>.preview.edgestack.me
   Layer 5 Hostname       : dev-environment.edgestack.me
   ```

4. Start your local environment using the environment variables retrieved in the previous step.

   Here are a few options to pass the environment variables to your local process:
   - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file)
   - with JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.) use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile)
   - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration

5. Go to the preview URL that was provided after starting the intercept (the next to last line in the terminal output above). Your local service will be processing the request.

   
   Success! You have intercepted traffic coming from your preview URL without impacting other traffic from your Ingress.
   

   
   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
   

6. Make a request on the URL you would usually query for that environment. The request should **not** be routed to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal.

7. Share with a teammate.

   You can collaborate with teammates by sending your preview URL to them. They will be asked to log in to Ambassador Cloud if they are not already. Upon logging in, they must select the same identity provider and org that you are using; that is how they are authorized to access the preview URL (see the [list of supported identity providers](../../faqs/#idps)). When they visit the preview URL, they will see the intercepted service running on your laptop.

   
   Congratulations! You have now created a dev environment and shared it with a teammate!
While you and your partner work together to debug your service, the production version remains unchanged for the rest of your team until you commit your changes.
   

## Sharing a Preview URL with People Outside Your Team

To collaborate with someone outside of your identity provider's organization, you must go to [Ambassador Cloud](https://app.getambassador.io/cloud/), select the preview URL, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the intercept, either from the dashboard or by running `telepresence leave <name of intercept>`, also removes all access to the preview URL.

diff --git a/docs/telepresence/2.1/images/container-inner-dev-loop.png b/docs/telepresence/2.1/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.1/images/container-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.1/images/github-login.png b/docs/telepresence/2.1/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.1/images/github-login.png differ
diff --git a/docs/telepresence/2.1/images/logo.png b/docs/telepresence/2.1/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.1/images/logo.png differ
diff --git a/docs/telepresence/2.1/images/trad-inner-dev-loop.png b/docs/telepresence/2.1/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.1/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.1/install/index.md b/docs/telepresence/2.1/install/index.md new file mode 100644 index 000000000..2afa65c49 --- /dev/null +++ b/docs/telepresence/2.1/install/index.md @@ -0,0 +1,34 @@
import Platform from '@src/components/Platform';

# Install Telepresence

Install Telepresence by running the commands below for your OS.



```shell
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```



```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

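To verify the install, you can check the client version; the output below is only illustrative, and the version shown will match the binary you downloaded:

```
$ telepresence version
Client v2.1.4 (api v3)
```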
## What's Next?

Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.

diff --git a/docs/telepresence/2.1/install/upgrade.md b/docs/telepresence/2.1/install/upgrade.md new file mode 100644 index 000000000..7fef9ca31 --- /dev/null +++ b/docs/telepresence/2.1/install/upgrade.md @@ -0,0 +1,79 @@
---
description: "How to upgrade your installation of Telepresence and install previous versions."
---

import Platform from '@src/components/Platform';

# Upgrade Telepresence


Contents

+ +* [Upgrade Process](#upgrade-process) +* [Installing Older Versions of Telepresence](#installing-older-versions-of-telepresence) +* [Migrating from Telepresence 1 to Telepresence 2](#migrating-from-telepresence-1-to-telepresence-2) + +
+

## Upgrade Process
The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.



```shell
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```



```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```



After upgrading your CLI, the Traffic Manager **must be uninstalled** from your cluster. This can be done using `telepresence uninstall --everything` or by running `kubectl delete svc,deploy traffic-manager`. The next time you run a `telepresence` command it will deploy an upgraded Traffic Manager.

## Installing Older Versions of Telepresence

Use these URLs to download an older version for your OS, replacing `x.y.z` with the version you want.



```
https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
```



```
https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
```



## Migrating from Telepresence 1 to Telepresence 2

Telepresence 2 (the current major version) has different mechanics and requires a different mental model from [Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.

In Telepresence 1, a pod running a service is swapped with a pod running the Telepresence proxy. This proxy receives traffic intended for the service, and sends the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".

In practice, this mechanism, while simple in concept, had some challenges: losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods would take time.

Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With Telepresence 2, a sidecar proxy is injected into the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used. By using the proxy approach, we can also do selective intercepts, where certain types of traffic get routed to the service while other traffic gets routed to your laptop/workstation.

Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
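
As a recap, a typical upgrade session on Linux combines the steps from the Upgrade Process above (a sketch; the macOS flow is identical apart from the download URL):

```shell
# Replace the binary with the latest version:
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
sudo chmod a+x /usr/local/bin/telepresence

# Remove the old Traffic Manager; the next telepresence command
# will deploy an upgraded one:
telepresence uninstall --everything
telepresence connect
```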
diff --git a/docs/telepresence/2.1/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.1/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..3e87c3ad6 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,129 @@ +import React from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */ +const Box = ({ children, color = 'blue', withConnector = false }) => ( + <> + {withConnector && ( +
+ +
+ )} +
{children}
+ +); + +const TelepresenceQuickStartLanding = () => ( +
+

+ Telepresence +

+

+ Explore the use cases of Telepresence with a free remote Kubernetes + cluster, or dive right in using your own. +

+ +
+
+
+

+ Use Our Free Demo Cluster +

+

+ See how Telepresence works without having to mess with your + production environments. +

+
+ +

6 minutes

+

Integration Testing

+

+ See how changes to a single service impact your entire application + without having to run your entire app locally. +

+ + GET STARTED{' '} + + +
+ +

5 minutes

+

Fast code changes

+

+ Make changes to your service locally and see the results instantly, + without waiting for containers to build. +

+ + GET STARTED{' '} + + +
+
+
+
+

+ Use Your Cluster +

+

+ Understand how Telepresence fits in to your Kubernetes development + workflow. +

+
+ +

10 minutes

+

Intercept your service in your cluster

+

+ Query services only exposed in your cluster's network. Make changes + and see them instantly in your K8s environment. +

+ + GET STARTED{' '} + + +
+
+
+ +
+

Watch the Demo

+
+
+

+ See Telepresence in action in our 3-minute demo + video that you can share with your teammates. +

+
    +
  • Instant feedback loops
  • +
  • Infinite-scale development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+
+
+ +
+
+
+
+); + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.1/quick-start/demo-node.md b/docs/telepresence/2.1/quick-start/demo-node.md new file mode 100644 index 000000000..8c936cc7b --- /dev/null +++ b/docs/telepresence/2.1/quick-start/demo-node.md @@ -0,0 +1,289 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards from './qs-cards' + +# Telepresence Quick Start + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Check out the sample application](#3-check-out-the-sample-application) +* [4. Run a service on your laptop](#4-run-a-service-on-your-laptop) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+

In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.

   
   Already have a cluster? Switch over to a version of this guide that takes you through the same steps using your own cluster.
   

## 1. Download the demo cluster archive

1. [Sign in to Ambassador Cloud](https://app.getambassador.io/cloud/demo-cluster-download-popup) to download your demo cluster archive. The archive contains all the tools and configurations you need to complete this guide.

2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).

   
   This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
   

   ```
   cd ~/Downloads
   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
   cd ambassador-demo-cluster
   ./install.sh
   ```

3. The demo cluster we provided already has a demo app running. List the app's services:
   `kubectl get services`

   ```
   $ kubectl get services

   NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
   kubernetes              ClusterIP   10.43.0.1       <none>        443/TCP    14h
   dataprocessingservice   ClusterIP   10.43.159.239   <none>        3000/TCP   14h
   verylargejavaservice    ClusterIP   10.43.223.61    <none>        8080/TCP   14h
   verylargedatastore      ClusterIP   10.43.203.19    <none>        8080/TCP   14h
   ```

4. Confirm that the Telepresence CLI is now installed; we expect to see that the daemons are not yet running:
`telepresence status`

   ```
   $ telepresence status

   Root Daemon: Not running
   User Daemon: Not running
   ```

   
   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
   

   
   You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal.
   

## 2. Test Telepresence

Telepresence connects your local workstation to a remote Kubernetes cluster.

1. Connect to the cluster (this requires root privileges and will ask for your password):
`telepresence connect`

   ```
   $ telepresence connect

   Launching Telepresence Daemon
   ...
   Connected to context default (https://<cluster public IP>)
   ```

2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
`curl -ik https://kubernetes.default`

   ```
   $ curl -ik https://kubernetes.default

   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...

   ```

   
   Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed.
   

   
   The 401 response is expected. What's important is that you were able to contact the API.
   

   
   Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
   
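
Now that you are connected, `telepresence status` will report the daemons as running and the proxy as active; the abbreviated sketch below assumes the same output format as the status check in step 1.4:

```
$ telepresence status

Root Daemon: Running
...
User Daemon: Running
...
  Telepresence proxy: ON (networking to the cluster is enabled)
```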
## 3. Check out the sample application

Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.

We'll use a sample app that is already installed in your demo cluster. Let's take a quick look at its architecture before continuing.

1. Use `kubectl get pods` to check the status of your pods:

   ```
   $ kubectl get pods

   NAME                                     READY   STATUS    RESTARTS   AGE
   verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s
   verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s
   dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s
   ```

2. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at http://verylargejavaservice.default:8080.

3. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.

   
   Congratulations, you can now access services running in your cluster by name from your laptop!
   

## 4. Run a service on your laptop

Now start up the DataProcessingService service on your laptop. This version of the code has the UI color set to blue instead of green.

1. **In a new terminal window**, go to the demo application directory in the extracted archive folder:
   `cd edgey-corp-nodejs/DataProcessingService`

2. Start the application:
   `npm start`

   ```
   $ npm start

   ...
   Welcome to the DataProcessingService!
   { _: [] }
   Server running on port 3000
   ```

3. **Back in your previous terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   
   Victory, your local Node server is running a-ok!
   

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   
   Didn't work? Make sure you are working in the terminal window where you ran the script, because it sets environment variables to access the demo cluster. Those variables will only apply to that terminal session.
   

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   ...
   ```

2. Go to the frontend service again in your browser at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   
   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop!
   
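
You can also confirm the intercept from the command line with `telepresence list`; the exact wording varies by version, but the service should now be reported as intercepted rather than "ready to intercept" (illustrative output):

```
$ telepresence list

...
dataprocessingservice: intercepted
...
```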
## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload.

2. Now visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. The frontend `verylargejavaservice` is still running on the cluster, but its request to the `DataProcessingService` to retrieve the color to show is being proxied by Telepresence to your laptop.

   
   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+

## 7. Create a Preview URL
Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
`telepresence login`

   This opens your browser; log in with your preferred identity provider and choose your org.

   ```
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`

   You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default`
   Then, when asked for the port, type `8080`; for "use TLS", type `n`. The default for the fourth value is correct, so hit enter to accept it.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Select the ingress to use.

   1/4: What's your ingress' layer 3 (IP) address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

     [no default]: verylargejavaservice.default

   2/4: What's your ingress' layer 4 address (TCP port number)?

     [no default]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

     [default: n]: n

   4/4: If required by your ingress, specify a different layer 5 hostname
        (TLS-SNI, HTTP "Host" header) to access this service.

     [default: verylargejavaservice.default]:

   Using deployment dataprocessingservice
   intercepted
   Intercept name  : dataprocessingservice
   State           : ACTIVE
   Destination     : 127.0.0.1:3000
   Intercepting    : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
   Preview URL     : https://<random domain name>.preview.edgestack.me
   Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

   
   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
   
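
When you're done experimenting, you can clean up the session with the commands used earlier:

```
# Remove the intercept, then disconnect from the cluster
telepresence leave dataprocessingservice
telepresence quit
```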
## What's Next?



diff --git a/docs/telepresence/2.1/quick-start/go.md b/docs/telepresence/2.1/quick-start/go.md new file mode 100644 index 000000000..87b5d6009 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/go.md @@ -0,0 +1,343 @@
---
description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
---

import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from './qs-cards'



# Telepresence Quick Start - **Go**


Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to support auto reloading of the Go server, which we'll use later. Install it by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`

   ```
   $ go get github.com/pilu/fresh

   $ $GOPATH/bin/fresh

   ...
   10:23:41 app         | Welcome to the DataProcessingGoService!
   ```

   
   Install Go from here and set your GOPATH if needed.
   

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   
   Victory, your local Go server is running a-ok!
   

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   Workload kind : Deployment
   Destination   : 127.0.0.1:3000
   Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   
   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
   

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   
   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+

## 7. Create a Preview URL
Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
`telepresence login`

   This opens your browser; log in with your preferred identity provider and choose your org.

   ```
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default`
   Then, when asked for the port, type `8080`; for "use TLS", type `n`; and finally confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Select the ingress to use.

   1/4: What's your ingress' layer 3 (IP) address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

     [no default]: verylargejavaservice.default

   2/4: What's your ingress' layer 4 address (TCP port number)?

     [no default]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

     [default: n]:

   4/4: If required by your ingress, specify a different layer 5 hostname
        (TLS-SNI, HTTP "Host" header) to access this service.

     [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
   Intercept name  : dataprocessingservice
   State           : ACTIVE
   Workload kind   : Deployment
   Destination     : 127.0.0.1:3000
   Intercepting    : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
   Preview URL     : https://<random domain name>.preview.edgestack.me
   Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

   
   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
   

## What's Next?



diff --git a/docs/telepresence/2.1/quick-start/index.md b/docs/telepresence/2.1/quick-start/index.md new file mode 100644 index 000000000..efcb65b52 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/index.md @@ -0,0 +1,7 @@
---
description: Telepresence Quick Start.
+--- + +import TelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.1/quick-start/qs-cards.js b/docs/telepresence/2.1/quick-start/qs-cards.js new file mode 100644 index 000000000..31582355b --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-cards.js @@ -0,0 +1,70 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+
    
      
        
          
            Collaborating
          
          
            Use preview URLs to collaborate with your colleagues and others
            outside of your organization.
          
        
      
      
        
          
            Outbound Sessions
          
          
            While connected to the cluster, your laptop can interact with
            services as if it was another pod in the cluster.
          
        
      
      
        
          
            FAQs
          
          
            Learn more about use cases and the technical implementation of
            Telepresence.
          
        
      
    
+ ); +} diff --git a/docs/telepresence/2.1/quick-start/qs-go.md b/docs/telepresence/2.1/quick-start/qs-go.md new file mode 100644 index 000000000..87b5d6009 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-go.md @@ -0,0 +1,343 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to support auto reloading of the Go server, which we'll use later. Install it by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`

   ```
   $ go get github.com/pilu/fresh

   $ $GOPATH/bin/fresh

   ...
   10:23:41 app         | Welcome to the DataProcessingGoService!
   ```

   
   Install Go from here and set your GOPATH if needed.
   

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   
   Victory, your local Go server is running a-ok!
   

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   Workload kind : Deployment
   Destination   : 127.0.0.1:3000
   Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   
   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
   

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   
   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.1/quick-start/qs-java.md b/docs/telepresence/2.1/quick-start/qs-java.md new file mode 100644 index 000000000..0b039096b --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-java.md @@ -0,0 +1,337 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
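Before moving on, it can be helpful to see the current state from Telepresence's side. Both commands below are part of the CLI (see the client reference); exact output varies by version, so treat this as a sketch:

```shell
# Show daemon and cluster connectivity status:
$ telepresence status

# List workloads and whether they are intercepted; dataprocessingservice
# should be reported as intercepted at this point:
$ telepresence list
```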
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.1/quick-start/qs-node.md b/docs/telepresence/2.1/quick-start/qs-node.md new file mode 100644 index 000000000..806d9d47d --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-node.md @@ -0,0 +1,331 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
`curl -ik https://kubernetes.default`

   Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed.

   ```
   $ curl -ik https://kubernetes.default

   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...

   ```

   The 401 response is expected. What's important is that you were able to contact the API.

   Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.

## 3. Install a sample Node.js application

Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.

   While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java, Python using Flask, and Python using FastAPI if you prefer.

1. Start by installing a sample application that consists of multiple services:
`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml`

   ```
   $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml

   deployment.apps/dataprocessingservice created
   service/dataprocessingservice created
   ...

   ```

2. Give your cluster a few moments to deploy the sample application.

   Use `kubectl get pods` to check the status of your pods:

   ```
   $ kubectl get pods

   NAME                                     READY   STATUS    RESTARTS   AGE
   verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s
   verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s
   dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s
   ```

3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080).

4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.

   Congratulations, you can now access services running in your cluster by name from your laptop!

## 4. Set up a local development environment
You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green.

   Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go.

1. Clone the web app’s GitHub repo:
`git clone https://github.com/datawire/edgey-corp-nodejs.git`

   ```
   $ git clone https://github.com/datawire/edgey-corp-nodejs.git

   Cloning into 'edgey-corp-nodejs'...
   remote: Enumerating objects: 441, done.
   ...
   ```

2. Change into the repo directory, then into DataProcessingService:
`cd edgey-corp-nodejs/DataProcessingService/`

3. Install the dependencies and start the Node server:
`npm install && npm start`

   ```
   $ npm install && npm start

   ...
   Welcome to the DataProcessingService!
   { _: [] }
   Server running on port 3000
   ```

   Install Node.js from here if needed.

4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
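As an aside, the two halves of this workflow can be combined: `telepresence intercept` accepts a trailing command to run while the intercept is active, as described in the client reference. A sketch, shown here for reference rather than as a step in this guide:

```shell
# Re-create the intercept and launch the local Node server in one shot;
# the intercept is cleaned up when the process exits:
$ telepresence leave dataprocessingservice
$ telepresence intercept dataprocessingservice --port 3000 -- npm start
```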
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.1/quick-start/qs-python-fastapi.md b/docs/telepresence/2.1/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..24f86037f --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-python-fastapi.md @@ -0,0 +1,328 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
FastAPI requires Python 3, so there is no Python 2.x variant of this step. If `pip` and `python` point at Python 3 on your system: `pip install fastapi uvicorn requests && python app.py`
Otherwise: `pip3 install fastapi uvicorn requests && python3 app.py`

   ```
   $ pip install fastapi uvicorn requests && python app.py

   Collecting fastapi
   ...
   Application startup complete.

   ```

   Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local service is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
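Behind the scenes, the intercept works by injecting a Traffic Agent sidecar into the service's pod. You can observe this with plain kubectl; the pod name below is a placeholder, so copy the real one from `kubectl get pods`:

```shell
# Find the current pod for the intercepted service:
$ kubectl get pods

# Its container list should now include a Traffic Agent sidecar alongside
# the application container:
$ kubectl describe pod <dataprocessingservice-pod-name>
```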
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.1/quick-start/qs-python.md b/docs/telepresence/2.1/quick-start/qs-python.md new file mode 100644 index 000000000..4d79336e0 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/qs-python.md @@ -0,0 +1,339 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...

   ```

   Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local Python server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
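If you ever want to unwind what this guide has set up, the relevant commands are all in the client reference; hold off until after the next section if you plan to try preview URLs:

```shell
# Stop just this intercept:
$ telepresence leave dataprocessingservice

# Disconnect entirely, stopping all intercepts and outbound routing:
$ telepresence quit

# Remove Telepresence from the cluster altogether (the same reset suggested
# in the prerequisites):
$ telepresence uninstall --everything
```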
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.1/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.1/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/2.1/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 
100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/2.1/redirects.yml b/docs/telepresence/2.1/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.1/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.1/reference/architecture.md b/docs/telepresence/2.1/reference/architecture.md new file mode 100644 index 000000000..47facb0b8 --- /dev/null +++ b/docs/telepresence/2.1/reference/architecture.md @@ -0,0 +1,63 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
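The components in the diagram are described in the sections below. One way to see several of them on a live cluster is sketched here; the namespace is an assumption based on this version's defaults, so verify it on your own cluster:

```shell
# Connecting installs the Traffic Manager into the cluster if it is missing:
$ telepresence connect

# The Traffic Manager typically runs as a deployment in the ambassador
# namespace (an assumption; check your cluster if it differs):
$ kubectl get deployments -n ambassador

# Once an intercept is active, the intercepted pod gains a Traffic Agent
# sidecar, visible in its container list:
$ kubectl describe pod <intercepted-pod-name>
```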
## Telepresence CLI

The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence Daemon, installs the Traffic Manager
in your cluster, authenticates against Ambassador Cloud, and configures all of those elements to communicate with one
another.

## Telepresence Daemon

The Telepresence Daemon runs on a developer's workstation and is its main point of communication with the cluster's
network. All requests from and to the cluster go through the Daemon, which communicates with the Traffic Manager.

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When
Telepresence is run with the `connect`, `intercept`, or `list` command, the Telepresence CLI first checks the
cluster for the Traffic Manager deployment and creates it if it is missing.

When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
URL, it forwards the request to the ingress service specified at the Preview URL creation.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent
container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe
pod <pod-name>`.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
pod usually handling requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
domains from authorized users to the appropriate Traffic Manager.

Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have
accessed them, and deleting them.

## Changes from Service Preview

With Ambassador's previous offering, Service Preview, the Traffic Agent had to be added to a pod manually via an
annotation. This is no longer required: the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is no
longer required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient
permissions to add and modify deployments in the cluster.
 diff --git a/docs/telepresence/2.1/reference/client.md b/docs/telepresence/2.1/reference/client.md
new file mode 100644
index 000000000..db59e26a6
--- /dev/null
+++ b/docs/telepresence/2.1/reference/client.md
@@ -0,0 +1,25 @@
---
description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client Reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| `login` | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
+| `status` | Shows the current connectivity status |
+| `quit` | Quits the local daemon, stopping all intercepts and outbound traffic to the cluster |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000` |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager |
diff --git a/docs/telepresence/2.1/reference/cluster-config.md b/docs/telepresence/2.1/reference/cluster-config.md
new file mode 100644
index 000000000..a00b10675
--- /dev/null
+++ b/docs/telepresence/2.1/reference/cluster-config.md
@@ -0,0 +1,68 @@
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special configuration in the cluster and can be used right away in any cluster, as long as the user has adequate [permission](../rbac).
+
+However, some advanced features do require some configuration in the cluster.
+
+## TLS
+
+If other applications in the cluster expect to speak TLS to your intercepted application (perhaps you're using a service mesh that does mTLS), then in order to use `--mechanism=http` (or any features that imply `--mechanism=http`) you need to tell Telepresence about the TLS certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your workload's (e.g.
Deployment's) Pod template to set a couple of annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+     spec:
++      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
+       containers:
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS server certificate to use for decrypting and responding to incoming requests.
+
+  When Telepresence modifies the Service's and workload's port definitions to point at the Telepresence Agent sidecar's port instead of your application's actual port, the sidecar will use this certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS client certificate to use for communicating with your application.
+
+  If your application expects incoming requests to speak TLS (e.g. your code expects to handle mTLS itself instead of letting a service-mesh sidecar handle mTLS for it, or the port definition that Telepresence modified pointed at the service-mesh sidecar instead of at your application), then you will need to set this.
+
+  If you do set this, it is usually correct to set it to the same client certificate Secret that you configure Ambassador Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace as the Pod.
+
+The Pod will need to have permission to `get` and `watch` each of those Secrets.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and `type: istio.io/key-and-cert` Secrets, as well as `type: Opaque` Secrets that it detects to be formatted as one of those types.
diff --git a/docs/telepresence/2.1/reference/config.md b/docs/telepresence/2.1/reference/config.md
new file mode 100644
index 000000000..ac81202a4
--- /dev/null
+++ b/docs/telepresence/2.1/reference/config.md
@@ -0,0 +1,32 @@
+# Laptop-side configuration
+
+Telepresence uses a `config.yml` file to store and change certain values. The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+
+For Linux, the above paths are for user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.
+
+## Values
+
+The config file currently only supports values for the `timeouts` key; here is an example file:
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+```
+
+Values are all durations, expressed either as a number representing seconds or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).
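+
+For instance, here is a sketch of a config that mixes the numeric and string forms; the field names come from the table below, and the values are arbitrary examples you would tune for your own cluster:
+
+```yaml
+timeouts:
+  agentInstall: 120       # plain number = 120 seconds
+  apply: 45s              # string with a unit suffix
+  clusterConnect: 0.5m    # fractional string (30 seconds)
+  trafficManagerConnect: 1m30s  # combined string
+```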
+
+These are the valid fields for the `timeouts` key:
+
+|Field|Description|Default|
+|---|---|---|
+|`agentInstall`|Waiting for the Traffic Agent to be installed|2 minutes|
+|`apply`|Waiting for a Kubernetes manifest to be applied|1 minute|
+|`clusterConnect`|Waiting for the cluster to be connected|20 seconds|
+|`intercept`|Waiting for an intercept to become active|5 seconds|
+|`proxyDial`|Waiting for an outbound connection to be established|5 seconds|
+|`trafficManagerConnect`|Waiting for the Traffic Manager API to connect for port forwards|20 seconds|
+|`trafficManagerAPI`|Waiting for a connection to the gRPC API after `trafficManagerConnect` succeeds|5 seconds|
diff --git a/docs/telepresence/2.1/reference/dns.md b/docs/telepresence/2.1/reference/dns.md
new file mode 100644
index 000000000..01a5ebb35
--- /dev/null
+++ b/docs/telepresence/2.1/reference/dns.md
@@ -0,0 +1,68 @@
+# DNS Resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+With no intercepts running, we connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster public IP>)
+
+$ telepresence list
+
+  verylargejavaservice : ready to intercept (traffic-agent not yet installed)
+  dataprocessingservice: ready to intercept (traffic-agent not yet installed)
+  verylargedatastore   : ready to intercept (traffic-agent not yet installed)
+
+$ curl verylargejavaservice:8080
+
+  curl: (6) Could not resolve host: verylargejavaservice
+```
+
+This is expected, as Telepresence cannot reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl verylargejavaservice.default:8080
+
+  <html>
+  <head>
+    <title>Welcome to the EdgyCorp WebApp</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work. Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept dataprocessingservice --port 3000
+
+  Using Deployment dataprocessingservice
+  intercepted
+    Intercept name: dataprocessingservice
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:3000
+    Intercepting  : all TCP connections
+
+$ curl verylargejavaservice:8080
+
+  <html>
+  <head>
+    <title>Welcome to the EdgyCorp WebApp</title>
+  ...
+```
+
+Now curling that service by its short name works, and it will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.
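+
+The same rule applies to other namespaces: intercepting a workload in a namespace makes that namespace's services resolvable by short name too. A sketch, assuming a hypothetical `web` service in a `staging` namespace:
+
+```
+$ telepresence intercept web --namespace staging --port 8080
+$ curl web:8080
+```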
diff --git a/docs/telepresence/2.1/reference/environment.md b/docs/telepresence/2.1/reference/environment.md
new file mode 100644
index 000000000..a94783d23
--- /dev/null
+++ b/docs/telepresence/2.1/reference/environment.md
@@ -0,0 +1,28 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment Variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept. You can then use these variables with the local instance of the service you are intercepting.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` were run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, for example Bash:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
diff --git a/docs/telepresence/2.1/reference/intercepts.md b/docs/telepresence/2.1/reference/intercepts.md
new file mode 100644
index 000000000..15bad0a61
--- /dev/null
+++ b/docs/telepresence/2.1/reference/intercepts.md
@@ -0,0 +1,126 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+## Intercept Behavior When Logged into Ambassador Cloud
+
+After logging into Ambassador Cloud (with `telepresence login`), Telepresence will default to `--preview-url=true`, which uses Ambassador Cloud to create a sharable preview URL for the intercept. (Creating an intercept without logging in will default to `--preview-url=false`.)
+
+In order to do this, it will prompt you for four options. For the first, `Ingress`, Telepresence tries to intelligently determine the ingress controller deployment and namespace for you. If they are correct, you can hit `enter` to accept the defaults. Set the next two options, `TLS` and `Port`, appropriately based on your ingress service. The fourth is a hostname for the service, if required by your ingress.
+
+Also, because you're logged in, Telepresence will default to `--mechanism=http --http-match=auto` (or just `--http-match=auto`; `--http-match` implies `--mechanism=http`). If you hadn't been logged in, it would have defaulted to `--mechanism=tcp`. This tells it to do smart intercepts and only intercept a subset of HTTP requests, rather than intercepting the entirety of all TCP connections. This is important for working in a shared cluster with teammates, and is important for the preview URL functionality. See `telepresence intercept --help` for information on using `--http-match` to customize which requests are intercepted; a brief example follows.
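+
+A minimal sketch, assuming a hypothetical `hello` service and a made-up header name, that limits a logged-in intercept to requests carrying a chosen header:
+
+```
+telepresence intercept hello --port 8000 --http-match=x-dev-user=alice
+```
+
+With this intercept active, only HTTP requests carrying the header `x-dev-user: alice` are routed to your laptop; all other traffic continues to the service in the cluster.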
+
+## Supported Workloads
+
+Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting Deployments, ReplicaSets, and StatefulSets. While many of our examples use Deployments, they work the same way with ReplicaSets and StatefulSets.
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the `--namespace` option. When this option is used, and `--workload` is not used, then the given name is interpreted as the name of the workload and the name of the intercept will be constructed from that name and the namespace.
+
+```
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named "hello" and name the intercept "hello-myns". In order to remove the intercept, you will need to run `telepresence leave hello-myns` instead of just `telepresence leave hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named "hello" and name the intercept "myhello".
+
+## Importing Environment Variables
+
+Telepresence can import the environment variables from the pod that is being intercepted; see [this doc](../environment/) for more details.
+
+## Creating an Intercept Without a Preview URL
+
+If you *are not* logged into Ambassador Cloud, the following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.
+
+```
+telepresence intercept <deployment-name> --port=<TCP-port>
+```
+
+If you *are* logged into Ambassador Cloud, setting the `preview-url` flag to `false` is necessary.
+
+```
+telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+```
+
+This will output a header that you can set on your request for that traffic to be intercepted:
+
+```
+$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+Using Deployment <deployment-name>
+intercepted
+    Intercept name: <deployment-name>
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:<local-port>
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<deployment-name>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster public IP>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <user>@<host>
+```
+
+Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.
+
+## Creating an Intercept When a Service has Multiple Ports
+
+If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services); a sketch of the kubectl step follows.
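+
+For example, a hypothetical `web` service exposing two named ports might describe as shown below, where either the port names (`http`, `https`) or the port numbers (`80`, `443`) could be used as the service port identifier:
+
+```
+$ kubectl describe service web
+...
+Port:        http  80/TCP
+TargetPort:  8080/TCP
+Port:        https  443/TCP
+TargetPort:  8443/TCP
+...
+```
+
+With the port identified, the intercept itself looks like this: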
+
+```
+$ telepresence intercept <base-name> --port=<local-port>:<service-port-identifier>
+Using Deployment <name>
+intercepted
+    Intercept name         : <full-name>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.
+
+If you want to change which port is intercepted, create a new intercept the same way you did above and it will change which service port is being intercepted.
+
+## Creating an Intercept When Multiple Services Match your Workload
+
+Oftentimes there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept. But if you use something like [Argo](../../../../argo/latest/), it uses two services (that use the same labels) to manage traffic between a canary and a stable service.
+
+Fortunately, if you know which service you want to use when intercepting a workload, you can use the `--service` flag. So in the aforementioned demo, if you wanted to use the `echo-stable` service when intercepting your workload, your command would look like this:
+
+```
+$ telepresence intercept echo-rollout-<hash> --port <port> --service echo-stable
+Using ReplicaSet echo-rollout-<hash>
+intercepted
+    Intercept name    : echo-rollout-<hash>
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
diff --git a/docs/telepresence/2.1/reference/rbac.md b/docs/telepresence/2.1/reference/rbac.md
new file mode 100644
index 000000000..76103d3cb
--- /dev/null
+++ b/docs/telepresence/2.1/reference/rbac.md
@@ -0,0 +1,35 @@
+# RBAC
+
+## Necessary RBAC for Users
+
+To use Telepresence, users will need to have at least the following permissions:
+```
+- apiGroups:
+  - ""
+  resources: ["pods"]
+  verbs: ["get", "list", "create", "watch", "delete"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["get", "list", "watch", "update"]
+- apiGroups:
+  - ""
+  resources: ["pods/portforward"]
+  verbs: ["create"]
+- apiGroups:
+  - "apps"
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "update"]
+- apiGroups:
+  - "getambassador.io"
+  resources: ["hosts", "mappings"]
+  verbs: ["*"]
+- apiGroups:
+  - ""
+  resources: ["endpoints"]
+  verbs: ["get", "list", "watch"]
+- apiGroups:
+  - ""
+  resources: ["namespaces"]
+  verbs: ["get", "list", "watch"]
+```
diff --git a/docs/telepresence/2.1/reference/volume.md b/docs/telepresence/2.1/reference/volume.md
new file mode 100644
index 000000000..828ac0583
--- /dev/null
+++ b/docs/telepresence/2.1/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume Mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service-name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
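+
+Inside that subshell you can then browse the Pod's mounted volumes under the mount point. A sketch, assuming the default service account token volume is present; the output is illustrative:
+
+```
+bash-3.2$ ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
+ca.crt     namespace  token
+```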
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or in the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service-name> --port <port> --mount=true -- /bin/bash
+Using Deployment <name>
+intercepted
+    Intercept name    : <full-name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<local-port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prefix paths with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either of the [environment variable flags](../environment/) to retrieve the variable.
diff --git a/docs/telepresence/2.1/troubleshooting/index.md b/docs/telepresence/2.1/troubleshooting/index.md
new file mode 100644
index 000000000..bdfdb8c95
--- /dev/null
+++ b/docs/telepresence/2.1/troubleshooting/index.md
@@ -0,0 +1,41 @@
+---
+description: "Troubleshooting issues related to Telepresence."
+---
+# Troubleshooting
+
+## Creating an Intercept Did Not Generate a Preview URL
+
+Preview URLs are only generated when you are logged into Ambassador Cloud, so that you can use it to manage all your preview URLs. When not logged in, the intercept will not generate a preview URL and will proxy all traffic. Remove the intercept with `telepresence leave [deployment name]`, run `telepresence login` to log in to Ambassador Cloud, then recreate the intercept. See the [intercepts how-to doc](../howtos/intercepts) for more details.
+
+## Error on Accessing Preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on Accessing Preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Your GitHub Organization Isn't Listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a third-party OAuth app. If an org isn't listed during login, then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** → **Settings** → **Applications** → **Authorized OAuth Apps** → **Ambassador Labs**.
An org owner will have a **Grant** button; anyone who is not an owner will have a **Request** button, which sends an email to the owner. If an access request has been denied in the past, the user will not see the **Request** button and will have to reach out to the owner directly.
+
+Once access is granted, log out of Ambassador Cloud and log back in; you should then see the GitHub org listed.
+
+The org owner can go to the **GitHub menu** → **Your organizations** → **[org name]** → **Settings** → **Third-party access** to see if Ambassador Labs already has access or to authorize a request for access (only owners will see **Settings** on the org page). Clicking the pencil icon will show the permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or Requesting Access on Initial Login
+
+When using GitHub as your identity provider, the first time you log in to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to access your orgs and certain user data.
+
+Any listed org with a green check has already granted access to Ambassador Labs (you still need to authorize to allow Ambassador Labs to read your user data and org membership).
+
+Any org with a red X requires access to be granted to Ambassador Labs. Owners of the org will see a **Grant** button. Anyone who is not an owner will see a **Request** button; this sends an email to the org owner requesting approval to access the org. If an access request has been denied in the past, the user will not see the **Request** button and will have to reach out to the owner directly.
+
+Once approval is granted, you will have to log out of Ambassador Cloud and then back in to select the org.
+
diff --git a/docs/telepresence/2.1/versions.yml b/docs/telepresence/2.1/versions.yml
new file mode 100644
index 000000000..e9bc7faa2
--- /dev/null
+++ b/docs/telepresence/2.1/versions.yml
@@ -0,0 +1,4 @@
+version: "2.1.5"
+dlVersion: "2.1.5"
+docsVersion: "2.1"
+productName: "Telepresence"
diff --git a/docs/telepresence/2.10 b/docs/telepresence/2.10
deleted file mode 120000
index 8d1348e84..000000000
--- a/docs/telepresence/2.10
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.10
\ No newline at end of file
diff --git a/docs/telepresence/2.10/ci/github-actions.md b/docs/telepresence/2.10/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.10/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI.
This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence on your CI server, using either the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result, so you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, such as a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`; the version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that holds your Telepresence API key.
See [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
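+If you use GitHub's `gh` CLI, one way to create both repository secrets from your terminal is sketched below; the API key value and the kubeconfig path are placeholders you must supply:
+
+```
+# Store the Telepresence API key as a repository secret
+gh secret set TELEPRESENCE_API_KEY --body "<your-telepresence-api-key>"
+# Store the kubeconfig contents so the workflow can recreate the file
+gh secret set KUBECONFIG_FILE < kubeconfig.yaml
+```
+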
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This configuration file should be placed at `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the `.telepresence/config.yml` file. This is required for Telepresence to run the GitHub Action properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify the placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # Set up the kubeconfig, install and connect Telepresence, then log in with your API key
+         - name: Create kubeconfig file
+           run: |
+             cat << EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+This example workflow:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Includes a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.10/community.md b/docs/telepresence/2.10/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.10/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.10/concepts/context-prop.md b/docs/telepresence/2.10/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.10/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers, so context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation refers to any service or other middleware not stripping away the headers. Propagation facilitates the movement of the injected contexts between other downstream services and processes.
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request's entire path from origin to destination to reply, gathering data on the routes the requests follow and on performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to identify which traffic to intercept, both for plain personal intercepts and for personal intercepts with preview URLs. These techniques are more commonly used for distributed tracing, so this use is a little unorthodox, but the mechanisms are already widely deployed because of the prevalence of tracing. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer's machine. The intercepted traffic can be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
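+
+To make the header mechanics concrete, a W3C Trace Context header such as the one below (the example value from the W3C specification) is the same kind of metadata that Telepresence and tracing tools propagate through every hop:
+
+```
+traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
+```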
diff --git a/docs/telepresence/2.10/concepts/devloop.md b/docs/telepresence/2.10/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.10/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador "
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests; once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. making the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid "live-reload" inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day with a local iterative development loop of 5 minutes (3 minutes coding; 1 minute building, i.e. compiling/deploying/reloading; 1 minute testing/inspecting; and 10-20 seconds for committing code), they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only "developer tax" being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e. containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes (not atypical with a standard container build, registry upload, and deploy), then the number of possible development iterations per day drops to ~40. At the extreme, that's roughly a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.10/concepts/devworkflow.md b/docs/telepresence/2.10/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.10/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn't the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That's the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency.
The promise of a "quicker way to develop software" applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.10/concepts/faster.md b/docs/telepresence/2.10/concepts/faster.md
new file mode 100644
index 000000000..03dc9bd8b
--- /dev/null
+++ b/docs/telepresence/2.10/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: Install the Telepresence Docker extension | Ambassador
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and to reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up so that it is easy to code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging, and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency, and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.10/concepts/intercepts.md b/docs/telepresence/2.10/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.10/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+    <TabContext value={state.curTab}>
+      <AppBar position="static">
+        <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+          <Tab value="regular" label="No intercept"/>
+          <Tab value="global" label="Global intercept"/>
+          <Tab value="personal" label="Personal intercept"/>
+        </TabList>
+      </AppBar>
+      {children}
+    </TabContext>
+  );
+};
+
+# Types of intercepts
+
+
+
+# No intercept
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+# Global intercept
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the Orders service running on your laptop. The users see no change, but with all the traffic coming to your laptop, you can observe and debug with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target cluster. If your service is running in a different namespace than your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is running in the local development environment.
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only some of the traffic to a service while not interfering with the rest of the traffic. This allows you to share a cluster with others on your team without interfering with their work.
+
+Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas. To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).
+
+
+
+In the illustration above, **orange** requests are being made by Developer 2 on their laptop and the **green** are made by a teammate, Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only, while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the header for the sake of the example, but you can use any `key=value` pair you want, or `--http-header=auto` to have it choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target cluster. If your service is running in a different namespace than your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+
+
+ 3. Using the intercept: Send requests to your service without the HTTP header:
+
+    Requests without the header will be sent to the version of your service that is running in the cluster. This enables you to share the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx` flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept and, unlike the `--http-header` flag, it cannot be repeated.
+ +The following flags are available: + +| Flag | Meaning | +|-------------------------------|------------------------------------------------------------------| +| `--http-path-equal <path>` | Only intercept the endpoint for this exact path | +| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix | +| `--http-path-regex <regex>` | Only intercept endpoints that match the given regular expression | + +#### Examples: + +1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api": + + ```shell + telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob + ``` + +2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version": + + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto + ``` + or, since `--http-header=auto` is implicit when using `--http` options, just: + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version + ``` + +3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*": + + ```shell + telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*' + ``` + + + diff --git a/docs/telepresence/2.10/concepts/modes.md b/docs/telepresence/2.10/concepts/modes.md new file mode 100644 index 000000000..3402f07e4 --- /dev/null +++ b/docs/telepresence/2.10/concepts/modes.md @@ -0,0 +1,36 @@ +--- +title: "Modes" +--- + +# Modes + +A Telepresence installation happens in two locations, initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`. +The main component that gets installed into the cluster is known as the Traffic Manager. +The Traffic Manager can be put either into single user mode (the default) or into team mode. +Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way. + +## Single user mode + +In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default. +Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service. +While single user mode is the default, switching back from team mode is done by running: +``` +telepresence helm install --single-user-mode +``` + +## Team mode + +In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in. +Personal intercepts selectively affect HTTP traffic coming into the intercepted workload. +Being in team mode adds an additional layer of confidence for developers working on the same service, knowing their teammates won't interrupt their intercepts by mistake. +Since logins are enforced in this mode as well, you can ensure that everybody on your team takes advantage of Ambassador Cloud features, such as intercept history and saved intercepts. +To switch from single user mode to team mode, run: +``` +telepresence helm install --team-mode +``` + +## Default intercept types based on modes +The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and logins are enforced for users who are not logged in. +When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status. +![mode defaults](../images/mode-defaults.png) \ No newline at end of file diff --git a/docs/telepresence/2.10/doc-links.yml b/docs/telepresence/2.10/doc-links.yml new file mode 100644 index 000000000..268996901 --- /dev/null +++ b/docs/telepresence/2.10/doc-links.yml @@ -0,0 +1,104 @@ +- title: Quick start + link: quick-start +- title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager + link: install/manager/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Cloud Provider Prerequisites + link: install/cloud/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ +- title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: "Making the remote local: Faster feedback, collaboration and debugging" + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: Types of intercepts + link: concepts/intercepts + - title: Modes + link: concepts/modes +- title: How do I... + items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Host a cluster in a local VM + link: howtos/cluster-in-vm + - title: Send requests to an intercepted service + link: howtos/request +- title: Telepresence for Docker + items: + - title: What is Telepresence for Docker + link: extension/intro + - title: Install into Docker Desktop + link: extension/install + - title: Intercept into a Docker Container + link: extension/intercept +- title: Telepresence for CI + items: + - title: GitHub Actions + link: ci/github-actions +- title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd +- title: FAQs + link: faqs +- title: Troubleshooting + link: troubleshooting +- title: Community + link: community +- title: Release Notes + link: release-notes +- title: Licenses
+ link: licenses diff --git a/docs/telepresence/2.10/extension/install.md b/docs/telepresence/2.10/extension/install.md new file mode 100644 index 000000000..471752775 --- /dev/null +++ b/docs/telepresence/2.10/extension/install.md @@ -0,0 +1,39 @@ +--- +title: "Telepresence for Docker installation and connection guide" +description: "Learn how to install and update Ambassador Labs' Telepresence for Docker." +indexable: true +--- + +# Install and connect the Telepresence Docker extension + +[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes. + +## Install Telepresence for Docker + +Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker: + +1. Open Docker Desktop. +2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar. +3. In the Extensions Marketplace, search for the Ambassador Telepresence extension. +4. Click **Install**. + +## Connect to Ambassador Cloud through the Telepresence extension + + After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud. + + 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**. + + 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window. + + 3. Sign in with your Google, GitHub, or GitLab account. + Ambassador Cloud opens to your profile and displays the API key. + + 4. Copy the API key and paste it into the API key field in the Docker Dashboard. + +## Connect to your cluster in Docker Desktop + + 1. Select the desired cluster from the dropdown menu and click **Next**. + This cluster is now set as kubectl's current context. + + 2. Click **Connect to [your cluster]**. + Your cluster is connected and you can now create [intercepts](../intercept/). \ No newline at end of file diff --git a/docs/telepresence/2.10/extension/intercept.md b/docs/telepresence/2.10/extension/intercept.md new file mode 100644 index 000000000..3868407a8 --- /dev/null +++ b/docs/telepresence/2.10/extension/intercept.md @@ -0,0 +1,48 @@ +--- +title: "Create an intercept with Telepresence for Docker" +description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug, " +indexable: true +--- + +# Create an intercept + +With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop. + +## Prerequisites + +Before you begin, you need: +- [Docker Desktop](https://www.docker.com/products/docker-desktop). +- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install). +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool. + +This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
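If you want to sanity-check these prerequisites before you start, a minimal sketch (the `example-service` name is illustrative; substitute your own workload and image):

```shell
# Confirm kubectl points at the target cluster.
kubectl config current-context

# Confirm the Deployment and Service exist in that cluster.
kubectl get deployment,service example-service

# Confirm you have a local image of the service to run.
docker image ls example-service
```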
+ +## Copy the service you want to intercept + +Once you have [installed and connected](../install/) the Telepresence extension, you need to copy the service. To do this, use the `docker run` command with the following flags: + + ```console + $ docker run --rm -it --network host <image> + ``` + +The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster. + +## Intercept a service + +In Docker Desktop, the Telepresence extension shows all the services in the namespace. + + 1. Choose a service to intercept and click the **Intercept** button. + + 2. Select the service port for the intercept from the dropdown. + + 3. Enter the target port of the service you previously copied in the Docker container. + + 4. Click **Submit** to create the intercept. + +The intercept now shows up in the Docker Telepresence extension. + +## Test your code + +Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container. + +Click the globe icon next to your intercept to get the preview URL. From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work. \ No newline at end of file diff --git a/docs/telepresence/2.10/extension/intro.md b/docs/telepresence/2.10/extension/intro.md new file mode 100644 index 000000000..6a653ae06 --- /dev/null +++ b/docs/telepresence/2.10/extension/intro.md @@ -0,0 +1,29 @@ +--- +title: "Telepresence for Docker introduction" +description: "Learn about the Telepresence extension for Docker." +indexable: true +--- + +# Telepresence for Docker + +Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop. + +## What is the Telepresence extension for Docker? + +The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network. + +## What does the Telepresence Docker extension do? + +Telepresence for Docker is designed to simplify your coding experience and test your code faster. Traditionally, you need to build a container within Docker with your code changes, push it, wait for it to upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This makes for a slow and cumbersome process when you need to continually test changes. + +With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate entirely within the Docker runtime, you can make changes without root permission on your machine. + +## How does Telepresence for Docker work?
+ +The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux). + +Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network. + +## What do I need to begin? + +All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). diff --git a/docs/telepresence/2.10/faqs.md b/docs/telepresence/2.10/faqs.md new file mode 100644 index 000000000..3c37f1cc5 --- /dev/null +++ b/docs/telepresence/2.10/faqs.md @@ -0,0 +1,124 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +**Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes. + +**What operating systems does Telepresence work on?** + +Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview. + +**What protocols can be intercepted by Telepresence?** + +All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+ +**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via their namespace qualified DNS name. + +This means you can curl endpoints directly e.g. `curl <service name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means if you `telepresence intercept <service name> -n <namespace>`, you will be able to resolve just the `<service name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name. + +**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database. + +**What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +During first use, Telepresence discovers this information, makes its best guess, and asks you to confirm or update it. + +**Why are my intercepts still reporting as active when they've been disconnected?** + + In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time. + +**Why is my intercept associated with an "Unreported" cluster?** + + Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources. + +**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence. + +**Why does running Telepresence require sudo access for the local daemon?** + +The local daemon needs sudo to create iptables mappings.
Telepresence uses this to create outbound access from the laptop to the cluster. + +On Fedora, Telepresence also creates a virtual network device (a TUN network) for DNS routing. That also requires root access. + +**What components get installed in the cluster when running Telepresence?** + +A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected. + +**How can I remove all of the Telepresence components installed within my cluster?** + +You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and `traffic-agent` containers injected into each pod being intercepted. + +Running this command will also stop the local daemon. + +**What language is Telepresence written in?** + +All components of the Telepresence application and cluster components are written using Go. + +**How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established by using +the `kubectl port-forward` machinery (though without actually spawning +a separate program) to establish a TCP connection to Telepresence +Traffic Manager in the cluster, and running Telepresence's custom VPN +protocol over that TCP connection. + + + +**What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those. + +**Is Telepresence open source?** + +Yes, it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence). + +**How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.10/howtos/cluster-in-vm.md b/docs/telepresence/2.10/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.10/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine." +--- + +# Network considerations for locally hosted clusters + +## The problem +Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes. + +### Example: +A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections.
In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again. + +Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource-intensive (a large amount of very rapid connection requests), it will likely also adversely affect other connections. + +## Solution + +### Create a bridge network +A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host. + +To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details on how to configure the bridge depend on what type of virtualization solution you're using. + +### Vagrant + VirtualBox + k3s example +Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy if you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running. + +#### Vagrantfile +```ruby +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# bridge is the name of the host's default network device +$bridge = 'wlp5s0' + +# default_route should be the IP of the host's default route. +$default_route = '192.168.1.1' + +# nameserver must be the IP of an external DNS, such as 8.8.8.8 +$nameserver = '8.8.8.8' + +# server_name should also be added to the host's /etc/hosts file and point to the server_ip +# for easy access when pushing docker images +server_name = 'multi' + +# static IPs for the server and agents.
Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. + network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: 
"KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.10/howtos/intercepts.md b/docs/telepresence/2.10/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.10/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created and intercept, you can code and debug your associated service locally. + +For a detailed walk-though on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept --port [:] --env-file `. + * For `--port`: specify the port the local instance of your service is running on. 
If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod. + The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`. + ```console + $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env + Using Deployment example-service + intercepted + Intercept name: example-service + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Intercepting : all TCP connections + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + The following are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Query the environment in which you intercepted a service and verify that your local instance is being invoked. + All the traffic previously routed to your Kubernetes Service is now routed to your local environment. + +You can now: +- Make changes on the fly and see them reflected when interacting with + your Kubernetes environment. +- Query services only exposed in your cluster's network. +- Set breakpoints in your IDE to investigate bugs. + + + + **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept. + + diff --git a/docs/telepresence/2.10/howtos/outbound.md b/docs/telepresence/2.10/howtos/outbound.md new file mode 100644 index 000000000..48877df8c --- /dev/null +++ b/docs/telepresence/2.10/howtos/outbound.md @@ -0,0 +1,89 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Proxy outbound traffic to my cluster + +While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster. + + This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running. + +## Proxying outbound traffic + +Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+ +When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect` and enter your password to run the daemon. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.3.7 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''" + [sudo] password: + Connecting to traffic manager... + Connected to context default (https://<cluster public IP>) + ``` + +2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic. + + ``` + $ telepresence status + Root Daemon: Running + Version : v2.3.7 (api 3) + Primary DNS : "" + Fallback DNS: "" + User Daemon: Running + Version : v2.3.7 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https://<cluster public IP> + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 0 total + ``` + +3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster. + + ``` + $ curl web-app.emojivoto:80 + + + + + Emoji Vote + ... + ``` + +If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop. + + ``` + $ telepresence quit + Telepresence Daemon quitting...done + ``` + +When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide. + +## Controlling outbound connectivity + +By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma-separated list of namespaces>` flag to control which namespaces are accessible. + +When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept. + +### Using local-only intercepts + +When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't yet deployed to the cluster, you may need outbound connectivity to the namespace where the service will be deployed, so that it can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. To establish correct origin, a connection must be routed to a `traffic-agent` of an intercepted pod; for local-only intercepts, the outbound connections originate from the `traffic-manager` instead.
+ +To control outbound connectivity to specific namespaces, add the `--local-only` flag: + + ``` + $ telepresence intercept <name of intercept> --namespace <namespace> --local-only + ``` +The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. +You can deactivate the intercept with `telepresence leave <name of intercept>`. This removes unqualified name access. + +### Proxy outbound connectivity for laptops + +To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details. diff --git a/docs/telepresence/2.10/howtos/preview-urls.md b/docs/telepresence/2.10/howtos/preview-urls.md new file mode 100644 index 000000000..c1bbe3fee --- /dev/null +++ b/docs/telepresence/2.10/howtos/preview-urls.md @@ -0,0 +1,101 @@ +--- +title: "Share dev environments with preview URLs | Ambassador" +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Share development environments with preview URLs + +Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual. + +Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators. + +## Creating a preview URL + +1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed. +Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service. + +2. Enter `telepresence login` to launch Ambassador Cloud in your browser. + + If you are in an environment where Telepresence cannot launch in a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/). + +3. Start the intercept with `telepresence intercept <service name> --port <port> --env-file <path to env file> --mechanism http` and adjust the flags as follows: + * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod. + * You can remove the **--mechanism http** flag if you have your traffic-manager set to *team-mode*. + +4. Answer the question prompts. + The example below shows a preview URL for `example-service` which listens on port 8080.
The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`: + + ```console +$ telepresence intercept example-service --mechanism http --ingress-host ambassador.ambassador --ingress-port 443 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env + + Using deployment example-service + intercepted + Intercept name : example-service + State : ACTIVE + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service") + Preview URL : https://<random subdomain>.preview.edgestack.me + Layer 5 Hostname : dev-environment.edgestack.me + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + Here are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Go to the Preview URL generated from the intercept. +Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress. + + + Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation. + + +7. Make a request on the URL you would usually query for that environment. This request should not be routed to your laptop. + + Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster as usual. + +8. Share with a teammate. + + You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop. + You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+ +## Change access restrictions + +To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible. + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Select the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Make Publicly Accessible**. + +Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running in your local environment. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. + +## Remove a preview URL from an Intercept + +To delete a preview URL and remove all access to the intercepted service: + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Click on the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Remove Preview**. + +Alternatively, you can remove a preview URL with the following command: +`telepresence preview remove <preview URL name>` diff --git a/docs/telepresence/2.10/howtos/request.md b/docs/telepresence/2.10/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.10/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). + 2. Navigate to the desired service Intercepts page. + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
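For example, a query for a personal intercept typically just adds the intercept header to an ordinary request. A minimal sketch, reusing the example header from the intercepts concept page (the hostname is illustrative):

```console
$ curl -H 'Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b' https://dev-environment.edgestack.me/api/
```

Requests without the header continue to reach the version of the service running in the cluster.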
diff --git a/docs/telepresence/2.10/images/container-inner-dev-loop.png b/docs/telepresence/2.10/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.10/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.10/images/docker-header-containers.png b/docs/telepresence/2.10/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.10/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.10/images/github-login.png b/docs/telepresence/2.10/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.10/images/github-login.png differ diff --git a/docs/telepresence/2.10/images/logo.png b/docs/telepresence/2.10/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.10/images/logo.png differ diff --git a/docs/telepresence/2.10/images/mode-defaults.png b/docs/telepresence/2.10/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/2.10/images/mode-defaults.png differ diff --git a/docs/telepresence/2.10/images/split-tunnel.png b/docs/telepresence/2.10/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.10/images/split-tunnel.png differ diff --git a/docs/telepresence/2.10/images/tracing.png b/docs/telepresence/2.10/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.10/images/tracing.png differ diff --git a/docs/telepresence/2.10/images/trad-inner-dev-loop.png b/docs/telepresence/2.10/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.10/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.10/images/tunnelblick.png b/docs/telepresence/2.10/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.10/images/tunnelblick.png differ diff --git a/docs/telepresence/2.10/images/vpn-dns.png b/docs/telepresence/2.10/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.10/images/vpn-dns.png differ diff --git a/docs/telepresence/2.10/install/cloud.md b/docs/telepresence/2.10/install/cloud.md new file mode 100644 index 000000000..9bcf9e63e --- /dev/null +++ b/docs/telepresence/2.10/install/cloud.md @@ -0,0 +1,43 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. 
+For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`: + +```bash +$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock + masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28 + +$ gcloud compute firewall-rules list \ + --filter 'name~^gke-tele-webhook-gke' \ + --format 'table( + name, + network, + direction, + sourceRanges.list():label=SRC_RANGES, + allowed[].map().firewall_rule().list():label=ALLOW, + targetTags.list():label=TARGET_TAGS + )' + +NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS +gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node +# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node + +$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \ + --action ALLOW \ + --direction INGRESS \ + --source-ranges 172.16.0.0/28 \ + --rules tcp:8443 \ + --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False +``` diff --git a/docs/telepresence/2.10/install/helm.md b/docs/telepresence/2.10/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.10/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1.
If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. +It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
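If you need to check which Helm release has claimed a given namespace, you can inspect that ConfigMap's Helm ownership annotations; a quick sketch, assuming the `staging` namespace from the example below:

```shell
kubectl get configmap traffic-manager-claim --namespace staging \
  -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-namespace}'
```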
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, remove the overlap by taking `staging` out of either the first install or the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster and want to make sure they only have the minimum set required to run telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook uses the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager requires some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+create only the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
diff --git a/docs/telepresence/2.10/install/index.md b/docs/telepresence/2.10/install/index.md
new file mode 100644
index 000000000..624cb33d6
--- /dev/null
+++ b/docs/telepresence/2.10/install/index.md
@@ -0,0 +1,153 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. 
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
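+
+For example, on Linux you might fetch and install the most recent nightly build directly, mirroring the manual install steps above (a sketch using the Linux URL from the list below):
+
+```shell
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence -o /usr/local/bin/telepresence
+sudo chmod a+x /usr/local/bin/telepresence
+```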
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
diff --git a/docs/telepresence/2.10/install/manager.md b/docs/telepresence/2.10/install/manager.md
new file mode 100644
index 000000000..4efdc3c69
--- /dev/null
+++ b/docs/telepresence/2.10/install/manager.md
@@ -0,0 +1,85 @@
+# Install/Uninstall the Traffic Manager
+
+Telepresence uses a Traffic Manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The telepresence CLI can install the Traffic Manager for you. The basic install deploys the same version as the client in use.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a `values.yaml` file with your config values.
+
+2. Run the install command with the `--values` flag set to the path to your values file:
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag:
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+## Uninstall
+
+The telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents it installed using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager`, you can add the flag `--set ambassador-agent.enabled=false` to leave out the ambassador-agent, since Emissary and Edge Stack already include this agent within their deployments.
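+
+A minimal sketch of such an install, using the `telepresence helm install` command described above:
+
+```shell
+telepresence helm install --set ambassador-agent.enabled=false
+```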
+
+If your namespace runs with tight security parameters, you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=`.
+The [API Key](../reference/client/login.md) will be created as a secret, and your agent will use it upon start-up. Telepresence will not override an API key given via Helm.
+
+### Creating a secret manually
+The Ambassador Agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself; this API key will always be used.
+
+ ```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  # The secret name must end in "agent-cloud-token"; the namespace should
+  # match your traffic-manager install. Both values here are placeholders
+  # reconstructed from the surrounding text.
+  name: agent-cloud-token
+  namespace: ambassador
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <base64-encoded API key>
+EOF
+ ```
diff --git a/docs/telepresence/2.10/install/migrate-from-legacy.md b/docs/telepresence/2.10/install/migrate-from-legacy.md
new file mode 100644
index 000000000..94307dfa1
--- /dev/null
+++ b/docs/telepresence/2.10/install/migrate-from-legacy.md
@@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referred to as Telepresence 2, the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today using the commands you are already used to,
+while it helps you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                      |
+|------------------------------------------------|-------------------------------------------|
+| --swap-deployment $workload                   | intercept $workload                       |
+| --expose localPort[:remotePort]               | intercept --port localPort[:remotePort]   |
+| --swap-deployment $workload --run-shell       | intercept $workload -- bash               |
+| --swap-deployment $workload --run $cmd        | intercept $workload -- $cmd               |
+| --swap-deployment $workload --docker-run $cmd | intercept $workload --docker-run -- $cmd  |
+| --run-shell                                   | connect -- bash                           |
+| --run $cmd                                    | connect -- $cmd                           |
+| --env-file,--env-json                         | --env-file, --env-json (haven't changed)  |
+| --context,--namespace                         | --context, --namespace (haven't changed)  |
+| --mount,--docker-mount                        | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain.
There's no harm in leaving the agent running alongside your service, but when you
want to remove the agents from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
diff --git a/docs/telepresence/2.10/install/upgrade.md b/docs/telepresence/2.10/install/upgrade.md
new file mode 100644
index 000000000..8272b4844
--- /dev/null
+++ b/docs/telepresence/2.10/install/upgrade.md
@@ -0,0 +1,81 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
+
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . 
'.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster.
diff --git a/docs/telepresence/2.10/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.10/quick-start/TelepresenceQuickStartLanding.js
new file mode 100644
index 000000000..bd375dee0
--- /dev/null
+++ b/docs/telepresence/2.10/quick-start/TelepresenceQuickStartLanding.js
@@ -0,0 +1,118 @@
+import queryString from 'query-string';
+import React, { useEffect, useState } from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../src/components/Icon';
+import Link from '../../../../src/components/Link';
+
+import './telepresence-quickstart-landing.less';
+
+// NOTE: the original <svg>/<path> markup of this icon was not recoverable;
+// the shape below is a placeholder right-arrow, not the original artwork.
+/** @type React.FC<React.SVGProps<SVGSVGElement>> */
+const RightArrow = (props) => (
+  <svg {...props} viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
+    <path d="M10 6l6 6-6 6" stroke="currentColor" strokeWidth="2" fill="none" />
+  </svg>
+);
+
+const TelepresenceQuickStartLanding = () => {
+  const [getStartedUrl, setGetStartedUrl] = useState(
+    'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start',
+  );
+
+  const getUrlFromQueryParams = () => {
+    const { docs_source, docs_campaign } = queryString.parse(
+      window.location.search,
+    );
+
+    if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') {
+      setGetStartedUrl(
+        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops',
+      );
+    } else if (
+      docs_source === 'cloud-quickstart-ad' &&
+      docs_campaign === 'environments'
+    ) {
+      setGetStartedUrl(
+        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments',
+      );
+    }
+  };
+
+  useEffect(() => {
+    getUrlFromQueryParams();
+  }, []);
+
+  return (
+    // NOTE: approximate reconstruction. The original element types, class
+    // names, and link targets are unknown; only the rendered text is
+    // preserved from the source.
+    <div className="telepresence-quickstart-landing">
+      <h1>Telepresence</h1>
+      <p>
+        Set up your ideal development environment for Kubernetes in seconds.
+        Accelerate your inner development loop with hot reload using your
+        existing IDE, and workflow.
+      </p>
+
+      <div>
+        <div>
+          <h2>Set Up Telepresence with Ambassador Cloud</h2>
+          <p>
+            Seamlessly integrate Telepresence into your existing Kubernetes
+            environment by following our 3-step setup guide.
+          </p>
+          <Link to={getStartedUrl}>
+            Get Started <RightArrow />
+          </Link>
+          <p>
+            <strong>Do it Yourself:</strong>{' '}
+            <Link to="../install/">install Telepresence</Link> and manually
+            connect to your Kubernetes workloads.
+          </p>
+        </div>
+
+        <div>
+          <h2>What Can Telepresence Do for You?</h2>
+          <p>Telepresence gives Kubernetes application developers:</p>
+          <ul>
+            <li>Instant feedback loops</li>
+            <li>Remote development environments</li>
+            <li>Access to your favorite local tools</li>
+            <li>Easy collaborative development with teammates</li>
+          </ul>
+          <Link to="https://www.getambassador.io/products/telepresence/">
+            LEARN MORE{' '}
+            <RightArrow />
+          </Link>
+          {/* The original component rendered an <Embed /> video here; its
+              source was not recoverable. */}
+        </div>
+      </div>
+    </div>
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.10/quick-start/demo-node.md b/docs/telepresence/2.10/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.10/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+
+* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
+* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
+* [3. Set up your local development environment](#3-set-up-your-local-development-environment)
+* [4. Testing our fix](#4-testing-our-fix)
+* [5. Preview URLs](#5-preview-urls)
+* [6. How/Why does this all work](#6-howwhy-does-this-all-work)
+* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next)
+
+</div>
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services. Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+ If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩!
+
+ Congratulations! You've successfully accessed the Emojivoto application on your remote cluster.
+
+## 3. Set up your local development environment
+
+We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+
+
+
+
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+ Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs enable this easy collaboration.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
+Now you're able to share your fix in your local environment with your team!
+
+
+ To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
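+When you're done sharing, you can stop the intercept from your Docker container. A quick sketch, using the intercept created in step 4 above:
+
+   ```
+   telepresence leave web
+   ```
+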
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.10/quick-start/demo-react.md b/docs/telepresence/2.10/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.10/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+
+* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Set up the sample application](#3-set-up-the-sample-application)
+* [4. Test app](#4-test-app)
+* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service)
+* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next)
+
+</div>
+
+In this guide, we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+ While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+
+## 1. Download the demo cluster archive
+
+1. 
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+ This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME               STATUS   ROLES                  AGE     VERSION
+   tpdemo-prod-1234   Ready    control-plane,master   5d10h   v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
+`telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+ macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+
+ You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+
+## 2. Test Telepresence
+
+[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+`telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+ The 401 response is expected. What's important is that you were able to contact the API.
+
+ Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+
+1. Clone the emojivoto app:
+`git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+`kubectl apply -k emojivoto/kustomize/deployment`
+
+1. Change the kubectl namespace:
+`kubectl config set-context --current --namespace=emojivoto`
+
+1. 
List the Services:
+`kubectl get svc`
+
+   ```
+   $ kubectl get svc
+
+   NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+   emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
+   voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
+   web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
+   web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
+   ```
+
+1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form of `service.namespace`.
+
+ Congratulations, you can now access services running in your cluster by name from your laptop!
+
+## 4. Test app
+
+1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.
+
+1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also, an error is shown on the browser dev console:
+`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`
+
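+ Since Telepresence is still connected, you can also reproduce the failing call directly from your terminal. A quick sketch, using the same URL the browser reports:
+
+   ```
+   curl -i "http://web-svc.emojivoto:8080/api/vote?choice=:doughnut:"
+   ```
+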
+ Open the dev console in Safari with Option + ⌘ + C. +
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨ Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster.
+
+ Victory, your local React server is running a-ok!
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code, then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+ This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+ The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.10/quick-start/go.md b/docs/telepresence/2.10/quick-start/go.md new file mode 100644 index 000000000..c926d7b05 --- /dev/null +++ b/docs/telepresence/2.10/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+ If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we have prepared a Docker container with this service running and everything you need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. In order to fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5:
+
+   ```go
+   3  import (
+   4   "context"
+   5   "fmt" // DELETE THIS LINE
+   6
+   7   pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` on Windows, `Cmd+s` on Mac, or `Menu -> File -> Save`) and verify that the error is fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic. 
The `voting-svc` traffic is now destined for the local Dockerized version of the service: the intercept routes *all the traffic* for `voting-svc` to your fixed local copy.
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled through your service, all thanks to the preview URL.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.
+
+
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
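+At any point, you can check which workloads are currently intercepted from the same terminal. A quick sketch (the exact output depends on your cluster):
+
+   ```
+   $ telepresence list
+   ```
+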
+ +## What's Next? + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.10/quick-start/index.md b/docs/telepresence/2.10/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.10/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.10/quick-start/qs-cards.js b/docs/telepresence/2.10/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+    // NOTE: approximate reconstruction. The original grid props, class
+    // names, and link targets are unknown; only the card titles and text are
+    // preserved from the source.
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        <Grid item xs={4}>
+          <Paper className={classes.paper} elevation={3}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/preview-urls/">Collaborating</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Use preview URLs to collaborate with your colleagues and others
+              outside of your organization.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper} elevation={3}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../howtos/outbound/">Outbound Sessions</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              While connected to the cluster, your laptop can interact with
+              services as if it were another pod in the cluster.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper} elevation={3}>
+            <Typography variant="h6" component="h2">
+              <GatsbyLink to="../faqs/">FAQs</GatsbyLink>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Learn more about use cases and the technical implementation of
+              Telepresence.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+ ); +} diff --git a/docs/telepresence/2.10/quick-start/qs-go.md b/docs/telepresence/2.10/quick-start/qs-go.md new file mode 100644 index 000000000..2e140f6a7 --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Go application](#3-install-a-sample-go-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next)
+
+</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://github.com/pilu/fresh) to auto-reload the Go server whenever you change the code later on. Confirm it is installed by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+ Install Go from here and set your GOPATH if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+ Victory, your local Go server is running a-ok!
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require building a container image, pushing it to a registry, and redeploying. +
+ With Telepresence, these changes happen instantly. +
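
If you want to double-check the intercept from the command line before moving on, `telepresence list` summarizes the workloads Telepresence can see and which of them are currently intercepted. The output below is a sketch; it is abbreviated and its exact format varies by Telepresence version:

```
$ telepresence list

dataprocessingservice: intercepted
   Intercept name : dataprocessingservice
   State          : ACTIVE
...
```
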
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.10/quick-start/qs-java.md b/docs/telepresence/2.10/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require building a container image, pushing it to a registry, and redeploying. +
+ With Telepresence, these changes happen instantly. +
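
You can confirm the same change from the command line. Since your restarted local Java server picked up the new value, the color endpoint you curled earlier should now report it:

```
$ curl localhost:3000/color

"orange"
```
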
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.10/quick-start/qs-node.md b/docs/telepresence/2.10/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require building a container image, pushing it to a registry, and redeploying. +
+ With Telepresence, these changes happen instantly. +
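
If the app doesn't update as expected, it helps to confirm that Telepresence itself is still healthy before debugging further. `telepresence status` reports on the local daemons and the cluster connection; the exact fields vary by version, but you should see both daemons running:

```
$ telepresence status

Root Daemon: Running
  ...
User Daemon: Running
  ...
```
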
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.10/quick-start/qs-python-fastapi.md b/docs/telepresence/2.10/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+Python 2.x: `pip install fastapi uvicorn requests && python app.py` +Python 3.x: `pip3 install fastapi uvicorn requests && python3 app.py` + + ``` + $ pip install fastapi uvicorn requests && python app.py + + Collecting fastapi + ... + Application startup complete. + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local service is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require building a container image, pushing it to a registry, and redeploying. +
+ With Telepresence, these changes happen instantly. +
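
Because the intercept captures all TCP connections to the service, the change is visible no matter how DataProcessingService is reached. As one more way to verify it (this sketch assumes the sample manifests expose the service on port 3000 and that your `telepresence connect` session is still active), you can curl the service by its in-cluster `service.namespace` name from your laptop:

```
$ curl dataprocessingservice.default:3000/color

"orange"
```
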
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.10/quick-start/qs-python.md b/docs/telepresence/2.10/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.10/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...

   ```

   Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local Python server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name : dataprocessingservice
   State          : ACTIVE
   Workload kind  : Deployment
   Destination    : 127.0.0.1:3000
   Intercepting   : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require building a container image, pushing it to a registry, and redeploying. +
+ With Telepresence, these changes happen instantly. +
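
Under the hood, Telepresence manages the intercept by injecting a traffic-agent sidecar container into the intercepted workload's pods. If you're curious, you can list the container names with kubectl; the `app=dataprocessingservice` label selector and the application container name below are assumptions based on the sample manifests:

```
$ kubectl get pods -l app=dataprocessingservice \
    -o jsonpath='{.items[*].spec.containers[*].name}'

dataprocessingservice traffic-agent
```
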
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.10/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.10/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.10/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.10/redirects.yml b/docs/telepresence/2.10/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.10/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.10/reference/architecture.md b/docs/telepresence/2.10/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.10/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the cluster's
+network, handling both connectivity to the cluster and intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment and, if it is missing, attempts to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon will take arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should be run on and to provide configuration similar to what would be provided when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews. It automatically creates intercepts against development images built
+by CI, so that changes from a pull request can be quickly visualized in a live cluster before they land; when using Deployment Previews,
+the Preview URL link is posted to the associated GitHub pull request.
+
+See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews
+or for a reference on how the Pod-Daemon can be manually deployed to the cluster.
+
+
+## Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.10/reference/client.md b/docs/telepresence/2.10/reference/client.md new file mode 100644 index 000000000..491dbbb8e --- /dev/null +++ b/docs/telepresence/2.10/reference/client.md @@ -0,0 +1,31 @@ +---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service. Run it with the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
+| `loglevel` | Temporarily changes the log level of the traffic-manager, traffic-agents, and the user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI + Traffic Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment | diff --git a/docs/telepresence/2.10/reference/client/login.md b/docs/telepresence/2.10/reference/client/login.md new file mode 100644 index 000000000..fc90ea385 --- /dev/null +++ b/docs/telepresence/2.10/reference/client/login.md @@ -0,0 +1,61 @@ +# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud).
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to run
+`telepresence login` explicitly; doing so is only truly needed
+when you require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs
+an enhanced Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts from
+Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/.
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or simply "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
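+
+For example, in a CI environment you might store the key in a secret and log in non-interactively; the environment variable name used here is a hypothetical placeholder:
+
+```console
+$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
+Login successful.
+```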
\ No newline at end of file diff --git a/docs/telepresence/2.10/reference/client/login/apikey-2.png b/docs/telepresence/2.10/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.10/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.10/reference/client/login/apikey-3.png b/docs/telepresence/2.10/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.10/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.10/reference/client/login/apikey-4.png b/docs/telepresence/2.10/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.10/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.10/reference/cluster-config.md b/docs/telepresence/2.10/reference/cluster-config.md new file mode 100644 index 000000000..087bbf9af --- /dev/null +++ b/docs/telepresence/2.10/reference/cluster-config.md @@ -0,0 +1,363 @@ +import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).
+
+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.
+
+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.
+
+### Values
+To add configuration, create a YAML file with the configuration values and then use it when executing `telepresence helm install [--upgrade] --values <path/to/values.yaml>`.
+
+## Client Configuration
+
+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value        | Resulting action                                                                                                               |
+|--------------|------------------------------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default.  |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port                             |
+| `http`       | Telepresence will use http                                                                                                     |
+| `http2`      | Telepresence will use http2                                                                                                    |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols
+are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Envoy Configuration
+
+The `agent.envoy` structure contains three values:
+
+| Setting      | Meaning                                                  |
+|--------------|----------------------------------------------------------|
+| `logLevel`   | Log level used by the Envoy proxy. Defaults to "warning" |
+| `serverPort` | Port used by the Envoy server. Default 18000.            |
+| `adminPort`  | Port used for Envoy administration. Default 19000.       |
+
+#### Image Configuration
+
+The `agent.image` structure contains the following values:
+
+| Setting    | Meaning                                                                       |
+|------------|-------------------------------------------------------------------------------|
+| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire".  |
+| `name`     | The name of the image. Retrieved from Ambassador Cloud if not set.           |
+| `tag`      | The tag of the image. Retrieved from Ambassador Cloud if not set.            |
+
+#### Log level
+
+The `agent.logLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.
+
+#### Resources
+
+The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.
+
+## TLS
+
+In this example, other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`), you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
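+
+For reference, here is a minimal sketch of the kind of Secret the terminating annotation could point at; the name and the base64 payloads are hypothetical placeholders:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: your-terminating-secret
+type: kubernetes.io/tls
+data:
+  tls.crt: LS0tLS1CRUdJTi4uLg==  # base64-encoded server certificate (placeholder)
+  tls.key: LS0tLS1CRUdJTi4uLg==  # base64-encoded private key (placeholder)
+```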
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod. The Secret will be mounted into the traffic agent's container.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air-gapped cluster
+
+
+If your cluster is on an isolated network such that it cannot
+communicate with Ambassador Cloud, then some additional configuration
+is required to acquire a license key in order to use personal
+intercepts.
+
+### Create a license
+
+1.
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:
+
+   ```
+   $ telepresence current-cluster-id
+
+     Cluster ID:
+   ```
+
+4. Click *Generate API Key* to finish generating the license.
+
+5. On the licenses page, download the license file associated with your cluster.
+
+### Add license to cluster
+There are two separate ways you can add the license to your cluster: manually creating and deploying
+the license secret, or having the Helm chart manage the secret.
+
+You only need to do one of the two options.
+
+#### Manual deploy of license secret
+
+1. Use this command to generate a Kubernetes Secret config using the license file:
+
+   ```
+   $ telepresence license -f <path-to-license-file>
+
+     apiVersion: v1
+     data:
+       hostDomain:
+       license:
+     kind: Secret
+     metadata:
+       creationTimestamp: null
+       name: systema-license
+       namespace: ambassador
+   ```
+
+2. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
+the following into a file (for the example, we'll assume it's called license-values.yaml):
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     secret:
+       # This tells the helm chart not to create the secret since you've created it yourself
+       create: false
+   ```
+
+4. Install the Helm chart into the cluster:
+
+   ```
+   telepresence helm install -f license-values.yaml
+   ```
+
+5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
+pulled and in a registry your cluster can pull from.
+
+6. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agent.
+
+#### Helm chart manages the secret
+
+1. Get the JWT from the downloaded license file:
+
+   ```
+   $ cat ~/Downloads/ambassador.License_for_yourcluster
+   eyJhbGnotarealtoken.butanexample
+   ```
+
+2. Create the following values file, substituting your real JWT for the one used in the example below
+(for this example, we'll assume the following is placed in a file called license-values.yaml):
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     # This is the value from the license file you download. this value is an example and will not work
+     value: eyJhbGnotarealtoken.butanexample
+     secret:
+       # This tells the helm chart to create the secret
+       create: true
+   ```
+
+3. Install the Helm chart into the cluster:
+
+   ```
+   telepresence helm install -f license-values.yaml
+   ```
+
+Users will now be able to use personal intercepts with the
+`--preview-url=false` flag.
Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Ignore Certain Volume Mounts
+
+An annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.
+
+```diff
+ spec:
+   template:
+     metadata:
+       annotations:
++        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the `targetPort` of your intercepted service points at a port number, in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+
+If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).
+
+For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: your-service
+spec:
+  type: ClusterIP
+  selector:
+    service: your-service
+  ports:
+    - port: 80
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: your-service
+  labels:
+    service: your-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: your-service
+  template:
+    metadata:
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
+      labels:
+        service: your-service
+    spec:
+      containers:
+        - name: your-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+``` diff --git a/docs/telepresence/2.10/reference/config.md b/docs/telepresence/2.10/reference/config.md new file mode 100644 index 000000000..e69c77daa --- /dev/null +++ b/docs/telepresence/2.10/reference/config.md @@ -0,0 +1,349 @@ +# Laptop-side configuration
+
+There are a number of configuration values that can be tweaked to change how Telepresence behaves.
+These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
+One important exception is the location of the Traffic Manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect.
+
+## Global Configuration
+
+Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
+To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.
+
+### Values
+
+The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
+```yaml
+client:
+  timeouts:
+    agentInstall: 1m
+    intercept: 10s
+  logLevels:
+    userDaemon: debug
+  images:
+    registry: privateRepo # This overrides the default docker.io/datawire repo
+    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+  cloud:
+    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+  grpc:
+    maxReceiveSize: 10Mi
+  telepresenceAPI:
+    port: 9980
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    lookupTimeout: 30s
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Timeouts
+
+Values for `client.timeouts` are all durations either as a number of seconds
+or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
+can be fractional (`1.5h`) or combined (`2h45m`).
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|-------------------------|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|------------|
+| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `client.logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+
+For whichever log level you select, you will get logs labeled with that level and of higher severity
+(e.g. if you use `info`, you will also get logs labeled `error`; you will NOT get logs labeled `debug`).
+
+These are the valid fields for the `client.logLevels` key:
+
+| Field | Description | Type | Default |
+|--------------|---------------------------------------------------------------------|---------------------------------------------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
+ +These are the valid fields for the `client.images` key: + +| Field | Description | Type | Default | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------| +| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `client.cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. 
You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h |
+| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
+| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |
+
+Telepresence attempts to auto-detect if the cluster is capable of
+communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with
+Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled
+as well, or would like it to not attempt to communicate with
+Ambassador Cloud at all (even for the auto-detection), then be sure to
+set the `skipLogin` value to `true`.
+
+Reminder: To use personal intercepts, which normally require a login,
+you must have a license key in your cluster and specify which
+`agentImage` should be installed by also adding the following to your
+`config.yml`:
+
+```yaml
+images:
+  agentImage: <registry>/<agent-image>
+```
+
+#### Grpc
+The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Intercept
+The `intercept` settings control how Telepresence will intercept communications to the intercepted service.
+
+The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".
+
+The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:
+
+| Value        | Resulting action                                                                                         |
+|--------------|----------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2  |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port       |
+| `http`       | Telepresence will use http                                                                               |
+| `http2`      | Telepresence will use http2                                                                              |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+#### Daemons
+
+`client.daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+### DNS
+
+The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
+
+| Field | Description | Type | Default |
+|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|--------------------------------------------------------------------------|
+| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookupTimeout` | Maximum time to wait for a cluster side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+Here is an example values.yaml:
+```yaml
+client:
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    localIP: 8.8.8.8
+    lookupTimeout: 30s
+```
+
+### Routing
+
+#### AlsoProxySubnets
+
+When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
+All connections to addresses that the subnet spans will be dispatched to the cluster.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+```yaml
+client:
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### NeverProxySubnets
+
+When using `neverProxySubnets` you provide a list of subnets. These will never be routed via the TUN device,
+even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+Telepresence connects is the route they will keep.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
+Usually this means that the most specific route wins.
+
+So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:
+
+```yaml
+neverProxySubnets: [10.0.0.0/16]
+alsoProxySubnets: [10.0.5.0/24]
+```
+
+Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:
+
+```yaml
+alsoProxySubnets: [10.0.0.0/16]
+neverProxySubnets: [10.0.5.0/24]
+```
+
+Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.
+
+## Local Overrides
+
+In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
+There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
+that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.
+
+### Config for all clusters
+Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
+The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
+The definitions of these values are identical to those values in the `client` config above.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+cloud:
+  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+grpc:
+  maxReceiveSize: 10Mi
+telepresenceAPI:
+  port: 9980
+```
+
+
+## Workstation Per-Cluster Configuration
+
+Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file.
+It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
+that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
+An important exception to this is the [`manager.namespace` configuration](#Manager) which must be set locally.
+
+### Values
+
+The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
+
+Example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+        dns:
+          include-suffixes: [.private]
+          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
+          local-ip: 8.8.8.8
+          lookup-timeout: 30s
+        never-proxy: [10.0.0.0/16]
+        also-proxy: [10.0.5.0/24]
+  name: example-cluster
+```
+
+#### Manager
+
+This is the one cluster configuration that cannot be set using the Helm chart because it defines how Telepresence connects to
+the Traffic Manager. When it is not the default, that setting needs to be configured in the workstation's kubeconfig for the cluster.
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+        - name: telepresence.io
+          extension:
+            manager:
+              namespace: staging
+    name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45 diff --git a/docs/telepresence/2.10/reference/dns.md b/docs/telepresence/2.10/reference/dns.md new file mode 100644 index 000000000..2f263860e --- /dev/null +++ b/docs/telepresence/2.10/reference/dns.md @@ -0,0 +1,80 @@ +# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service-name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<name>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected as Telepresence cannot reach the service yet by short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>Emoji Vote</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+      Intercept name    : web
+      State             : ACTIVE
+      Workload kind     : Deployment
+      Destination       : 127.0.0.1:8080
+      Volume Mount Point: /tmp/telfs-166119801
+      Intercepting      : HTTP requests that match all headers:
+        'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  <!DOCTYPE html>
+  <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>Emoji Vote</title>
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups. diff --git a/docs/telepresence/2.10/reference/docker-run.md b/docs/telepresence/2.10/reference/docker-run.md new file mode 100644 index 000000000..8aa7852e5 --- /dev/null +++ b/docs/telepresence/2.10/reference/docker-run.md @@ -0,0 +1,31 @@ +---
+description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept <service-name> --port <port> --docker-run -- <docker-run-arguments>`
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces
+- `--env-file <file>` Loads the intercepted environment
+- `--name intercept-<intercept-name>-<namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-p <port>` The local port for the intercept and the container port
+- `-v <mount>` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info diff --git a/docs/telepresence/2.10/reference/environment.md b/docs/telepresence/2.10/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.10/reference/environment.md @@ -0,0 +1,46 @@ +---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code for the intercepted service running on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, for example Bash:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers, and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the `x-intercept-id` HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do, instead filter based on a matching `x-intercept-id` header.
This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence/2.10/reference/inside-container.md b/docs/telepresence/2.10/reference/inside-container.md
new file mode 100644
index 000000000..637e0cdfd
--- /dev/null
+++ b/docs/telepresence/2.10/reference/inside-container.md
@@ -0,0 +1,37 @@
+# Running Telepresence inside a container
+
+It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.
+
+## Building the container
+
+Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:
+
+```Dockerfile
+# Dockerfile with telepresence and its prerequisites
+FROM alpine:3.13
+
+# Install Telepresence prerequisites
+RUN apk add --no-cache curl iproute2 sshfs
+
+# Download and install the telepresence binary
+RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
+    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
+```
+In order to build the container, do this in the same directory as the `Dockerfile`:
+```
+$ docker build -t tp-in-docker .
+```
+
+## Running the container
+
+Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.
+
+The command to run the container can look like this:
+```bash
+$ docker run \
+  --cap-add=NET_ADMIN \
+  --device /dev/net/tun:/dev/net/tun \
+  --network=host \
+  -v ~/.kube/config:/root/.kube/config \
+  -it --rm tp-in-docker
+```
diff --git a/docs/telepresence/2.10/reference/intercepts/index.md b/docs/telepresence/2.10/reference/intercepts/index.md
new file mode 100644
index 000000000..08e40a60d
--- /dev/null
+++ b/docs/telepresence/2.10/reference/intercepts/index.md
@@ -0,0 +1,403 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, Telepresence installs a *traffic-agent*
+sidecar into the workload. That traffic-agent supports one or more
+intercept *mechanisms* that it uses to decide which traffic to
+intercept. Telepresence has a simple default traffic-agent; however,
+you can configure a different traffic-agent with more sophisticated
+mechanisms either by setting the [`images.agentImage` field in
+`config.yml`](../config/#images) or by writing an
+`extensions/${extension}.yml` file that tells
+Telepresence about a traffic-agent that it can use, what mechanisms
+that traffic-agent supports, and command-line flags to expose to the
+user to configure that mechanism. You may tell Telepresence which
+known mechanism to use with the `--mechanism=${mechanism}` flag or by
+setting one of the `--${mechanism}-XXX` flags, which implicitly set
+the mechanism; for example, setting `--http-header=auto` implicitly
+sets `--mechanism=http`.
+
+The default open-source traffic-agent only supports the `tcp`
+mechanism, which treats the raw layer 4 TCP streams as opaque and
+sends all of that traffic down to the developer's workstation. This
+means that it is a "global" intercept, affecting all users of the
+cluster.
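+
+As a minimal sketch, the default mechanism can also be requested explicitly on the command line (the workload name `example-api` here is a placeholder, not a name from these docs):
+
+```console
+$ telepresence intercept example-api --port 8080 --mechanism=tcp
+```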
+
+In addition to the default open-source traffic-agent, Telepresence
+already knows about the Ambassador Cloud
+traffic-agent, which supports the `http`
+mechanism. The `http` mechanism operates at a higher layer, working
+with layer 7 HTTP, and may intercept specific HTTP requests, allowing
+other HTTP requests through to the regular service. This allows for
+"personal" intercepts which only intercept traffic tagged as belonging
+to a given developer.
+
+[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
+[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50
+
+## Intercept behavior when logged in to Ambassador Cloud
+
+Logging in to Ambassador Cloud (with [`telepresence
+login`](../client/login/)) changes the Telepresence defaults in two
+ways.
+
+First, being logged in to Ambassador Cloud causes Telepresence to
+default to `--mechanism=http --http-header=auto --http-path-prefix=/`
+(`--mechanism=http` is redundant, as it is implied by the other
+`--http-xxx` flags). If you hadn't been logged in, it would have
+defaulted to `--mechanism=tcp`. This tells Telepresence to use the
+Ambassador Cloud traffic-agent to do smart "personal" intercepts and
+only intercept a subset of HTTP requests, rather than just
+intercepting the entirety of all TCP connections. This is important
+for working in a shared cluster with teammates, and is important for
+the preview URL functionality below. See `telepresence intercept
+--help` for information on using the `--http-header` and
+`--http-path-xxx` flags to customize which requests are intercepted.
+
+Secondly, being logged in causes Telepresence to default to
+`--preview-url=true`. If you hadn't been logged in, it would have
+defaulted to `--preview-url=false`. This tells Telepresence to take
+advantage of Ambassador Cloud to create a preview URL for this
+intercept, creating a shareable URL that automatically sets the
+appropriate headers to have requests coming from the preview URL be
+intercepted. In order to create the preview URL, it will prompt you
+for four settings about how your cluster's ingress is configured. For
+each, Telepresence tries to intelligently detect the correct value for
+your cluster; if it detects it correctly, you may simply press "enter"
+and accept the default, otherwise you must tell Telepresence the
+correct value.
+
+When creating an intercept with the `http` mechanism, the
+traffic-agent sends a `GET /telepresence-http2-check` request to your
+service and to the process running on your local machine at the port
+specified in your intercept, in order to determine if they support
+HTTP/2. This is required for the intercepts to behave correctly. If
+you do not have a service running locally when the intercept is
+created, the traffic-agent will use the result it got from checking
+the in-cluster service.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+<Alert severity="info">
+
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+
+</Alert>
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option.
When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../environment/) for more details.
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port>
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+Using Deployment <deployment-name>
+intercepted
+   Intercept name: <intercept-name>
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:<TCP-port>
+   Intercepting  : HTTP requests that match all of:
+     header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<intercept-name>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster-server>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <user>@<workstation>
+```
+
+Finally, run `telepresence leave <intercept-name>` to stop the intercept.
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.
+
+| Flag             | Description                                                      | Required |
+|------------------|------------------------------------------------------------------|----------|
+| `--ingress-host` | The IP address for the ingress                                   | yes      |
+| `--ingress-port` | The port for the ingress                                         | yes      |
+| `--ingress-tls`  | Whether TLS should be used                                       | no       |
+| `--ingress-l5`   | Whether a different IP address should be used in request headers | no       |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself.
To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept <intercept-name> --port=<local-port>:<service-port-identifier>
+Using Deployment <deployment-name>
+intercepted
+    Intercept name         : <intercept-name>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above and it will change which
+service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag. So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout-<generated-hash> --port <local-port> --service echo-stable
+Using ReplicaSet echo-rollout-<generated-hash>
+intercepted
+    Intercept name    : echo-rollout-<generated-hash>
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Intercepting multiple ports
+
+It is possible to intercept more than one service and/or service port that are using the same workload. You do this
+by creating more than one intercept that identifies the same workload using the `--workload` flag.
+
+Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
+targeting the same `multi-echo` deployment.
+
+```console
+$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-http
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8080
+    Service Port Identifier: http
+    Volume Mount Point     : /tmp/telfs-893700837
+    Intercepting           : all TCP requests
+    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-grpc
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8443
+    Service Port Identifier: grpc
+    Volume Mount Point     : /tmp/telfs-1277723591
+    Intercepting           : all TCP requests
+    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept <intercept-name> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
+Using Deployment <deployment-name>
+intercepted
+    Intercept name         : <intercept-name>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod=<sidecar-port-0> --to-pod=<sidecar-port-1>`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+<Alert severity="info">
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+</Alert>
+
+<Alert severity="info">
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
+</Alert>
+
+<Alert severity="info">
+Intercepting headless services without a selector is not supported.
+</Alert>
+
+## Sharing intercepts with teammates
+
+Once a good combination of flags for intercepting a service has been found, it's useful to share it with teammates. You
+can do that by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts/history),
+picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name; once that is done,
+the intercept command will be easily accessible for all your teammates. Note that this requires the free enhanced
+client to be installed and that you are logged in (`telepresence login`).
+
+To instantiate an intercept based on a saved intercept, simply run
+`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a
+saved intercept in Ambassador Cloud and will use it if found, otherwise an error will be returned.
+
+Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).
diff --git a/docs/telepresence/2.10/reference/intercepts/manual-agent.md b/docs/telepresence/2.10/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.10/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
+as very strict company security policies preventing it.
+
+<Alert severity="warning">
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable;
+try the Mutating Webhook before proceeding.
+</Alert>
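+
+Before editing any YAML, it can be worth confirming whether a pod already has a Traffic Agent. As a quick check (the pod name below is a placeholder), you can list the pod's container names and look for `traffic-agent`:
+
+```console
+$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
+echo-container traffic-agent
+```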
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file has
+the same name as the service, and no extension:
+
+```console
+$ telepresence genyaml config --workload my-service -o /tmp/my-service
+$ cat /tmp/my-service
+agentImage: docker.io/datawire/tel2:2.7.0
+agentName: my-service
+containers:
+- Mounts: null
+  envPrefix: A_
+  intercepts:
+  - agentPort: 9900
+    containerPort: 8080
+    protocol: TCP
+    serviceName: my-service
+    servicePort: 80
+    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
+  mountPoint: /tel_app_mounts/echo-container
+  name: echo-container
+logLevel: info
+managerHost: traffic-manager.ambassador
+managerPort: 8081
+manual: true
+namespace: default
+workloadKind: Deployment
+workloadName: my-service
+```
+
+Next, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
+$ cat /tmp/my-service-agent.yaml
+args:
+- agent
+env:
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: status.podIP
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+- mountPath: /tel_app_exports
+  name: export-volume
+```
+
+Next, generate the YAML for the init-container:
+
+```console
+$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
+$ cat /tmp/my-service-init.yaml
+args:
+- agent-init
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: tel-agent-init
+resources: {}
+securityContext:
+  capabilities:
+    add:
+    - NET_ADMIN
+volumeMounts:
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+```
+
+Next, generate the YAML for the volumes:
+
+```console
+$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
+$ cat /tmp/my-service-volume.yaml
+- downwardAPI:
+    items:
+    - fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.annotations
+      path: annotations
+  name: traffic-annotations
+- configMap:
+    items:
+    - key: my-service
+      path: config.yaml
+    name: telepresence-agents
+  name: traffic-config
+- emptyDir: {}
+  name: export-volume
+```
+
+<Alert severity="info">
+Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
+</Alert>
+
+### 2. Creating (or updating) the configmap
+
+The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
+modified `Deployment`.
If the `ConfigMap` doesn't exist yet, it can be created using the following command:
+
+```console
+$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
+```
+
+If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.
+
+### 3. Injecting the YAML into the Deployment
+
+You now need to edit the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements
+of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation
+`telepresence.getambassador.io/manually-injected: "true"`. These changes should look like the following:
+
+```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: "my-service"
+   labels:
+     service: my-service
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       service: my-service
+   template:
+     metadata:
+       labels:
+         service: my-service
++      annotations:
++        telepresence.getambassador.io/manually-injected: "true"
+     spec:
+       containers:
+         - name: echo-container
+           image: jmalloc/echo-server
+           ports:
+             - containerPort: 8080
+           resources: {}
++        - args:
++            - agent
++          env:
++            - name: _TEL_AGENT_POD_IP
++              valueFrom:
++                fieldRef:
++                  apiVersion: v1
++                  fieldPath: status.podIP
++          image: docker.io/datawire/tel2:2.7.0-beta.12
++          name: traffic-agent
++          ports:
++            - containerPort: 9900
++              protocol: TCP
++          readinessProbe:
++            exec:
++              command:
++                - /bin/stat
++                - /tmp/agent/ready
++          resources: { }
++          volumeMounts:
++            - mountPath: /tel_pod_info
++              name: traffic-annotations
++            - mountPath: /etc/traffic-agent
++              name: traffic-config
++            - mountPath: /tel_app_exports
++              name: export-volume
++      initContainers:
++        - args:
++            - agent-init
++          image: docker.io/datawire/tel2:2.7.0-beta.12
++          name: tel-agent-init
++          resources: { }
++          securityContext:
++            capabilities:
++              add:
++                - NET_ADMIN
++          volumeMounts:
++            - mountPath: /etc/traffic-agent
++              name: traffic-config
++      volumes:
++        - downwardAPI:
++            items:
++              - fieldRef:
++                  apiVersion: v1
++                  fieldPath: metadata.annotations
++                path: annotations
++          name: traffic-annotations
++        - configMap:
++            items:
++              - key: my-service
++                path: config.yaml
++            name: telepresence-agents
++          name: traffic-config
++        - emptyDir: { }
++          name: export-volume
+```
diff --git a/docs/telepresence/2.10/reference/linkerd.md b/docs/telepresence/2.10/reference/linkerd.md
new file mode 100644
index 000000000..9b903fa76
--- /dev/null
+++ b/docs/telepresence/2.10/reference/linkerd.md
@@ -0,0 +1,75 @@
+---
+Description: "How to get Linkerd meshed services working with Telepresence"
+---
+
+# Using Telepresence with Linkerd
+
+## Introduction
+Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:
+
+```yaml
+spec:
+  template:
+    metadata:
+      annotations:
+        config.linkerd.io/skip-outbound-ports: "8081"
+```
+
+The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.
+
+## Prerequisites
+1. [Telepresence binary](../../install)
+2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
+3. Kubectl
+4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)
+
+## Deploy
+Save and deploy the following YAML.
Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence/2.10/reference/rbac.md b/docs/telepresence/2.10/reference/rbac.md
new file mode 100644
index 000000000..d78133441
--- /dev/null
+++ b/docs/telepresence/2.10/reference/rbac.md
@@ -0,0 +1,236 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User, and for an Administrator, both described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside of the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (i.e. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
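+    ## As a sketch, a typical entry carries the API server address and its CA
+    ## certificate (placeholder values shown; your cloud provider's CLI
+    ## normally generates these for you):
+    # server: https://<cluster-api-endpoint>
+    # certificate-authority-data: <base64-encoded-CA-certificate>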
+contexts:
+- name: my-context
+  context:
+    cluster: my-cluster # Must match the name field in the clusters config
+    user: tp-user
+users:
+- name: tp-user # Must match the name of the Service Account created by the cluster admin
+  user:
+    token: # See note below
+```
+
+The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<suffix>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administrating Telepresence
+
+Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telepresence-admin
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-admin-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "pods/log"]
+    verbs: ["get", "list", "create", "delete", "watch"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "list", "update", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+  - apiGroups: ["apps"]
+    resources: ["deployments", "replicasets", "statefulsets"]
+    verbs: ["get", "list", "update", "create", "delete", "watch"]
+  - apiGroups: ["rbac.authorization.k8s.io"]
+    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "delete"]
+    resourceNames: ["telepresence-agents"]
+  - apiGroups: [""]
+    resources: ["namespaces"]
+    verbs: ["get", "list", "watch", "create"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "create", "list", "delete"]
+  - apiGroups: [""]
+    resources: ["serviceaccounts"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: ["admissionregistration.k8s.io"]
+    resources: ["mutatingwebhookconfigurations"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["list", "get", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-clusterrolebinding
+subjects:
+  - name: telepresence-admin
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-admin-role
+  kind: ClusterRole
+```
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).
+
+By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist.
If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+# For gather-logs command
+- apiGroups: [""]
+  resources: ["pods/log"]
+  verbs: ["get"]
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["list"]
+# Needed in order to maintain a list of workloads
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["namespaces", "services"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+### Traffic Manager connect permission
+In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
+in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
+```yaml
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traffic-manager-connect
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: traffic-manager-connect
+subjects:
+  - name: telepresence-test-developer
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: traffic-manager-connect
+  kind: Role
+```
+
+## Namespace only telepresence user access
+
+The following is an RBAC example for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to one or more specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: tp-namespace # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
diff --git a/docs/telepresence/2.10/reference/restapi.md b/docs/telepresence/2.10/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence/2.10/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
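+
+As a sketch, a local process that has the intercepted environment can combine the two environment variables to build such a query (this assumes an active intercept and that both variables are propagated to the process; the `/api` path is just an example):
+
+```console
+$ curl -s -H "x-telepresence-caller-intercept-id: ${TELEPRESENCE_INTERCEPT_ID}" \
+    "http://localhost:${TELEPRESENCE_API_PORT}/consume-here?path=/api"
+```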
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active
+2. The "/healthz" endpoint must respond with OK
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted in case `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+*   Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence/2.10/reference/routing.md b/docs/telepresence/2.10/reference/routing.md
new file mode 100644
index 000000000..cc88490a0
--- /dev/null
+++ b/docs/telepresence/2.10/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. What they all have in common is that they propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
+[cluster DNS configuration](../cluster-config/#dns).
+
+#### Cluster side DNS lookups
+The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
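+
+As a quick way to see what the resolver has configured, you can inspect it with standard OS tools (these are ordinary system commands, not Telepresence commands):
+
+```console
+# macOS: list the per-domain resolver files created by Telepresence
+$ ls /etc/resolver
+
+# Linux with systemd-resolved: show the DNS domains configured per link
+$ resolvectl domain
+```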
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node` while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster.
As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons why originating connections from an intercepted pod matters. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and responded to with an NXNAME record. Telepresence performs this solution to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.
+
+The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed.
If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2, this changed: the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

+<a name="namespacelimit"></a>
+1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.
+
+<a name="servicesubnet"></a>
+2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.10/reference/tun-device.md b/docs/telepresence/2.10/reference/tun-device.md
new file mode 100644
index 000000000..4410f6f3c
--- /dev/null
+++ b/docs/telepresence/2.10/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle` but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a pod is intercepted and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client nor in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.10/reference/volume.md b/docs/telepresence/2.10/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/2.10/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+ +``` +telepresence intercept <service name> --port <port number> --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept <service name> --port <port number> --mount=true -- /bin/bash +Using Deployment <name of deployment> +intercepted + Intercept name : <full name of intercept> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<local TCP port> + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prefix file paths with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable. diff --git a/docs/telepresence/2.10/reference/vpn.md b/docs/telepresence/2.10/reference/vpn.md new file mode 100644 index 000000000..91213babc --- /dev/null +++ b/docs/telepresence/2.10/reference/vpn.md @@ -0,0 +1,155 @@ +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN +routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +``` + +This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while +telepresence is connected. + +The ideal resolution in this case is to move the pods to a different subnet. This is possible, +for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. +In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you +to reach hosts inside the VPC's `10.0.0.0/19`. + +However, it is not always possible to move the pods to a different subnet. +In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain +hosts from being masked. +This might be particularly important for DNS resolution. In an AWS ClientVPN setup it is +customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case): + +![Modify Client VPN Endpoint](../images/vpn-dns.png) + +If this is the case for your VPN, you should place the DNS server in the never-proxy list for your +cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values <values file>`, add a `client.routing` +entry like so: + +```yaml +client: + routing: + neverProxySubnets: + - 10.0.0.2/32 +``` + +##### Case 2: Cluster masked by VPN + +In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR +that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling +``` + +Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is +connected. + +As with the first case, the ideal resolution is to move the pods away, but this may not always +be possible. In that case, your best bet is to attempt to shrink the mask of the VPN's CIDR +(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity. +One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) +section for more on split-tunneling). + +Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need +to use never-proxy rules to whitelist hosts in the VPN: + +``` +❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
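As a minimal usage sketch: once you have a values file containing the corrective `client.routing` entries shown above, it can be applied with the `telepresence helm` command mentioned earlier (the file name `values.yaml` is an arbitrary example):

```console
$ telepresence helm install --upgrade --values values.yaml
```

After the upgrade completes, reconnect with `telepresence connect` and re-run `telepresence test-vpn` to verify that the conflicting subnets are now handled.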
diff --git a/docs/telepresence/2.10/release-notes/no-ssh.png b/docs/telepresence/2.10/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.10/release-notes/run-tp-in-docker.png b/docs/telepresence/2.10/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.2.png b/docs/telepresence/2.10/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.10/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.10/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.10/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.10/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.10/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.10/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.10/release-notes/tunnel.jpg b/docs/telepresence/2.10/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.10/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.10/releaseNotes.yml b/docs/telepresence/2.10/releaseNotes.yml new file mode 100644 index 000000000..46393ca98 --- /dev/null +++ b/docs/telepresence/2.10/releaseNotes.yml @@ -0,0 +1,2056 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
+ +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.10.5 + date: "2023-02-06" + notes: + - type: change + title: mTLS secrets mount + body: >- + mTLS Secrets will now be mounted into the traffic agent, instead of being read by it from the API. + This is only applicable to users of team mode and the proprietary agent. + docs: reference/cluster-config#tls + - type: bugfix + title: Daemon reconnection fix + body: >- + Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost. + - version: 2.10.4 + date: "2023-01-20" + notes: + - type: bugfix + title: Backward compatibility restored + body: >- + Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older. + - type: bugfix + title: Saved intercepts now work with preview URLs. + body: >- + Preview URLs are now included/excluded correctly when using saved intercepts. + - version: 2.10.3 + date: "2023-01-17" + notes: + - type: bugfix + title: Saved intercepts + body: >- + Fixed an issue that caused saved intercepts to not be completely interpreted by telepresence. + - type: bugfix + title: Traffic manager restart during upgrade to team mode + body: >- + Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode. + docs: https://github.com/telepresenceio/telepresence/pull/2979 + - version: 2.10.2 + date: "2023-01-16" + notes: + - type: bugfix + title: Version consistency in Helm commands + body: >- + Ensure that CLI and user-daemon binaries are the same version when running
telepresence helm install + or telepresence helm upgrade. + docs: https://github.com/telepresenceio/telepresence/pull/2975 + - type: bugfix + title: Saved intercept flag + body: >- + Fixed an issue that prevented the --use-saved-intercept flag from working. + - version: 2.10.1 + date: "2023-01-11" + notes: + - type: bugfix + title: Release Process + body: >- + Fixed a regex in our release process that prevented 2.10.0 promotion. + - version: 2.10.0 + date: "2023-01-11" + notes: + - type: feature + title: Team Mode and Single User Mode + body: >- + The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts. + - type: feature + title: Added `install` and `upgrade` Subcommands to `telepresence helm` + body: >- + The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags. + - type: feature + title: Added Image Pull Secrets to Helm Chart + body: >- + Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`. + - type: change + title: Rename Configmap + body: >- + The configmap `traffic-manager-clients` has been renamed to `traffic-manager`. + - type: change + title: Webhook Namespace Field + body: >- + If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`. + docs: https://github.com/telepresenceio/telepresence/issues/2913 + - type: change + title: Rename Webhook + body: >- + The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster. + - type: change + title: OSS Binaries + body: >- + The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client. + docs: https://github.com/telepresenceio/telepresence/pull/2943 + - type: bugfix + title: Fix Panic Using `--docker-run` + body: >- + Telepresence no longer panics when `--docker-run` is combined with `--name <name>` instead of `--name=<name>`. + docs: https://github.com/telepresenceio/telepresence/issues/2953 + - type: bugfix + title: Stop assuming cluster domain + body: >- + Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc". + docs: https://github.com/telepresenceio/telepresence/pull/2959 + - type: bugfix + title: Uninstall hook timeout + body: >- + A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager. + docs: https://github.com/telepresenceio/telepresence/pull/2937 + - type: bugfix + title: Uninstall hook check + body: >- + The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager. 
+ docs: https://github.com/telepresenceio/telepresence/issues/2954 + - version: 2.9.5 + date: "2022-12-08" + notes: + - type: security + title: Update to golang v1.19.4 + body: >- + Apply security updates by updating to golang v1.19.4. + docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU + - type: bugfix + title: GCE authentication + body: >- + Fixed a regression, introduced in 2.9.3, that prevented the use of GCE authentication unless a config element was also present in the GCE configuration in the kubeconfig. + - version: 2.9.4 + date: "2022-12-02" + notes: + - type: feature + title: Subnet detection strategy + body: >- + The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs. + - type: bugfix + title: Fix `--set` flag for `telepresence helm install` + body: >- + The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag. + - type: bugfix + title: Fix `agent.image` setting propagation + body: >- + Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file. + - type: bugfix + title: Delay file sharing until needed + body: >- + Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected. + - type: bugfix + title: Cleanup on `telepresence quit` + body: >- + The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called. + - type: bugfix + title: Watch `config.yml` without panic + body: >- + The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active. + - type: bugfix + title: Thread safety + body: >- + Fix race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession. + - version: 2.9.3 + date: "2022-11-23" + notes: + - type: feature + title: Helm options for `livenessProbe` and `readinessProbe` + body: >- + The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond. + - type: change + title: Improved network communication + body: >- + The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon. + - type: bugfix + title: Root daemon debug logging + body: >- + Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon. + - type: bugfix + title: Multivalue flag value propagation + body: >- + Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly. + - type: bugfix + title: Root daemon stability + body: >- + The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. + - type: bugfix + title: Base DNS resolver + body: >- + Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied. + - version: 2.9.2 + date: "2022-11-16" + notes: + - type: bugfix + title: Fix panic + body: >- + Fix panic when connecting to an older traffic-manager. + - type: bugfix + title: Fix header flag + body: >- + Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly. 
+ - version: 2.9.1 + date: "2022-11-16" + notes: + - type: bugfix + title: Connect failures due to missing auth provider. + body: >- + The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed. + - version: 2.9.0 + date: "2022-11-15" + notes: + - type: feature + title: New command to view client configuration. + body: >- + A new telepresence config view was added to make it easy to view the current + client configuration. + docs: new-in-2.9#view-the-client-configuration + - type: feature + title: Configure Clients using the Helm chart. + body: >- + The traffic-manager can now configure all clients that connect through the client: map in + the values.yaml file. + docs: reference/cluster-config#client-configuration + - type: feature + title: The Traffic manager version is more visible. + body: >- + The command telepresence version will now include the version of the traffic manager when + the client is connected to a cluster. + - type: feature + title: Command output in YAML format. + body: >- + The global --output flag now accepts both yaml and json. + docs: new-in-2.9#yaml-output + - type: change + title: Deprecated status command flag + body: >- + The telepresence status --json flag is deprecated. Use telepresence status --output=json instead. + - type: bugfix + title: Unqualified service name resolution in docker. + body: >- + Unqualified service names now resolve correctly from the docker container when using telepresence intercept --docker-run. + docs: https://github.com/telepresenceio/telepresence/issues/2870 + - type: bugfix + title: Output no longer mixes plaintext and json. + body: >- + Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon", + or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted + output when using --output=json. + docs: https://github.com/telepresenceio/telepresence/issues/2854 + - type: bugfix + title: No more panic when invalid port names are detected. + body: >- + A `telepresence intercept` of a service with an invalid port name no longer causes a panic. + docs: https://github.com/telepresenceio/telepresence/issues/2880 + - type: bugfix + title: Proper errors for bad output formats. + body: >- + An attempt to use an invalid value for the global --output flag now renders a proper error message. + - type: bugfix + title: Remove lingering DNS config on macOS. + body: >- + Files lingering under /etc/resolver as a result of ungraceful shutdown of the root daemon on macOS are + now removed when a new root daemon starts. + - version: 2.8.5 + date: "2022-11-02" + notes: + - type: security + title: CVE-2022-41716 + body: >- + Updated Golang to 1.19.3 to address CVE-2022-41716. + - version: 2.8.4 + date: "2022-11-02" + notes: + - type: bugfix + title: Release Process + body: >- + This release resulted in changes to our release process. + - version: 2.8.3 + date: "2022-10-27" + notes: + - type: feature + title: Ability to disable global intercepts. + body: >- + Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal. + docs: https://github.com/telepresenceio/telepresence/issues/2140 + - type: feature + title: Configurable mutating webhook port + body: >- + The port used for the mutating webhook can be configured using the Helm chart setting + agentInjector.webhook.port. 
+ docs: install/helm + - type: change + title: Mutating webhook port defaults to 443 + body: >- + The default port for the mutating webhook is now 443. It used to be 8443. + - type: change + title: Agent image configuration mandatory in air-gapped environments. + body: >- + The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is + unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart. + - type: bugfix + title: Can now connect to non-helm installs + body: >- + telepresence connect now works as long as the traffic manager is installed, even if + it wasn't installed via helm install + docs: https://github.com/telepresenceio/telepresence/issues/2824 + - type: bugfix + title: check-vpn crash fixed + body: >- + telepresence check-vpn no longer crashes when the daemons don't start properly. + - version: 2.8.2 + date: "2022-10-15" + notes: + - type: bugfix + title: Reinstate 2.8.0 + body: >- + There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated. + - version: 2.8.1 + date: "2022-10-14" + notes: + - type: bugfix + title: Rollback 2.8.0 + body: >- + Rollback 2.8.0 while we investigate an issue with Ambassador Cloud. + - version: 2.8.0 + date: "2022-10-14" + notes: + - type: feature + title: Improved DNS resolver + body: >- + The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME, + MX, NS, PTR, SRV, and TXT. + docs: reference/dns + - type: feature + title: New `client` structure in Helm chart + body: >- + A new client struct was added to the Helm chart. It contains a connectionTTL that controls + how long the traffic manager will retain a client connection without seeing any sign of life from the client. + docs: reference/cluster-config#Client-Configuration + - type: feature + title: Include and exclude suffixes configurable using the Helm chart. + body: >- + A dns element was added to the client struct in the Helm chart. It contains includeSuffixes and + excludeSuffixes values that control which names the DNS resolver in the client will delegate to + the cluster. + docs: reference/cluster-config#DNS + - type: feature + title: Configurable traffic-manager API port + body: >- + The API port used by the traffic-manager is now configurable using the Helm chart value apiPort. + The default port is 8081. + docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence + - type: feature + title: Envoy server and admin port configuration. + body: >- + A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and + admin port of the Envoy proxy running in the enhanced traffic-agent can be configured. + docs: reference/cluster-config#Envoy-Configuration + - type: change + title: Helm chart `dnsConfig` moved to `client.routing`. + body: >- + The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets + and neverProxySubnets can now be found under routing in the client struct. + docs: reference/cluster-config#Routing + - type: change + title: Helm chart `agentInjector.agentImage` moved to `agent.image`. + body: >- + The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but + retained for backward compatibility. 
+ docs: reference/cluster-config#Image-Configuration + - type: change + title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`. + body: >- + The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old + value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Application-Protocol-Selection + - type: change + title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed. + body: >- + The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed because + they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to + dedicate an IP for its local resolver. + - type: change + title: Quit daemons with `telepresence quit -s` + body: >- + The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with a single option `-s`, which will + quit both the root daemon and the user daemon. + - type: bugfix + title: Environment variable interpolation in pods now works. + body: >- + Environment variable interpolation now works for all definitions that are copied from pod containers + into the injected traffic-agent container. + - type: bugfix + title: Early detection of namespace conflict + body: >- + An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation + is detected early and prohibited instead of resulting in failing DNS lookups later on. + - type: bugfix + title: Annoying log message removed + body: >- + Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason + is normal context cancellation. + - type: bugfix + title: Single name DNS resolution in Docker on Linux host + body: >- + Single-label names now resolve correctly when using Telepresence in Docker on a Linux host. + - type: bugfix + title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`. + body: >- + The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStategy). + - version: 2.7.6 + date: "2022-09-16" + notes: + - type: feature + title: Helm chart resource entries for injected agents + body: >- + The resources for the traffic-agent container and the optional init container can be + specified in the Helm chart using the resources and initResource fields + of the agentInjector.agentImage. + - type: feature + title: Cluster event propagation when injection fails + body: >- + When the traffic-manager fails to inject a traffic-agent, the cause for the failure is + detected by reading the cluster events, and propagated to the user. + - type: feature + title: FTP-client instead of sshfs for remote mounts + body: >- + Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running + an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x + and enabled by setting intercept.useFtp to true in the config.yml. + - type: change + title: Upgrade of winfsp + body: >- + Telepresence on Windows upgraded winfsp from version 1.10 to 1.11. + - type: bugfix + title: Removal of invalid warning messages + body: >- + Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo + and /proc/self/auxv. 
+ - version: 2.7.5 + date: "2022-09-14" + notes: + - type: change + title: Rollback of release 2.7.4 + body: >- + This release is a rollback of the changes in 2.7.4, so essentially the same as 2.7.3 + - version: 2.7.4 + date: "2022-09-14" + notes: + - type: change + body: >- + This release was broken on some platforms. Use 2.7.6 instead. + - version: 2.7.3 + date: "2022-09-07" + notes: + - type: bugfix + title: PTY for CLI commands + body: >- + CLI commands that are executed by the user daemon now use a pseudo TTY. This enables + docker run -it to allocate a TTY and will also give other commands like bash read the + same behavior as when executed directly in a terminal. + docs: https://github.com/telepresenceio/telepresence/issues/2724 + - type: bugfix + title: Traffic Manager useless warning silenced + body: >- + The traffic-manager will no longer log numerous warnings saying Issuing a + systema request without ApiKey or InstallID may result in an error. + - type: bugfix + title: Traffic Manager useless error silenced + body: >- + The traffic-manager will no longer log an error saying Unable to derive subnets + from nodes when the podCIDRStrategy is auto and it chooses to instead derive the + subnets from the pod IPs. + - version: 2.7.2 + date: "2022-08-25" + notes: + - type: feature + title: Autocompletion scripts + body: >- + Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell. + - type: feature + title: Connectivity check timeout + body: >- + The timeout for the initial connectivity check that Telepresence performs + in order to determine if the cluster's subnets are proxied or not can now be configured + in the config.yml file using timeouts.connectivityCheck. The default timeout was + changed from 5 seconds to 500 milliseconds to speed up the actual connect. + docs: reference/config#timeouts + - type: change + title: gather-traces feedback + body: >- + The command telepresence gather-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: upload-traces feedback + body: >- + The command telepresence upload-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: gather-traces tracing + body: >- + The command telepresence gather-traces now traces itself and reports errors with trace gathering. + docs: troubleshooting#distributed-tracing + - type: change + title: CLI log level + body: >- + The cli.log log is now logged at the same level as the connector.log + docs: reference/config#log-levels + - type: bugfix + title: Telepresence --help fixed + body: >- + telepresence --help now works once more even if there's no user daemon running. + docs: https://github.com/telepresenceio/telepresence/issues/2735 + - type: bugfix + title: Stream cancellation when no process intercepts + body: >- + Streams created between the traffic-agent and the workstation are now properly closed + when no interceptor process has been started on the workstation. This fixes a potential problem where + a large number of attempts to connect to a non-existing interceptor would cause stream congestion + and an unresponsive intercept. + - type: bugfix + title: List command excludes the traffic-manager + body: >- + The telepresence list command no longer includes the traffic-manager deployment. 
+ - version: 2.7.1 + date: "2022-08-10" + notes: + - type: change + title: Reinstate telepresence uninstall + body: >- + Reinstate telepresence uninstall with --everything deprecated + - type: change + title: Reduce telepresence helm uninstall + body: >- + telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags. + - type: bugfix + title: Auto-connect for telepresence intercept + body: >- + telepresence intercept will attempt to connect to the traffic manager before creating an intercept. + - version: 2.7.0 + date: "2022-08-07" + notes: + - type: feature + title: Saved Intercepts + body: >- + Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME + docs: reference/intercepts#sharing-intercepts-with-teammates + - type: feature + title: Distributed Tracing + body: >- + The Telepresence components now collect OpenTelemetry traces. + Up to 10MB of trace data are available at any given time for collection from + components. telepresence gather-traces is a new command that will collect + all that data and place it into a gzip file, and telepresence upload-traces is + a new command that will push the gzipped data into an OTLP collector. + docs: troubleshooting#distributed-tracing + - type: feature + title: Helm install + body: >- + A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager. + docs: install/manager + - type: feature + title: Ignore Volume Mounts + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string. + - type: feature + title: telepresence pod-daemon + body: >- + The Docker image now contains a new program in addition to + the existing traffic-manager and traffic-agent: the pod-daemon. The + pod-daemon is a trimmed-down version of the user-daemon that is + designed to run as a sidecar in a Pod, enabling CI systems to create + preview deploys. + - type: feature + title: Prometheus support for traffic manager + body: >- + Added Prometheus support to the traffic manager. + - type: change + title: No install on telepresence connect + body: >- + The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error. + docs: install/manager + - type: change + title: Helm Uninstall + body: >- + The command telepresence uninstall has been moved to telepresence helm uninstall. + docs: install/manager + - type: bugfix + title: readOnlyRootFilesystem mounts work + body: >- + Added an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFilesystem: true` + docs: https://github.com/telepresenceio/telepresence/pull/2666 + - version: 2.6.8 + date: "2022-06-23" + notes: + - type: feature + title: Specify Your DNS + body: >- + The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified. + - type: feature + title: Specify a Fallback DNS + body: >- + Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use. 
+ - type: feature + title: Intercept UDP Ports + body: >- + It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost. + - type: change + title: Additional Helm Values + body: >- + The Helm chart will now add the nodeSelector, affinity and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs. + - type: bugfix + title: Agent Injection Bugfix + body: >- + Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`. + - version: 2.6.7 + date: "2022-06-22" + notes: + - type: bugfix + title: Persistent Sessions + body: >- + The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect. + - type: bugfix + title: DNS Requests + body: >- + Telepresence will no longer forward DNS requests for "wpad" to the cluster. + - type: bugfix + title: Graceful Shutdown + body: >- + The traffic-agent will properly shut down if one of its goroutines errors. + - version: 2.6.6 + date: "2022-06-09" + notes: + - type: bugfix + title: Env Var `TELEPRESENCE_API_PORT` + body: >- + The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly. + - type: bugfix + title: Double Printing `--output json` + body: >- + The --output json global flag no longer outputs multiple objects. + - version: 2.6.5 + date: "2022-06-03" + notes: + - type: feature + title: Helm Option -- `reinvocationPolicy` + body: >- + The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Proxy Certificate + body: >- + The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Agent Injection + body: >- + A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart. + docs: install/helm + - type: change + title: Windows Tunnel Version Upgrade + body: >- + Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1. + - type: change + title: Helm Version Upgrade + body: >- + Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9. + - type: change + title: Kubernetes API Version Upgrade + body: >- + Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1. + - type: feature + title: Flag `--watch` Added to `list` Command + body: >- + Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace. + - type: change + title: Deprecated `images.webhookAgentImage` + body: >- + The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead. + - type: bugfix + title: Default `reinvocationPolicy` Set to Never + body: >- + The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container. + - type: bugfix + title: UDP + body: >- + UDP-based communication with services in the cluster now works as expected. 
+ - type: bugfix + title: Telepresence `--help` + body: >- + The command help will only show Kubernetes flags on the commands that support them. + - type: change + title: Error Count + body: >- + Only the errors from the last session will be considered when counting the number of errors in the log after a command failure. + - version: 2.6.4 + date: "2022-05-23" + notes: + - type: bugfix + title: Upgrade RBAC Permissions + body: >- + The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade. + - version: 2.6.3 + date: "2022-05-20" + notes: + - type: bugfix + title: Relative Mount Paths + body: >- + The --mount intercept flag now handles relative mount points correctly on non-windows platforms. Windows still requires the argument to be a drive letter followed by a colon. + - type: bugfix + title: Traffic Agent Config + body: >- + The traffic-agent's configuration now updates automatically when services are added, updated, or deleted. + - type: bugfix + title: Container Injection for Numeric Ports + body: >- + Telepresence will now always inject an initContainer when the service's targetPort is numeric. + - type: bugfix + title: Matching Services + body: >- + Workloads that have several matching services pointing to the same target port are now handled correctly. + - type: bugfix + title: Unexpected Panic + body: >- + A potential race condition causing a panic when closing a DNS connection is now handled correctly. + - type: bugfix + title: Mount Volume Cleanup + body: >- + A container start would sometimes fail because an old directory remained in a mounted temp volume. + - version: 2.6.2 + date: "2022-05-17" + notes: + - type: bugfix + title: Argo Injection + body: >- + Workloads controlled by workloads like Argo Rollout are injected correctly. + - type: bugfix + title: Agent Port Mapping + body: >- + Multiple services appointing the same container port no longer result in duplicated ports in an injected pod. + - type: bugfix + title: GRPC Max Message Size + body: >- + The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads. + - version: 2.6.1 + date: "2022-05-16" + notes: + - type: bugfix + title: KUBECONFIG environment variable + body: >- + Telepresence will now handle multiple path entries in the KUBECONFIG environment correctly. + - type: bugfix + title: Don't Panic + body: >- + Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0 + - version: 2.6.0 + date: "2022-05-13" + notes: + - type: feature + title: Intercept multiple containers in a pod, and multiple ports per container + body: >- + Telepresence can now intercept multiple services and/or service-ports that connect to the same pod. + docs: new-in-2.6#intercept-multiple-containers-and-ports + - type: feature + title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook + body: >- + The client will no longer modify deployments, replicasets, or statefulsets in order to + inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result, + the client now needs fewer permissions in the cluster. 
+ docs: install/upgrade#important-note-about-upgrading-to-2.6.0 + - type: change + title: Automatic upgrade of Traffic Agents + body: >- + When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically. + The mutating webhook will then ensure that their pods will receive an updated Traffic Agent. + docs: new-in-2.6#no-more-workload-modifications + - type: change + title: No default image in the Helm chart + body: >- + The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the + traffic-manager will ask Ambassador Cloud for the preferred image. + docs: new-in-2.6#smarter-agent + - type: change + title: Upgrade to Helm version 3.8.1 + body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager. + - type: bugfix + title: Remote mounts will now function correctly with custom securityContext + body: >- + The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed. + - type: bugfix + title: Improved presentation of flags in CLI help + body: The help for commands that accept Kubernetes flags will now display those flags in a separate group. + - type: bugfix + title: Better termination of process parented by intercept + body: >- + Occasionally an intercept will spawn a command using -- on the command line, often in another console. + When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active, + Telepresence will now terminate that command because it's considered to be parented by the intercept that is being removed. + - version: 2.5.8 + date: "2022-04-27" + notes: + - type: bugfix + title: Folder creation on `telepresence login` + body: >- + Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands. + - version: 2.5.7 + date: "2022-04-25" + notes: + - type: change + title: RBAC requirements + body: >- + A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used. + - type: bugfix + title: Windows DNS + body: >- + The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times. + - type: bugfix + title: Session TTL and Reconnect + body: >- + A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect. + - version: 2.5.6 + date: "2022-04-18" + notes: + - type: change + title: Fewer Watchers + body: >- + The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect. + - type: bugfix + title: More Efficient `gather-logs` + body: >- + The gather-logs command will no longer send any logs through gRPC. + - version: 2.5.5 + date: "2022-04-08" + notes: + - type: change + title: Traffic Manager Permissions + body: >- + The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions. + - type: bugfix + title: Linux DNS Cache + body: >- + The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes. + - type: bugfix + title: Automatic Connect Sync + body: >- + The telepresence list command will produce a correct listing even when not preceded by a telepresence connect. 
+ - type: bugfix
+ title: Disconnect Reconnect Stability
+ body: >-
+ The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+ - type: bugfix
+ title: Limit Watched Namespaces
+ body: >-
+ The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - type: bugfix
+ title: Limit Namespaces used in `gather-logs`
+ body: >-
+ The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - version: 2.5.4
+ date: "2022-03-29"
+ notes:
+ - type: bugfix
+ title: Linux DNS Concurrency
+ body: >-
+ The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out
+ - type: bugfix
+ title: Non-Functional Flag
+ body: >-
+ The ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag
+ - type: bugfix
+ title: Automatically Remove Failed Intercepts
+ body: >-
+ Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix
+ title: Agent UID
+ body: >-
+ The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+ - type: bugfix
+ title: Gather-Logs Output Filepath
+ body: >-
+ Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+ - type: change
+ title: Remove Unnecessary Error Advice
+ body: >-
+ Advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+ - type: bugfix
+ title: Garbage Collection
+ body: >-
+ Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+ - type: bugfix
+ title: Limit Gathered Logs
+ body: >-
+ The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize
+ - type: change
+ title: In-Cluster Checks
+ body: >-
+ The TUN device will no longer route pod or service subnets if it is running in a machine that's already connected to the cluster
+ - type: change
+ title: Expanded Status Command
+ body: >-
+ The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON
+ - type: change
+ title: List Command Shows All Intercepts
+ body: >-
+ The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces
+ - version: 2.5.3
+ date: "2022-02-25"
+ notes:
+ - type: bugfix
+ title: TCP Connectivity
+ body: >-
+ Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address
+ - type: feature
+ title: Linux Binaries
+ body: >-
+ Client-side binaries for the arm64 architecture are now available for Linux
+ - version: 2.5.2
+ date: "2022-02-23"
+ notes:
+ - type: bugfix
+ title: DNS server bugfix
+ body: >-
+ Fixed a bug where Telepresence would use the last server in resolv.conf
+ - version: 2.5.1
+ date: "2022-02-19"
+ notes:
+ - type: bugfix
+ title: Fix GKE auth issue
+ body: >-
+ Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp"
+ - version: 2.5.0
+ date: "2022-02-18"
+ notes:
+ - type: feature
+ title: Intercept specific endpoints
+ body: >-
+ The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+ addition to the --http-match flag to filter personal intercepts by the request URL path
+ docs: concepts/intercepts#intercepting-a-specific-endpoint
+ - type: feature
+ title: Intercept metadata
+ body: >-
+ The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST
+ API endpoint /intercept-info
+ docs: reference/restapi#intercept-info
+ - type: change
+ title: Client RBAC watch
+ body: >-
+ The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+ ClusterRole
+ docs: reference/rbac
+ - type: change
+ title: Dropped backward compatibility with versions <=2.4.4
+ body: >-
+ Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+ functionality was removed.
+ - type: change
+ title: No global networking flags
+ body: >-
+ The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+ command. The subcommands that support networking flags are connect, current-cluster-id,
+ and genyaml.
+ - type: bugfix
+ title: Output of status command
+ body: >-
+ The also-proxy and never-proxy subnets are now displayed correctly when using the
+ telepresence status command.
+ - type: bugfix
+ title: SETENV sudo privilege no longer needed
+ body: >-
+ Telepresence no longer requires SETENV privileges when starting the root daemon.
+ - type: bugfix
+ title: Network device names containing dash
+ body: >-
+ Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+ - type: bugfix
+ title: Linux uses cluster.local as domain instead of search
+ body: >-
+ The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+ systemd-resolved.
Instead, it is added as a domain so that names ending with it are routed
+ to the DNS server.
+ - version: 2.4.11
+ date: "2022-02-10"
+ notes:
+ - type: change
+ title: Add additional logging to troubleshoot intermittent issues with intercepts
+ body: >-
+ We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+ with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+ date: "2022-01-13"
+ notes:
+ - type: feature
+ title: Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts can now be configured using
+ the intercept.appProtocolStrategy in the config.yml file.
+ docs: reference/config/#intercept
+ image: telepresence-2.4.10-intercept-config.png
+ - type: feature
+ title: Helm value for the Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts in agents injected by the
+ mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: New --http-plaintext option
+ body: >-
+ The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+ communicating with the workstation process.
+ docs: reference/intercepts/#tls
+ - type: feature
+ title: Configure the default intercept port
+ body: >-
+ The port used by default in the telepresence intercept command (8080) can now be changed by setting
+ the intercept.defaultPort in the config.yml file.
+ docs: reference/config/#intercept
+ - type: change
+ title: Telepresence CI now uses GitHub Actions
+ body: >-
+ Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+ now easier for contributors to run tests on PRs since maintainers can add an
+ "ok to test" label to PRs (including from forks) to run integration tests.
+ docs: https://github.com/telepresenceio/telepresence/actions
+ image: telepresence-2.4.10-actions.png
+ - type: bugfix
+ title: Check conditions before asking questions
+ body: >-
+ The user will not be asked to log in or add ingress information when creating an intercept until a check has been
+ made that the intercept is possible.
+ docs: reference/intercepts/
+ - type: bugfix
+ title: Fix invalid log statement
+ body: >-
+ Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+ - type: bugfix
+ title: Log errors from sshfs/sftp
+ body: >-
+ Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+ is now properly logged as errors.
+ - type: bugfix
+ title: Don't use Windows path separators in workload pod template
+ body: >-
+ The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+ traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+ date: "2021-12-09"
+ notes:
+ - type: bugfix
+ title: Helm upgrade nil pointer error
+ body: >-
+ A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+ telepresenceAPI value.
+ docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+ date: "2021-12-03"
+ notes:
+ - type: feature
+ title: VPN diagnostics tool
+ body: >-
+ There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+ See the VPN docs for more information on how to use it.
+ docs: reference/vpn
+ image: telepresence-2.4.8-vpn.png
+
+ - type: feature
+ title: RESTful API service
+ body: >-
+ A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+ help determine whether messages with a given set of headers should be consumed
+ from a message queue where the intercept headers are added to the messages.
+ docs: reference/restapi
+ image: telepresence-2.4.8-health-check.png
+
+ - type: change
+ title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+ body: >-
+ You could previously configure this value, but there was no reason to change it, so the value
+ has been removed.
+
+ - type: bugfix
+ title: Tunneled network connections behave more like ordinary TCP connections.
+ body: >-
+ When using Telepresence with an external cloud provider for extensions, those tunneled
+ connections now behave more like TCP connections, especially when it comes to timeouts.
+ We've also added increased testing around these types of connections.
+ - version: 2.4.7
+ date: "2021-11-24"
+ notes:
+ - type: feature
+ title: Injector service-name annotation
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+ This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+ docs: reference/cluster-config#service-name-annotation
+ - type: feature
+ title: Skip the Ingress Dialogue
+ body: >-
+ You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags.
+ docs: reference/intercepts#skipping-the-ingress-dialogue
+ - type: feature
+ title: Never proxy subnets
+ body: >-
+ The kubeconfig extensions now support a never-proxy argument,
+ analogous to also-proxy, that defines a set of subnets that
+ will never be proxied via telepresence.
+ docs: reference/config#neverproxy
+ - type: change
+ title: Daemon versions check
+ body: >-
+ Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+ - type: change
+ title: No explicit DNS flushes
+ body: >-
+ Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches
+ docs: reference/routing#dns-caching
+ - type: bugfix
+ title: Legacy flags now work with global flags
+ body: >-
+ Legacy flags such as --swap-deployment can now be used together with global flags.
+ - type: bugfix
+ title: Outbound connection closing
+ body: >-
+ Outbound connections are now properly closed when the peer closes.
+ - type: bugfix
+ title: Prevent DNS recursion
+ body: >-
+ The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client).
+ docs: reference/routing#dns-recursion
+ - type: bugfix
+ title: Prevent network recursion
+ body: >-
+ The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+ cluster runs in a Docker container on the client).
+ docs: reference/routing#connect-recursion
+ - type: bugfix
+ title: Traffic Manager deadlock fix
+ body: >-
+ The Traffic Manager no longer risks entering a deadlock when a new Traffic Agent arrives.
+ - type: bugfix
+ title: webhookRegistry config propagation
+ body: >-
+ The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+ docs: reference/config#images
+ - type: bugfix
+ title: Login refreshes expired tokens
+ body: >-
+ When a user's token has expired, telepresence login
+ will prompt the user to log in again to get a new token. Previously,
+ the user had to telepresence quit and telepresence logout
+ to get a new token.
+ docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+ date: "2021-11-02"
+ notes:
+ - type: feature
+ title: Manually injecting Traffic Agent
+ body: >-
+ Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+ Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+ docs: reference/intercepts/manual-agent
+
+ - type: feature
+ title: Telepresence CLI released for Apple silicon
+ body: >-
+ Telepresence is now built and released for Apple silicon.
+ docs: install/?os=macos
+
+ - type: change
+ title: Telepresence help text now links to telepresence.io
+ body: >-
+ We now include a link to our documentation when you run telepresence --help. This will make it easier
+ for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+ image: telepresence-2.4.6-help-text.png
+
+ - type: bugfix
+ title: Fixed bug when API server is inside CIDR range of pods/services
+ body: >-
+ If the API server for your Kubernetes cluster had an IP that fell within the
+ subnet generated from pods/services, Telepresence would proxy traffic
+ to the API server, which would result in hanging or a failed connection. We now ensure
+ that the API server is explicitly not proxied.
+ - version: 2.4.5
+ date: "2021-10-15"
+ notes:
+ - type: feature
+ title: Get pod yaml with gather-logs command
+ body: >-
+ Adding the flag --get-pod-yaml to your request will get the
+ pod YAML manifest for all Kubernetes components you are getting logs for
+ (traffic-manager and/or pods containing a
+ traffic-agent container). This flag is set to false
+ by default.
+ docs: reference/client
+ image: telepresence-2.4.5-pod-yaml.png
+
+ - type: feature
+ title: Anonymize pod name + namespace when using gather-logs command
+ body: >-
+ Adding the flag --anonymize to your command will
+ anonymize your pod names + namespaces in the output file. We replace the
+ sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+ relationships between the objects without exposing the real names of your
+ objects. This flag is set to false by default.
+ docs: reference/client
+ image: telepresence-2.4.5-logs-anonymize.png
+
+ - type: feature
+ title: Added context and defaults to ingress questions when creating a preview URL
+ body: >-
+ Previously, we referred to OSI model layers when asking these questions, but this
+ terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+ docs: howtos/preview-urls
+ image: telepresence-2.4.5-preview-url-questions.png
+
+ - type: feature
+ title: Support for intercepting headless services
+ body: >-
+ Intercepting headless services is now officially supported. You can request a
+ headless service on whatever port it exposes and get a response from the
+ intercept.
This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence Kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent can fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a GitHub issue. For more
+ information on usage, run telepresence gather-logs --help.
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets.
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It will now error correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the environment cluster is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ it now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were incorrectly not written to log files. This has
+ been fixed so logs for Windows should be at parity with the ones on macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many Kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls would
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that won't occur.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager, and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ All components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) now use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the traffic-manager. Additionally, we
+ now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+ docs: install/helm
+
+ - type: change
+ title: Traffic Manager installed via helm
+ body: >-
+ The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+ This change is transparent to the user.
+ A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+ docs: reference/config#timeouts
+
+ - type: change
+ title: traffic-manager gets cluster ID itself instead of via environment variable
+ body: >-
+ The traffic-manager used to get the cluster ID as an environment variable when running
+ telepresence connect or via adding the value in the helm chart. This was
+ clunky, so now the traffic-manager gets the value itself as long as it has permissions
+ to "get" and "list" namespaces (this has been updated in the helm chart).
+ docs: install/helm
+
+ - type: bugfix
+ title: Telepresence now mounts all directories from /var/run/secrets
+ body: >-
+ In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+ We now mount *all* directories in /var/run/secrets, which, for example, includes
+ directories like eks.amazonaws.com used for IRSA tokens.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Max gRPC receive size correctly propagates to all grpc servers
+ body: >-
+ This fixes a bug where the max gRPC receive size was only propagated to some of the
+ grpc servers, causing failures when the message size was over the default.
+ docs: reference/config/#grpc
+
+ - type: bugfix
+ title: Updated our Homebrew packaging to run manually
+ body: >-
+ We made some updates to our script that packages Telepresence for Homebrew so that it
+ can be run manually. This will enable maintainers of Telepresence to run the script manually
+ should we ever need to roll back a release and have latest point to an older version.
+ docs: install/
+
+ - type: bugfix
+ title: Telepresence uses namespace from kubeconfig context on each call
+ body: >-
+ In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+ for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+ when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+ Telepresence will now do that and use the namespace designated by the context on each call.
+
+ - type: bugfix
+ title: Idle outbound TCP connections timeout increased to 7200 seconds
+ body: >-
+ Some users were noticing that their intercepts would start failing after 60 seconds.
+ This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+ now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+ - type: bugfix
+ title: Telepresence will automatically remove a socket upon ungraceful termination
+ body: >-
+ When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+ that the process has terminated ungracefully" and imply that they should remove the socket. We've
+ now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+ - type: bugfix
+ title: Fixed user daemon deadlock
+ body: >-
+ Remedied a situation where the user daemon could hang when a user was logged in.
+
+ - type: bugfix
+ title: Fixed agentImage config setting
+ body: >-
+ The config setting images.agentImage is no longer required to contain the repository, and it
+ will use the value at images.repository.
+ docs: reference/config/#images
+
+ - version: 2.4.0
+ date: "2021-08-04"
+ notes:
+ - type: feature
+ title: Windows Client Developer Preview
+ body: >-
+ There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+ All the same features supported by the macOS and Linux client are available on Windows.
+ image: telepresence-2.4.0-windows.png
+ docs: install
+
+ - type: feature
+ title: CLI raises helpful messages from Ambassador Cloud
+ body: >-
+ Telepresence can now receive messages from Ambassador Cloud and raise
+ them to the user when they perform certain commands. This enables us
+ to send you messages that may enhance your Telepresence experience when
+ using certain commands. Frequency of messages can be configured in your
+ config.yml.
+ image: telepresence-2.4.0-cloud-messages.png
+ docs: reference/config#cloud
+
+ - type: bugfix
+ title: Improved stability of systemd-resolved-based DNS
+ body: >-
+ When initializing the systemd-resolved-based DNS, the routing domain
+ is set to improve stability in non-standard configurations. This also enables the
+ overriding resolver to do a proper takeover once the DNS service ends.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Fixed an edge case when intercepting a container with multiple ports
+ body: >-
+ When specifying a port of a container to intercept, if there was a container in the
+ pod without ports, it was automatically selected. This has been fixed so we'll only
+ choose the container with "no ports" if there's no container that explicitly matches
+ the port used in your intercept.
+ docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+ - type: bugfix
+ title: $(NAME) references in agent's environments are now interpolated correctly.
+ body: >-
+ If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+ would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+ - type: bugfix
+ title: Telepresence no longer prints INFO message when there is no config.yml
+ body: >-
+ Fixed a regression that printed an INFO message to the terminal when there wasn't a
+ config.yml present. The config is optional, so this message has been
+ removed.
+ docs: reference/config
+
+ - type: bugfix
+ title: Telepresence no longer panics when using --http-match
+ body: >-
+ Fixed a bug where Telepresence would panic if the value passed to --http-match
+ didn't contain an equal sign. The correct syntax is shown in the --help
+ string and looks like --http-match=HTTP2_HEADER=REGEX
+ docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+ - type: bugfix
+ title: Improved subnet updates
+ body: >-
+ The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+ the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+ client and the traffic-manager. This has been fixed so we only send updates when the subnets
+ themselves actually change.
+ docs: reference/routing/#subnets
+
+ - version: 2.3.7
+ date: "2021-07-23"
+ notes:
+ - type: feature
+ title: Also-proxy in telepresence status
+ body: >-
+ An also-proxy entry in the Kubernetes cluster config will
+ show up in the output of the telepresence status command.
+ docs: reference/config
+
+ - type: feature
+ title: Non-interactive telepresence login
+ body: >-
+ telepresence login now has an
+ --apikey=KEY flag that allows for
+ non-interactive logins. This is useful for headless
+ environments where launching a web-browser is impossible,
+ such as cloud shells, Docker containers, or CI.
+ image: telepresence-2.3.7-newkey.png
+ docs: reference/client/login/
+
+ - type: bugfix
+ title: Mutating webhook injector correctly hides named ports for probes.
+ body: >-
+ The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: telepresence current-cluster-id crash fixed
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+ to crash.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Better UX around intercepts with no local process running
+ body: >-
+ Requests would hang indefinitely when initiating an intercept before you
+ had a local process running. This has been fixed and will result in an
+ Empty reply from server until you start a local process.
+ docs: reference/intercepts
+
+ - type: bugfix
+ title: API keys no longer show as "no description"
+ body: >-
+ New API keys generated internally for communication with
+ Ambassador Cloud no longer show up as "no description" in
+ the Ambassador Cloud web UI. Existing API keys generated by
+ older versions of Telepresence will still show up this way.
+ image: telepresence-2.3.7-keydesc.png
+
+ - type: bugfix
+ title: Fix corruption of user-info.json
+ body: >-
+ Fixed a race condition where logging in and logging out
+ rapidly could cause memory corruption or corruption of the
+ user-info.json cache file used when
+ authenticating with Ambassador Cloud.
+
+ - type: bugfix
+ title: Improved DNS resolver for systemd-resolved
+ body:
+ Telepresence's systemd-resolved-based DNS resolver is now more
+ stable, and if it fails to initialize, the overriding resolver
+ will no longer cause general DNS lookup failures when telepresence defaults to
+ using it.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Faster telepresence list command
+ body:
+ The performance of telepresence list has been increased
+ significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client
+
+ - version: 2.3.6
+ date: "2021-07-20"
+ notes:
+ - type: bugfix
+ title: Fix preview URLs
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused preview
+ URLs to not work.
+
+ - type: bugfix
+ title: Fix subnet discovery
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the Traffic
+ Manager's RoleBinding did not correctly reference
+ the traffic-manager Role, which prevented
+ subnet discovery from working correctly.
+ docs: reference/rbac/
+
+ - type: bugfix
+ title: Fix root-user configuration loading
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the root daemon
+ did not correctly read the configuration file, ignoring the
+ user's configured log levels and timeouts.
+ docs: reference/config/
+
+ - type: bugfix
+ title: Fix a user daemon crash
+ body: >-
+ Fixed an issue that could cause the user daemon to crash
+ during shutdown, because it unconditionally
+ attempted to close a channel that might
+ already be closed.
+
+ - version: 2.3.5
+ date: "2021-07-15"
+ notes:
+ - type: feature
+ title: traffic-manager in multiple namespaces
+ body: >-
+ We now support installing multiple traffic managers in the same cluster.
+ This will allow operators to install deployments of telepresence that are
+ limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+ docs: install/helm
+ - type: feature
+ title: No more dependence on kubectl
+ body: >-
+ Telepresence no longer depends on having an external
+ kubectl binary, which might not be present for
+ OpenShift users (who have oc instead of
+ kubectl).
+ - type: feature
+ title: Agent image now configurable
+ body: >-
+ We now support configuring which agent image
+ registry to use in the
+ config. This enables users whose laptop is an air-gapped environment to
+ create personal intercepts without requiring a login. It also makes it easier
+ for those who are developing on Telepresence to specify which agent image should
+ be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+ used.
+ image: ./telepresence-2.3.5-agent-config.png
+ docs: reference/config/#images
+ - type: feature
+ title: Max gRPC receive size now configurable
+ body: >-
+ The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+ image: ./telepresence-2.3.5-grpc-max-receive-size.png
+ docs: reference/config/#grpc
+ - type: feature
+ title: CLI can be used in air-gapped environments
+ body: >-
+ While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+ we've added an option users can add to their config.yml to ensure the CLI acts like it
+ is in an air-gapped environment. Air-gapped environments require a manually installed
+ license.
+ docs: reference/cluster-config/#air-gapped-cluster
+ image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+ date: "2021-07-09"
+ notes:
+ - type: bugfix
+ title: Improved IP log statements
+ body: >-
+ Some log statements were printing incorrect characters where they should have printed IP addresses.
+ This has been resolved to include more accurate and useful logging.
+ docs: reference/config/#log-levels
+ image: ./telepresence-2.3.4-ip-error.png
+ - type: bugfix
+ title: Improved messaging when multiple services match a workload
+ body: >-
+ If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which
+ service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png
+ docs: reference/intercepts
+ - type: bugfix
+ title: Traffic-manager creates services in its own namespace to determine subnet
+ body: >-
+ Telepresence will now determine the service subnet by creating a dummy-service in its own
+ namespace, instead of the default namespace, which was causing RBAC permissions issues in
+ some clusters.
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Telepresence connect respects pre-existing clusterrole
+ body: >-
+ When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+ cluster, Telepresence will no longer try to update the clusterrole.
+ docs: reference/rbac
+ - type: bugfix
+ title: Helm Chart fixed for clientRbac.namespaced
+ body: >-
+ The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+ docs: install/helm
+ - version: 2.3.3
+ date: "2021-07-07"
+ notes:
+ - type: feature
+ title: Traffic Manager Helm Chart
+ body: >-
+ Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the
+ server-side components of Telepresence separately from the CLI (which
+ in turn allows for better separation of permissions).
+ image: ./telepresence-2.3.3-helm.png
+ docs: install/helm/
+ - type: feature
+ title: Traffic-manager in custom namespace
+ body: >-
+ As the traffic-manager can now be installed in any
+ namespace via Helm, Telepresence can now be configured to look for the
+ Traffic Manager in a namespace other than ambassador.
+ This can be configured on a per-cluster basis.
+ image: ./telepresence-2.3.3-namespace-config.png
+ docs: reference/config
+ - type: feature
+ title: Intercept --to-pod
+ body: >-
+ telepresence intercept now supports a
+ --to-pod flag that can be used to port-forward sidecars'
+ ports from an intercepted pod.
+ image: ./telepresence-2.3.3-to-pod.png
+ docs: reference/intercepts
+ - type: change
+ title: Change in migration from edgectl
+ body: >-
+ Telepresence no longer automatically shuts down the old
+ api_version=1 edgectl daemon. If migrating
+ from such an old version of edgectl, you must now manually
+ shut down the edgectl daemon before running Telepresence.
+ This was already the case when migrating from the newer
+ api_version=2 edgectl.
+ - type: bugfix
+ title: Fixed error during shutdown
+ body: >-
+ The root daemon no longer terminates when the user daemon disconnects
+ from its gRPC streams, and instead waits to be terminated by the CLI.
+ The previous behavior could cause problems with things not being cleaned up correctly.
+ - type: bugfix
+ title: Intercepts will survive deletion of intercepted pod
+ body: >-
+ An intercept will survive deletion of the intercepted pod provided
+ that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+ date: "2021-06-18"
+ notes:
+ # Headliners
+ - type: feature
+ title: Service Port Annotation
+ body: >-
+ The mutator webhook for injecting traffic-agents now
+ recognizes a
+ telepresence.getambassador.io/inject-service-port
+ annotation to specify which port to intercept, bringing the
+ functionality of the --port flag to users who
+ use the mutator webhook in order to control Telepresence via
+ GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png
+ docs: reference/cluster-config#service-port-annotation
+ - type: feature
+ title: Outbound Connections
+ body: >-
+ Outbound connections are now routed through the intercepted
+ Pods, which means that the connections originate from that
+ Pod from the cluster's perspective. This allows service
+ meshes to correctly identify the traffic.
+ docs: reference/routing/#outbound
+ - type: change
+ title: Inbound Connections
+ body: >-
+ Inbound connections from an intercepted agent are now
+ tunneled to the manager over the existing gRPC connection,
+ instead of establishing a new connection to the manager for
+ each inbound connection. This avoids interference from
+ certain service mesh configurations.
+ docs: reference/routing/#inbound
+
+ # RBAC changes
+ - type: change
+ title: Traffic Manager needs new RBAC permissions
+ body: >-
+ The Traffic Manager requires RBAC
+ permissions to list Nodes and Pods, and to create a dummy
+ Service in the manager's namespace.
+ docs: reference/routing/#subnets
+ - type: change
+ title: Reduced developer RBAC requirements
+ body: >-
+ The on-laptop client no longer requires RBAC permissions to list the Nodes
+ in the cluster or to create Services, as that functionality
+ has been moved to the Traffic Manager.
+
+ # Bugfixes
+ - type: bugfix
+ title: Able to detect subnets
+ body: >-
+ Telepresence will now detect the Pod CIDR ranges even if
+ they are not listed in the Nodes.
+ image: ./telepresence-2.3.2-subnets.png
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Dynamic IP ranges
+ body: >-
+ The list of cluster subnets that the virtual network
+ interface will route is now configured dynamically and will
+ follow changes in the cluster.
+ - type: bugfix
+ title: No duplicate subnets
+ body: >-
+ Subnets fully covered by other subnets are now pruned
+ internally and thus never superfluously added to the
+ laptop's routing table.
+ docs: reference/routing/#subnets
+ - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+ title: Change in default timeout
+ body: >-
+ The trafficManagerAPI timeout default has
+ changed from 5 seconds to 15 seconds, in order to facilitate
+ the extended time it takes for the traffic-manager to do its
+ initial discovery of cluster info as a result of the above
+ bugfixes.
+ - type: bugfix
+ title: Removal of DNS config files on macOS
+ body: >-
+ On macOS, files generated under
+ /etc/resolver/ as the result of using
+ include-suffixes in the cluster config are now
+ properly removed on quit.
+ docs: reference/routing/#macos-resolver
+
+ - type: bugfix
+ title: Large file transfers
+ body: >-
+ Telepresence no longer erroneously terminates connections
+ early when sending a large HTTP response from an intercepted
+ service.
+ - type: bugfix
+ title: Race condition in shutdown
+ body: >-
+ When shutting down the user-daemon or root-daemon on the
+ laptop, telepresence quit and related commands
+ no longer return early before everything is fully shut down.
+ Now it can be counted on that, by the time the command has
+ returned, all of the side-effects on the laptop have
+ been cleaned up.
+ - version: 2.3.1
+ date: "2021-06-14"
+ notes:
+ - title: DNS Resolver Configuration
+ body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included."
+ image: ./telepresence-2.3.1-dns.png
+ docs: reference/config
+ type: feature
+ - title: AlsoProxy Configuration
+ body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+ image: ./telepresence-2.3.1-alsoProxy.png
+ docs: reference/config
+ type: feature
+ - title: Mutating Webhook for Injecting Traffic Agents
+ body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+ image: ./telepresence-2.3.1-inject.png
+ docs: reference/rbac
+ type: feature
+ - title: Traffic Manager Connect Timeout
+ body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+ image: ./telepresence-2.3.1-trafficmanagerconnect.png
+ docs: reference/config
+ type: change
+ - title: Fix for large file transfers
+ body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+ image: ./telepresence-2.3.1-large-file-transfer.png
+ docs: reference/tun-device
+ type: bugfix
+ - title: Brew Formula Changed
+ body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+ image: ./telepresence-2.3.1-brew.png
+ docs: install/
+ type: change
+ - version: 2.3.0
+ date: "2021-06-01"
+ notes:
+ - title: Brew install Telepresence
+ body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+ image: ./telepresence-2.3.0-homebrew.png
+ docs: install/
+ type: feature
+ - title: TCP and UDP routing via Virtual Network Interface
+ body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+ image: ./tunnel.jpg
+ docs: reference/tun-device
+ type: feature
+ - title: SSH is no longer used
+ body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+ image: ./no-ssh.png
+ docs: reference/tun-device/#no-ssh-required
+ type: change
+ - title: Running in a Docker container
+ body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+ image: ./run-tp-in-docker.png
+ docs: reference/inside-container
+ type: feature
+ - title: Configurable Log Levels
+ body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png
+ docs: reference/config/#log-levels
+ type: feature
+ - version: 2.2.2
+ date: "2021-05-17"
+ notes:
+ - title: Legacy Telepresence subcommands
+ body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+ image: ./telepresence-2.2.png
+ docs: install/migrate-from-legacy/
+ type: feature
diff --git a/docs/telepresence/2.10/troubleshooting/index.md b/docs/telepresence/2.10/troubleshooting/index.md
new file mode 100644
index 000000000..364f70b7d
--- /dev/null
+++ b/docs/telepresence/2.10/troubleshooting/index.md
@@ -0,0 +1,227 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted VM](../howtos/cluster-in-vm) page for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
If an organization isn't listed during login
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone not an owner will have a **Request** button, which sends an email
+to the owner. If an access request has been denied in the past, the
+user will not see the **Request** button; they will have to reach out
+to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission.
+7. Reboot your computer.
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+To facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the Traffic Manager and traffic agents will open a gRPC server on the given port, from which Telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of Telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into Telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g. Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by Telepresence, they are stored in memory. Hence, to avoid running Telepresence components out of memory, only the last 10MB of trace data are available for export.
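+
+For reference, here is a minimal sketch of an OpenTelemetry collector configuration that satisfies those requirements: an OTLP receiver listening for gRPC on the conventional OTLP port 4317, with the `logging` exporter standing in for whatever backend you actually use. This is illustrative only, not a recommended production setup:
+
+```yaml
+receivers:
+  otlp:
+    protocols:
+      grpc:
+        endpoint: 0.0.0.0:4317
+
+exporters:
+  # Placeholder exporter; swap in your real tracing backend here.
+  logging: {}
+
+service:
+  pipelines:
+    traces:
+      receivers: [otlp]
+      exporters: [logging]
+```
+
+With a collector like this running locally, `telepresence upload-traces traces.gz localhost:4317` would be the matching upload command.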
+
+## No sidecar injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on resolving this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., `containerPort: http` becomes `containerPort: tm-http`).
+2. Let the container port of the injected traffic-agent use the original name (i.e., `containerPort: http`).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with the traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help, because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with the message `too many files open` in the logs, then check whether `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`.
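+To make the increase survive reboots, one common approach (assuming a distribution that reads drop-in files from `/etc/sysctl.d/`; the file name is arbitrary) is:
+
+```console
+$ echo "fs.inotify.max_user_instances = 512" | sudo tee /etc/sysctl.d/99-inotify.conf
+$ sudo sysctl --system
+```
+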
For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
diff --git a/docs/telepresence/2.10/versions.yml b/docs/telepresence/2.10/versions.yml
new file mode 100644
index 000000000..d3781e4d0
--- /dev/null
+++ b/docs/telepresence/2.10/versions.yml
@@ -0,0 +1,5 @@
+version: "2.10.1"
+dlVersion: "latest"
+docsVersion: "2.10"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.11 b/docs/telepresence/2.11
deleted file mode 120000
index a1a8c6578..000000000
--- a/docs/telepresence/2.11
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.11
\ No newline at end of file
diff --git a/docs/telepresence/2.11/ci/github-actions.md b/docs/telepresence/2.11/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.11/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence on your CI server, either the latest version or one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of the job result.
This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, such as a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n <namespace>` (see the sketch below); the version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that holds your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more workflows and environments.
+ + +
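+As referenced in the prerequisites above, a quick sketch of the Traffic Manager version check, with `<namespace>` standing in for the namespace the Traffic Manager runs in:
+
+```console
+$ kubectl describe service traffic-manager -n <namespace>
+# The Telepresence version is listed in the Labels section of the output.
+```
+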
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This configuration file should be placed at `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the `.telepresence/config.yml` file. This is required for Telepresence to run the GitHub Action properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and replace the placeholders with real actions that run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # First you need to log in to Telepresence with your API key
+         - name: Create kubeconfig file
+           run: |
+             cat << EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact containing them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+This example workflow:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Includes a placeholder for an action that runs integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the **Actions** tab.
diff --git a/docs/telepresence/2.11/community.md b/docs/telepresence/2.11/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.11/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.11/concepts/context-prop.md b/docs/telepresence/2.11/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.11/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers, so context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Propagation means that services and other middleware relay the headers instead of stripping them away, which lets the injected context move on to downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic uses agents running on services throughout the system to inject traffic with HTTP headers (the context). They track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so this use of them is a
+little unorthodox, but the mechanisms are already widely deployed
+because of the prevalence of tracing. The headers facilitate the smart
+routing of requests either to live services in the cluster or to
+services running locally on a developer’s machine. The intercepted
+traffic can be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.11/concepts/devloop.md b/docs/telepresence/2.11/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.11/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, all aimed at making the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building (i.e. compiling/deploying/reloading), 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme, that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.11/concepts/devworkflow.md b/docs/telepresence/2.11/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.11/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency.
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.11/concepts/faster.md b/docs/telepresence/2.11/concepts/faster.md
new file mode 100644
index 000000000..03dc9bd8b
--- /dev/null
+++ b/docs/telepresence/2.11/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and to reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging, and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency, and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.11/concepts/intercepts.md b/docs/telepresence/2.11/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.11/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+    <div class="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar class="TabBar" elevation={0} position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab class="TabHead" value="regular" label="No intercept"/>
+            <Tab class="TabHead" value="global" label="Global intercept"/>
+            <Tab class="TabHead" value="personal" label="Personal intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+
+
+
+# No intercept
+
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+
+
+# Global intercept
+
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas.
+To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).
+
+
+
+
+In the illustration above, **orange**
+requests are being made by Developer 2 on their laptop and the
+**green** ones are made by a teammate,
+Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-header=auto` to have it
+    choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
+intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
+flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
+and, unlike the `--http-header` flag, it cannot be repeated.
+
+The following flags are available:
+
+| Flag                          | Meaning                                                          |
+|-------------------------------|------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                  |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix             |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implicit when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
+
+
diff --git a/docs/telepresence/2.11/concepts/modes.md b/docs/telepresence/2.11/concepts/modes.md
new file mode 100644
index 000000000..3402f07e4
--- /dev/null
+++ b/docs/telepresence/2.11/concepts/modes.md
@@ -0,0 +1,36 @@
+---
+title: "Modes"
+---
+
+# Modes
+
+A Telepresence installation happens in two locations: initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`.
+The main component that gets installed into the cluster is known as the Traffic Manager.
+The Traffic Manager can be put either into single user mode (the default) or into team mode.
+Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way.
+
+## Single user mode
+
+In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default.
+Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service.
+While single user mode is the default, switching back from team mode is done by running:
+```
+telepresence helm install --single-user-mode
+```
+
+## Team mode
+
+In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in.
+Personal intercepts selectively affect HTTP traffic coming into the intercepted workload.
+Being in team mode adds an additional layer of confidence for developers working on the same service, who know their teammates won't interrupt their intercepts by mistake.
+Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are being taken advantage of by everybody on your team.
+To switch from single user mode to team mode, run:
+```
+telepresence helm install --team-mode
+```
+
+## Default intercept types based on modes
+The mode of the Traffic Manager determines the default type of intercept: [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and users must be logged in to intercept.
+When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status.
+![mode defaults](../images/mode-defaults.png)
\ No newline at end of file
diff --git a/docs/telepresence/2.11/doc-links.yml b/docs/telepresence/2.11/doc-links.yml
new file mode 100644
index 000000000..c0881ab3f
--- /dev/null
+++ b/docs/telepresence/2.11/doc-links.yml
@@ -0,0 +1,110 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: "Making the remote local: Faster feedback, collaboration and debugging"
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+    - title: Modes
+      link: concepts/modes
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Share dev environments with preview URLs
+      link: howtos/preview-urls
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Host a cluster in a local VM
+      link: howtos/cluster-in-vm
+    - title: Send requests to an intercepted service
+      link: howtos/request
+    - title: Package and share my intercepts
+      link: howtos/package
+- title: Telepresence for Docker
+  items:
+    - title: What is Telepresence for Docker
+      link: extension/intro
+    - title: Install into Docker-Desktop
+      link: extension/install
+    - title: Intercept into a Docker Container
+      link: extension/intercept
+- title: Telepresence for CI
+  items:
+    - title: GitHub Actions
+      link: ci/github-actions
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+      items:
+        - title: login
+          link: reference/client/login
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Configure intercept using CLI
+          link: reference/intercepts/cli
+        - title: Configure intercept using specifications
+          link: reference/intercepts/specs
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
diff --git a/docs/telepresence/2.11/extension/install.md b/docs/telepresence/2.11/extension/install.md
new file mode 100644
index 000000000..471752775
--- /dev/null
+++ b/docs/telepresence/2.11/extension/install.md
@@ -0,0 +1,39 @@
+---
+title: "Telepresence for Docker installation and connection guide"
+description: "Learn how to install and update Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Install and connect the Telepresence Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install Telepresence for Docker
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+3. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
+4. Click **Install**.
+
+## Connect to Ambassador Cloud through the Telepresence extension
+
+ After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window.
+
+ 3. Sign in with your Google, GitHub, or GitLab account.
+ Ambassador Cloud opens to your profile and displays the API key.
+
+ 4. Copy the API key and paste it into the API key field in the Docker Dashboard.
+
+## Connect to your cluster in Docker Desktop
+
+ 1. Select the desired cluster from the dropdown menu and click **Next**.
+ This cluster is now set as kubectl's current context.
+
+ 2. Click **Connect to [your cluster]**.
+ Your cluster is connected and you can now create [intercepts](../intercept/).
\ No newline at end of file
diff --git a/docs/telepresence/2.11/extension/intercept.md b/docs/telepresence/2.11/extension/intercept.md
new file mode 100644
index 000000000..3868407a8
--- /dev/null
+++ b/docs/telepresence/2.11/extension/intercept.md
@@ -0,0 +1,48 @@
+---
+title: "Create an intercept with Telepresence for Docker"
+description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug your application."
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
+
+## Copy the service you want to intercept
+
+Once you have the Telepresence extension [installed and connected](../install/), you need to run a copy of the service. To do this, use the `docker run` command with the following flags (where `<your-image>` is the image containing your service):
+
+  ```console
+  $ docker run --rm -it --network host <your-image>
+  ```
+
+The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster.
+
+## Intercept a service
+
+In Docker Desktop, the Telepresence extension shows all the services in the namespace.
+
+ 1. Choose a service to intercept and click the **Intercept** button.
+
+ 2. Select the service port for the intercept from the dropdown.
+
+ 3. Enter the target port of the service you previously copied in the Docker container.
+
+ 4. Click **Submit** to create the intercept.
+
+The intercept now shows up in the Docker Telepresence extension.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container.
+
+Click the globe icon next to your intercept to get the preview URL. From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.11/extension/intro.md b/docs/telepresence/2.11/extension/intro.md
new file mode 100644
index 000000000..6a653ae06
--- /dev/null
+++ b/docs/telepresence/2.11/extension/intro.md
@@ -0,0 +1,29 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and let you test your code faster. Traditionally, you need to build a container in Docker with your code changes, push it, wait for it to upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This makes for a slow and cumbersome process when you need to test changes continually.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change.
Because the Telepresence extension also enables you to isolate your machine and operate entirely within the Docker runtime, you can make changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux).
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.11/faqs.md b/docs/telepresence/2.11/faqs.md
new file mode 100644
index 000000000..3c37f1cc5
--- /dev/null
+++ b/docs/telepresence/2.11/faqs.md
@@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted.
This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+During first use, Telepresence will discover this information, make its best guess, and prompt you to confirm or update it.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not; they will be garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of Telepresence, both the application and the cluster-side components, are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.11/howtos/cluster-in-vm.md b/docs/telepresence/2.11/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.11/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine." +--- +
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present on the cluster's nodes, and the Telepresence VIF, which maps them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster, and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also degrade other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents.
Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. + network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: 
"KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.11/howtos/intercepts.md b/docs/telepresence/2.11/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.11/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created and intercept, you can code and debug your associated service locally. + +For a detailed walk-though on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept --port [:] --env-file `. + * For `--port`: specify the port the local instance of your service is running on. 
If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+ * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+ The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster are now routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+ ```console
+ $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+ Using Deployment example-service
+ intercepted
+ Intercept name: example-service
+ State : ACTIVE
+ Workload kind : Deployment
+ Destination : 127.0.0.1:8080
+ Intercepting : all TCP connections
+ ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+ The following are some examples of how to pass the environment variables to your local process:
+ * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+ * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+ * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+ All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+ your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+
+
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
+ diff --git a/docs/telepresence/2.11/howtos/outbound.md b/docs/telepresence/2.11/howtos/outbound.md new file mode 100644 index 000000000..48877df8c --- /dev/null +++ b/docs/telepresence/2.11/howtos/outbound.md @@ -0,0 +1,89 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- +
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `.`. A service running on your laptop can interact with other services on the cluster by name.
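+
+For example, once connected, a locally running application can reach a database inside the cluster using nothing but its service name. A minimal sketch, assuming a hypothetical `postgres` service in a `dev` namespace:
+
+```console
+$ telepresence connect
+# The local psql client resolves the cluster-internal name directly:
+$ psql -h postgres.dev -p 5432 -U admin
+```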
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+ ```
+ $ telepresence connect
+ Launching Telepresence Daemon v2.3.7 (api v3)
+ Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home//.cache/telepresence/logs '' ''"
+ [sudo] password:
+ Connecting to traffic manager...
+ Connected to context default (https://)
+ ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+ ```
+ $ telepresence status
+ Root Daemon: Running
+ Version : v2.3.7 (api 3)
+ Primary DNS : ""
+ Fallback DNS: ""
+ User Daemon: Running
+ Version : v2.3.7 (api 3)
+ Ambassador Cloud : Logged out
+ Status : Connected
+ Kubernetes server : https://
+ Kubernetes context: default
+ Telepresence proxy: ON (networking to the cluster is enabled)
+ Intercepts : 0 total
+ ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+ ```
+ $ curl web-app.emojivoto:80
+ 
+ 
+ 
+ 
+ Emoji Vote
+ ...
+ ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+ ```
+ $ telepresence quit
+ Telepresence Daemon quitting...done
+ ```
+
+When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces ` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop isolated apps or in a virtualized container, you don't need an outbound connection. However, when developing a service that isn't yet deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, so that the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace: to establish the correct origin, a connection would have to be routed through a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager` instead.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+ ```
+ $ telepresence intercept  --namespace  --local-only
+ ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave `. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details. diff --git a/docs/telepresence/2.11/howtos/package.md b/docs/telepresence/2.11/howtos/package.md new file mode 100644 index 000000000..520662eef --- /dev/null +++ b/docs/telepresence/2.11/howtos/package.md @@ -0,0 +1,178 @@ +--- +title: "How to package and share my intercept setup with my teammates" +description: "Use telepresence intercept specs to enable your teammates faster" +--- +# Introduction
+
+While Telepresence takes care of the interception part of your setup, you usually still need to script
+some boilerplate code to run the local part (the handler) of your code.
+
+Classic solutions rely on a Makefile or bash scripts, but these become cumbersome to maintain.
+
+Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): they allow you
+to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic,
+and the actual intercept. Telepresence can then run the specification.
+
+# Getting started
+
+You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification.
+
+Once you have a Kubernetes cluster, you can apply this configuration to start an echo-easy deployment that
+we can then use for our Intercept Specification:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: "echo-easy"
+spec:
+ type: ClusterIP
+ selector:
+ service: echo-easy
+ ports:
+ - name: proxied
+ port: 80
+ targetPort: http
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: "echo-easy"
+ labels:
+ service: echo-easy
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ service: echo-easy
+ template:
+ metadata:
+ labels:
+ service: echo-easy
+ spec:
+ containers:
+ - name: echo-easy
+ image: jmalloc/echo-server
+ ports:
+ - containerPort: 8080
+ name: http
+ resources:
+ limits:
+ cpu: 50m
+ memory: 128Mi
+```
+
+You can create the local yaml file by using
+
+```console
+$ cat > echo-server.yaml < my-intercept.yaml < --port  --env-file  --mechanism http` and adjust the flags as follows:
+ Start the intercept:
+ * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+ * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+ * You can remove the **--mechanism http** flag if you have your traffic-manager set to *team-mode*
+
+4. Answer the question prompts.
+ The example below shows a preview URL for `example-service`, which listens on port 8080.
The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+ ```console
+$ telepresence intercept example-service --mechanism http --ingress-host ambassador.ambassador --ingress-port 80 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env
+
+ Using deployment example-service
+ intercepted
+ Intercept name : example-service
+ State : ACTIVE
+ Destination : 127.0.0.1:8080
+ Service Port Identifier: http
+ Intercepting : HTTP requests that match all of:
+ header("x-telepresence-intercept-id") ~= regexp(":example-service")
+ Preview URL : https://.preview.edgestack.me
+ Layer 5 Hostname : dev-environment.edgestack.me
+ ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+ Here are some examples of how to pass the environment variables to your local process:
+ * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+ * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+ * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.
+
+
+ Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+
+
+7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.
+
+ Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster as normal.
+
+8. Share with a teammate.
+
+ You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+ You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization, log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove ` also removes all access to the preview URL.
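+
+For example, assuming the intercept created in the earlier steps is named `example-service`, revoking its preview URL from the command line is a one-liner (a sketch of the command mentioned above):
+
+```console
+$ telepresence preview remove example-service
+```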
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
+2. Click on the service and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove ` diff --git a/docs/telepresence/2.11/howtos/request.md b/docs/telepresence/2.11/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.11/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
+ 2. Navigate to the desired service Intercepts page
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
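+
+As an illustration, the **CURL** query for a personal intercept typically carries the `x-telepresence-intercept-id` header that the intercept matches on (the header name appears in the intercept output shown earlier in these docs). A hand-written equivalent might look like the sketch below, where the header value and hostname are placeholders, not real values:
+
+```console
+$ curl -H "x-telepresence-intercept-id: <intercept-id>:example-service" https://dev-environment.edgestack.me/
+```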
diff --git a/docs/telepresence/2.11/images/container-inner-dev-loop.png b/docs/telepresence/2.11/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.11/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.11/images/docker-extension.png b/docs/telepresence/2.11/images/docker-extension.png new file mode 100644 index 000000000..886946f5d Binary files /dev/null and b/docs/telepresence/2.11/images/docker-extension.png differ diff --git a/docs/telepresence/2.11/images/docker-header-containers.png b/docs/telepresence/2.11/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.11/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.11/images/github-login.png b/docs/telepresence/2.11/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.11/images/github-login.png differ diff --git a/docs/telepresence/2.11/images/logo.png b/docs/telepresence/2.11/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.11/images/logo.png differ diff --git a/docs/telepresence/2.11/images/mode-defaults.png b/docs/telepresence/2.11/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/2.11/images/mode-defaults.png differ diff --git a/docs/telepresence/2.11/images/split-tunnel.png b/docs/telepresence/2.11/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.11/images/split-tunnel.png differ diff --git a/docs/telepresence/2.11/images/tracing.png b/docs/telepresence/2.11/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.11/images/tracing.png differ diff --git a/docs/telepresence/2.11/images/trad-inner-dev-loop.png b/docs/telepresence/2.11/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.11/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.11/images/tunnelblick.png b/docs/telepresence/2.11/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.11/images/tunnelblick.png differ diff --git a/docs/telepresence/2.11/images/vpn-dns.png b/docs/telepresence/2.11/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.11/images/vpn-dns.png differ diff --git a/docs/telepresence/2.11/install/cloud.md b/docs/telepresence/2.11/install/cloud.md new file mode 100644 index 000000000..9bcf9e63e --- /dev/null +++ b/docs/telepresence/2.11/install/cloud.md @@ -0,0 +1,43 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. 
+For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:
+
+```bash
+$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
+ masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28
+
+$ gcloud compute firewall-rules list \
+ --filter 'name~^gke-tele-webhook-gke' \
+ --format 'table(
+ name,
+ network,
+ direction,
+ sourceRanges.list():label=SRC_RANGES,
+ allowed[].map().firewall_rule().list():label=ALLOW,
+ targetTags.list():label=TARGET_TAGS
+ )'
+
+NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS
+gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node
+# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node
+
+$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
+ --action ALLOW \
+ --direction INGRESS \
+ --source-ranges 172.16.0.0/28 \
+ --rules tcp:8443 \
+ --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
+Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
+Creating firewall...done.
+NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
+gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False
+``` diff --git a/docs/telepresence/2.11/install/helm.md b/docs/telepresence/2.11/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.11/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+## Before you begin
+
+Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1.
If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+ ```shell
+ kubectl create namespace ambassador
+ ```
+
+2. Install the Telepresence Traffic Manager with the following command:
+
+ ```shell
+ helm install traffic-manager --namespace ambassador datawire/telepresence
+ ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+ server: https://127.0.0.1
+ extensions:
+ - name: telepresence.io
+ extension:
+ manager:
+ namespace: staging
+ name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager.
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+ create: true
+ namespaced: true
+ namespaces:
+ - dev
+ - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
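+
+If you're ever unsure which install has claimed a namespace, you can inspect that ConfigMap directly; the Helm release that owns it is recorded in its annotations (the same `meta.helm.sh/release-namespace` metadata that appears in the collision error below). A sketch, using `staging` as an illustrative namespace:
+
+```console
+$ kubectl get configmap traffic-manager-claim --namespace staging -o jsonpath='{.metadata.annotations}'
+```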
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, fix the overlap either by removing `staging` from the first install, or from the second.
+
+#### Namespace scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+ create: true
+
+ # These are the users or groups to which the user RBAC will be bound.
+ # This MUST be set.
+ subjects: {}
+ # - kind: User
+ # name: jane
+ # apiGroup: rbac.authorization.k8s.io
+
+ namespaced: true
+
+ namespaces:
+ - dev
+ - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: staging
+ labels:
+ app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence/2.11/install/index.md b/docs/telepresence/2.11/install/index.md new file mode 100644 index 000000000..624cb33d6 --- /dev/null +++ b/docs/telepresence/2.11/install/index.md @@ -0,0 +1,153 @@ +import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS.
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the versions you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+ diff --git a/docs/telepresence/2.11/install/manager.md b/docs/telepresence/2.11/install/manager.md new file mode 100644 index 000000000..4efdc3c69 --- /dev/null +++ b/docs/telepresence/2.11/install/manager.md @@ -0,0 +1,85 @@ +# Install/Uninstall the Traffic Manager
+
+Telepresence uses a traffic manager to send/receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the traffic manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The telepresence CLI can install the traffic manager for you. The basic install deploys the same version as the client being used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+ ```shell
+ telepresence helm install
+ ```
+
+### Customizing the Traffic Manager.
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a values.yaml file with your config values.
+
+2. Run the install command with the values flag set to the path to your values file.
+
+ ```shell
+ telepresence helm install --values values.yaml
+ ```
+
+
+## Upgrading/Downgrading the Traffic Manager.
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag.
+
+ ```shell
+ telepresence helm install --upgrade
+ ```
+
+
+## Uninstall
+
+The telepresence CLI can uninstall the traffic manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+ ```shell
+ telepresence helm uninstall
+ ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager`, you can add the flag `--set ambassador-agent.enabled=false` to not include the ambassador-agent.
Emissary and Edge Stack both already include this agent within their deployments.
+
+If your namespace runs with tight security parameters, you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=`.
+The [API Key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override the API key given via Helm.
+
+### Creating a secret manually
+The Ambassador agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself. This API key will always be used.
+
+ ```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+ name: 
+ labels:
+ app.kubernetes.io/name: agent-cloud-token
+data:
+ CLOUD_CONNECT_TOKEN: 
+EOF
+ ``` \ No newline at end of file diff --git a/docs/telepresence/2.11/install/migrate-from-legacy.md b/docs/telepresence/2.11/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.11/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context 
+Using Deployment myserver
+intercepted
+ Intercept name : myserver
+ State : ACTIVE
+ Workload kind : Deployment
+ Destination : 127.0.0.1:9090
+ Intercepting : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command maps to and automatically
+run it. So you can get started with Telepresence today using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command | Telepresence Command |
+|--------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload | intercept $workload |
+| --expose localPort[:remotePort] | intercept --port localPort[:remotePort] |
+| --swap-deployment $workload --run-shell | intercept $workload -- bash |
+| --swap-deployment $workload --run $cmd | intercept $workload -- $cmd |
+| --swap-deployment $workload --docker-run $cmd | intercept $workload --docker-run -- $cmd |
+| --run-shell | connect -- bash |
+| --run $cmd | connect -- $cmd |
+| --env-file,--env-json | --env-file, --env-json (haven't changed) |
+| --context,--namespace | --context, --namespace (haven't changed) |
+| --mount,--docker-mount | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that that flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain.
There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+ telepresence uninstall [flags] { --agent  | --all-agents }
+
+Flags:
+ -d, --agent uninstall intercept agent on specific deployments
+ -a, --all-agents uninstall intercept agent on all deployments
+ -h, --help help for uninstall
+ -n, --namespace string If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence/2.11/install/upgrade.md b/docs/telepresence/2.11/install/upgrade.md new file mode 100644 index 000000000..8272b4844 --- /dev/null +++ b/docs/telepresence/2.11/install/upgrade.md @@ -0,0 +1,81 @@ +--- +description: "How to upgrade your installation of Telepresence and install previous versions." +--- +
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " .
'.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade +the Traffic Manager in your cluster. diff --git a/docs/telepresence/2.11/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.11/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds. + Accelerate your inner development loop with hot reload using your + existing IDE, and workflow. +

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.11/quick-start/demo-node.md b/docs/telepresence/2.11/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.

3. Vote for 🍩 on your local leaderboard, and you'll see that the bug is fixed!

    Congratulations! You have successfully set up a local development environment, and tested the fix locally.

## 4. Testing our fix

A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.

1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
   `telepresence intercept web --port 8080`

   When prompted for ingress configuration, all default values should be correct as displayed below.

   Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!

## 5. Preview URLs

Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs enable this easy collaboration.

1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.

Now you're able to share the fix running in your local environment with your team!

   To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.11/quick-start/demo-react.md b/docs/telepresence/2.11/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.11/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.

  While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.

## 1. Download the demo cluster archive

1.

2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).

    This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.

    ```
    cd ~/Downloads
    unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
    cd ambassador-demo-cluster
    ./install.sh
    # type y to install the npm dependencies when asked
    ```

3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
    `kubectl get nodes`

    ```
    $ kubectl get nodes

    NAME                STATUS   ROLES                  AGE     VERSION
    tpdemo-prod-1234    Ready    control-plane,master   5d10h   v1.20.2+k3s1
    ```

4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not yet running):
`telepresence status`

    ```
    $ telepresence status

    Root Daemon: Not running
    User Daemon: Not running
    ```

    macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.

    You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!

## 2. Test Telepresence

[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.

1. Connect to the cluster (this requires **root** privileges and will ask for your password):
`telepresence connect`

    ```
    $ telepresence connect

    Launching Telepresence Daemon
    ...
    Connected to context default (https://)
    ```

2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
`curl -ik https://kubernetes.default`

    ```
    $ curl -ik https://kubernetes.default

    HTTP/1.1 401 Unauthorized
    Cache-Control: no-cache, private
    Content-Type: application/json
    ...

    ```

    The 401 response is expected. What's important is that you were able to contact the API.

    Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.

## 3. Set up the sample application

Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.

1. Clone the emojivoto app:
`git clone https://github.com/datawire/emojivoto.git`

1. Deploy the app to your cluster:
`kubectl apply -k emojivoto/kustomize/deployment`

1. Change the kubectl namespace:
`kubectl config set-context --current --namespace=emojivoto`
1. List the services:
`kubectl get svc`

    ```
    $ kubectl get svc

    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
    voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
    web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
    web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
    ```

1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form `service.namespace`.

    Congratulations, you can now access services running in your cluster by name from your laptop!

## 4. Test app

1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.

1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and notice that the leaderboard does not actually update. An error is also shown in the browser dev console:
`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`

    Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.

## 5. Run a service on your laptop

Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.

1. **In a new terminal window**, change into the repo directory and build the application:

    `cd emojivoto`
    `make web-app-local`

    ```
    $ make web-app-local

    ...
    webpack 5.34.0 compiled successfully in 4326 ms
    ✨  Done in 5.38s.
    ```

2. Change into the service's code directory and start the server:

    `cd emojivoto-web-app`
    `yarn webpack serve`

    ```
    $ yarn webpack serve

    ...
    ℹ 「wds」: Project is running at http://localhost:8080/
    ...
    ℹ 「wdm」: Compiled successfully.
    ```

3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 generates the same error as the application deployed in the cluster.

    Victory, your local React server is running a-ok!

## 6. Make a code change
We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the handling of the 🍩 voting error.

1. In the terminal running webpack, stop the server with `Ctrl+c`.

1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).

1. Run webpack to fully recompile the code, then start the server again:

    `yarn webpack`
    `yarn webpack serve`

1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you now see an error message instead, improving the user experience.

## 7. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.

    This command must be run in the terminal window where you ran the script, because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.

1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
`telepresence intercept web-app --namespace emojivoto --port 8080`

    ```
    $ telepresence intercept web-app --namespace emojivoto --port 8080

    Using deployment web-app
    intercepted
        Intercept name: web-app-emojivoto
        State         : ACTIVE
        ...
    ```

2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.

    The web-app Deployment is being intercepted and rerouted to the server on your laptop!

    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.11/quick-start/go.md b/docs/telepresence/2.11/quick-start/go.md new file mode 100644 index 000000000..c926d7b05 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services.
Test out the application:

1. Go to the Emojivoto webapp and vote for some emojis.

    If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.

2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.

## 3. Run the Docker container

The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.

1. Run the Docker container locally, by running this command inside your local terminal:

2. The application is failing due to a small bug in this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:

    ```
    $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

    Resolved method descriptor:
    rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

    Request metadata to send:
    (empty)

    Response headers received:
    (empty)

    Response trailers received:
    content-type: application/grpc
    Sent 0 requests and received 0 responses
    ERROR:
      Code: Unknown
      Message: ERROR
    ```

3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` import by deleting line 5.

    ```go
    3  import (
    4   "context"
    5   "fmt" // DELETE THIS LINE
    6
    7   pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
    ```

    and also replace line `21`:

    ```go
    20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
    21      return nil, fmt.Errorf("ERROR")
    22  }
    ```
    with
    ```go
    20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
    21      return pS.vote(":doughnut:")
    22  }
    ```
    Then save the file (`Ctrl+s` on Windows, `Cmd+s` on Mac, or `Menu -> File -> Save`) and verify that the error is now fixed:

    ```
    $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

    Resolved method descriptor:
    rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

    Request metadata to send:
    (empty)

    Response headers received:
    content-type: application/grpc

    Response contents:
    {
    }

    Response trailers received:
    (empty)
    Sent 0 requests and received 1 response
    ```

## 4. Telepresence intercept

1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic destined for the service and route it through your local copy.
Run the following command inside the container:

    ```
    $ telepresence intercept voting --port 8081:8080

    Using Deployment voting
    intercepted
        Intercept name         : voting
        State                  : ACTIVE
        Workload kind          : Deployment
        Destination            : 127.0.0.1:8081
        Service Port Identifier: 8080
        Volume Mount Point     : /tmp/telfs-XXXXXXXXX
        Intercepting           : all TCP connections
    ```
    Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.

You have created an intercept to tell Telepresence where to send traffic; a quick way to confirm it from the CLI is sketched below.
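As a sanity check, `telepresence list` shows which workloads are currently intercepted; a minimal sketch (output abbreviated and illustrative):

```
$ telepresence list
voting: intercepted
```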
Traffic that was destined for the `voting-svc` service is now routed to the local Dockerized version of the service: the intercept captures *all* traffic to `voting-svc` and sends it to the fixed copy running in your container.

    Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!

## 5. Telepresence intercept with a preview URL

Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by it, all thanks to the preview URL.

1. First, leave the current intercept:

    ```
    $ telepresence leave voting
    ```

2. Then log in to Telepresence:

3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.

4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally. When you're finished, see the cleanup sketch below.
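A short cleanup sketch for when you're done experimenting, assuming the `voting` intercept above (`telepresence quit` stops the local daemons):

```
$ telepresence leave voting
$ telepresence quit
```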
+ +## What's Next? + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.11/quick-start/index.md b/docs/telepresence/2.11/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.11/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.11/quick-start/qs-cards.js b/docs/telepresence/2.11/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+ + + + + + Collaborating + + + + Use preview URLS to collaborate with your colleagues and others + outside of your organization. + + + + + + + + Outbound Sessions + + + + While connected to the cluster, your laptop can interact with + services as if it was another pod in the cluster. + + + + + + + + FAQs + + + + Learn more about uses cases and the technical implementation of + Telepresence. + + + + +
+ ); +} diff --git a/docs/telepresence/2.11/quick-start/qs-go.md b/docs/telepresence/2.11/quick-start/qs-go.md new file mode 100644 index 000000000..2e140f6a7 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server when you make code changes later. Install it by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`

   ```
   $ go get github.com/pilu/fresh

   $ $GOPATH/bin/fresh

   ...
   10:23:41 app         | Welcome to the DataProcessingGoService!
   ```

   Install Go from here and set your GOPATH if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local Go server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
       Intercept name: dataprocessingservice
       State         : ACTIVE
       Workload kind : Deployment
       Destination   : 127.0.0.1:3000
       Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.11/quick-start/qs-java.md b/docs/telepresence/2.11/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.11/quick-start/qs-node.md b/docs/telepresence/2.11/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+

2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
`curl -ik https://kubernetes.default`

 Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed.

   ```
   $ curl -ik https://kubernetes.default

   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...

   ```

   The 401 response is expected. What's important is that you were able to contact the API.

 Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.

## 3. Install a sample Node.js application

Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.

 While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java, Python using Flask, and Python using FastAPI if you prefer.

1. Start by installing a sample application that consists of multiple services:
`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml`

   ```
   $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml

   deployment.apps/dataprocessingservice created
   service/dataprocessingservice created
   ...

   ```

2. Give your cluster a few moments to deploy the sample application.

   Use `kubectl get pods` to check the status of your pods:

   ```
   $ kubectl get pods

   NAME                                     READY   STATUS    RESTARTS   AGE
   verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s
   verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s
   dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s
   ```

3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080).

4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.

 Congratulations, you can now access services running in your cluster by name from your laptop!

## 4. Set up a local development environment
You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green.

 Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused, then you should be good to go.

1. Clone the web app’s GitHub repo:
`git clone https://github.com/datawire/edgey-corp-nodejs.git`

   ```
   $ git clone https://github.com/datawire/edgey-corp-nodejs.git

   Cloning into 'edgey-corp-nodejs'...
   remote: Enumerating objects: 441, done.
   ...
   ```

2. Change into the repo directory, then into DataProcessingService:
`cd edgey-corp-nodejs/DataProcessingService/`

3. Install the dependencies and start the Node server:
`npm install && npm start`

   ```
   $ npm install && npm start

   ...
   Welcome to the DataProcessingService!
   { _: [] }
   Server running on port 3000
   ```

 Install Node.js from here if needed.

4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

 Victory, your local Node server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

 See this doc for more information on how Telepresence resolves DNS.

 The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file, and the Node server will automatically reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

 We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
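As a quick sanity check, you can also confirm the change from the terminal. Assuming the local Node server from the previous section is still running, the `/color` endpoint should now report the new value:

```
$ curl localhost:3000/color

"orange"
```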
+

## 7. Create a Preview URL

Create a personal intercept with a preview URL, meaning that only
traffic coming from the preview URL will be intercepted. This makes it
easy to share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and
   sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how requests enter
   your cluster. Please Select the ingress to use.

   1/4: What's your ingress' IP address?
   You may use an IP address or a DNS name (this is usually a
   "service.namespace" DNS name).

   [default: dataprocessingservice.default]: verylargejavaservice.default

   2/4: What's your ingress' TCP port number?

   [default: 80]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

   [default: n]:

   4/4: If required by your ingress, specify a different hostname
   (TLS-SNI, HTTP "Host" header) to be used in requests.

   [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
   Intercept name : dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : HTTP requests that match all of:
   header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
   Preview URL : https://.preview.edgestack.me
   Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

 The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.

## What's Next?

 diff --git a/docs/telepresence/2.11/quick-start/qs-python-fastapi.md b/docs/telepresence/2.11/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)**

+
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+
If `pip` and `python` on your system invoke Python 3: `pip install fastapi uvicorn requests && python app.py`
If they invoke Python 2, call Python 3 explicitly, since FastAPI requires Python 3: `pip3 install fastapi uvicorn requests && python3 app.py`

   ```
   $ pip install fastapi uvicorn requests && python app.py

   Collecting fastapi
   ...
   Application startup complete.

   ```

 Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

 Victory, your local service is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

 The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file, and the Python server will automatically reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

 We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
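You can verify the change from the terminal as well. With the local Python server from step 3 still running, the `/color` endpoint should now return the new value:

```
$ curl localhost:3000/color

"orange"
```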
+

## 7. Create a Preview URL

Create a personal intercept with a preview URL, meaning that only
traffic coming from the preview URL will be intercepted. This makes it
easy to share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and
   sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how requests enter
   your cluster. Please Select the ingress to use.

   1/4: What's your ingress' IP address?
   You may use an IP address or a DNS name (this is usually a
   "service.namespace" DNS name).

   [default: dataprocessingservice.default]: verylargejavaservice.default

   2/4: What's your ingress' TCP port number?

   [default: 80]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

   [default: n]:

   4/4: If required by your ingress, specify a different hostname
   (TLS-SNI, HTTP "Host" header) to be used in requests.

   [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
   Intercept name : dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : HTTP requests that match all of:
   header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
   Preview URL : https://.preview.edgestack.me
   Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

 The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.

## What's Next?

 diff --git a/docs/telepresence/2.11/quick-start/qs-python.md b/docs/telepresence/2.11/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.11/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)**

+
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...

   ```

 Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

 Victory, your local Python server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

 The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file, and the Python server will automatically reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

 We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
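Here too, you can double-check the change from the terminal. With the local Flask server from step 3 still running, the `/color` endpoint should now return the new value:

```
$ curl localhost:3000/color

"orange"
```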
+

## 7. Create a Preview URL

Create a personal intercept with a preview URL, meaning that only
traffic coming from the preview URL will be intercepted. This makes it
easy to share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and
   sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how requests enter
   your cluster. Please Select the ingress to use.

   1/4: What's your ingress' IP address?
   You may use an IP address or a DNS name (this is usually a
   "service.namespace" DNS name).

   [default: dataprocessingservice.default]: verylargejavaservice.default

   2/4: What's your ingress' TCP port number?

   [default: 80]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

   [default: n]:

   4/4: If required by your ingress, specify a different hostname
   (TLS-SNI, HTTP "Host" header) to be used in requests.

   [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
   Intercept name : dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : HTTP requests that match all of:
   header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
   Preview URL : https://.preview.edgestack.me
   Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

 The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.

## What's Next?
+ + diff --git a/docs/telepresence/2.11/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.11/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.11/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.11/redirects.yml b/docs/telepresence/2.11/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.11/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.11/reference/architecture.md b/docs/telepresence/2.11/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.11/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+

## Telepresence CLI

The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.

## Telepresence Daemons
Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the
cluster's network, handling both ordinary traffic to the cluster and intercepted traffic.

### User-Daemon
The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
All requests from and to the cluster go through this Daemon.

When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the
existing open-source User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.

### Root-Daemon
The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
please refer to this blog post:
[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
relevant inbound and outbound traffic, and tracking active intercepts.

The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
deployment; if it is missing, it will attempt to install it using its embedded Helm chart.

When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
URL, it forwards the request to the ingress service that was specified when the Preview URL was created.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
or `kubectl describe pod <pod-name>`.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
pod usually handling requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
domains from authorized users to the appropriate Traffic Manager.
+
Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing the users who have
accessed them, and deleting them.

## Pod-Daemon

The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts entirely
within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.

The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
should be run on, and to provide configuration similar to what would be provided when using Telepresence intercepts from the command line.

After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent sidecar](#traffic-agent)
on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
If you deploy the Pod-Daemon manually, the Preview URL can be obtained from its logs.

The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts against development images built
by CI, so that the changes from a pull request can be quickly visualized in a live cluster before they land, by following the Preview URL
link posted to the associated GitHub pull request.

See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews,
or for a reference on how the Pod-Daemon can be manually deployed to the cluster.


## Changes from Service Preview

Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.11/reference/client.md b/docs/telepresence/2.11/reference/client.md new file mode 100644 index 000000000..478e07cef --- /dev/null +++ b/docs/telepresence/2.11/reference/client.md @@ -0,0 +1,31 @@ +--- +description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Client reference + +The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`. + +## Commands + +A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.

| Command | Description |
| --- | --- |
| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs you out of Ambassador Cloud |
| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
| `status` | Shows the current connectivity status |
| `quit` | Tells the Telepresence daemons to quit |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service; run it followed by the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept <service-name> --port <TCP-port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a Docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
| `loglevel` | Temporarily changes the log level of the traffic-manager, traffic-agents, and user and root daemons |
| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
| `version` | Shows the version of the Telepresence CLI and the Traffic Manager (if connected) |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster; used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment | diff --git a/docs/telepresence/2.11/reference/client/login.md b/docs/telepresence/2.11/reference/client/login.md new file mode 100644 index 000000000..fc90ea385 --- /dev/null +++ b/docs/telepresence/2.11/reference/client/login.md @@ -0,0 +1,61 @@ +# Telepresence Login

```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```

## Description

Use `telepresence login` to explicitly authenticate with [Ambassador
Cloud](https://www.getambassador.io/docs/cloud).
Unless the
[`skipLogin` option](../../config) is set, other commands will
automatically invoke the interactive `telepresence login` procedure
as necessary, so it is rarely necessary to run `telepresence login`
explicitly; you should only truly need to run it yourself when you
require a non-interactive login.

The normal interactive login procedure involves launching a web
browser, a user interacting with that web browser, and finally having
the web browser make callbacks to the local Telepresence process. If
it is not possible to do this (perhaps you are using a headless remote
box via SSH, or are using Telepresence in CI), then you may instead
have Ambassador Cloud issue an API key that you pass to `telepresence
login` with the `--apikey` flag.

## Telepresence

When you run `telepresence login`, the CLI installs
an enhanced Telepresence binary. This enhanced free client of the [User
Daemon](../../architecture) communicates with Ambassador Cloud to
provide freemium features, including the ability to create intercepts from
Ambassador Cloud.

## Acquiring an API key

1. Log in to Ambassador Cloud at https://app.getambassador.io/ .

2. Click on your profile icon in the upper-left: ![Screenshot with the
   mouse pointer over the upper-left profile icon](./login/apikey-2.png)

3. Click on the "API Keys" menu button: ![Screenshot with the mouse
   pointer over the "API Keys" menu button](./login/apikey-3.png)

4. Click on the "generate new key" button in the upper-right:
   ![Screenshot with the mouse pointer over the "generate new key"
   button](./login/apikey-4.png)

5. Enter a description for the key (perhaps the name of your laptop,
   or perhaps "CI"), and click "generate api key" to create it.

You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.

Telepresence will use that "master" API key to create narrower keys
for different components of Telepresence. You will see these appear
in the Ambassador Cloud web interface.
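For example, a non-interactive login in a CI job might look like the following sketch; the environment variable name here is just an assumption for illustration:

```console
# The CI system is assumed to inject the key as TELEPRESENCE_API_KEY.
$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
```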
\ No newline at end of file diff --git a/docs/telepresence/2.11/reference/client/login/apikey-2.png b/docs/telepresence/2.11/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.11/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.11/reference/client/login/apikey-3.png b/docs/telepresence/2.11/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.11/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.11/reference/client/login/apikey-4.png b/docs/telepresence/2.11/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.11/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.11/reference/cluster-config.md b/docs/telepresence/2.11/reference/cluster-config.md new file mode 100644 index 000000000..087bbf9af --- /dev/null +++ b/docs/telepresence/2.11/reference/cluster-config.md @@ -0,0 +1,363 @@ +import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';

+# Cluster-side configuration

+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).

+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.

+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.

+### Values
+To add configuration, create a YAML file with the configuration values and then use it when executing `telepresence helm install [--upgrade] --values <values-file>`

+## Client Configuration

+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration)

+### Agent Configuration

+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.

+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:

| Value | Resulting action |
|--------------|------------------------------------------------------------------------------------------------------------------------------|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http` | Telepresence will use http |
| `http2` | Telepresence will use http2 |

+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`. 
The following protocols
+are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Envoy Configuration
+
+The `agent.envoy` structure contains three values:
+
+| Setting      | Meaning                                                   |
+|--------------|-----------------------------------------------------------|
+| `logLevel`   | Log level used by the Envoy proxy. Defaults to "warning"  |
+| `serverPort` | Port used by the Envoy server. Default 18000.             |
+| `adminPort`  | Port used for Envoy administration. Default 19000.        |
+
+#### Image Configuration
+
+The `agent.image` structure contains the following values:
+
+| Setting    | Meaning                                                                       |
+|------------|-------------------------------------------------------------------------------|
+| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire".   |
+| `name`     | The name of the image. Retrieved from Ambassador Cloud if not set.            |
+| `tag`      | The tag of the image. Retrieved from Ambassador Cloud if not set.             |
+
+#### Log level
+
+The `agent.logLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.
+
+#### Resources
+
+The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.
+
+## TLS
+
+Suppose that other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`) you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod. The Secret will be mounted into the traffic agent's container.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air-gapped cluster
+
+
+If your cluster is on an isolated network such that it cannot
+communicate with Ambassador Cloud, then some additional configuration
+is required to acquire a license key in order to use personal
+intercepts.
+
+### Create a license
+
+1. Go to the licenses page in Ambassador Cloud.
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:
+
+   ```
+   $ telepresence current-cluster-id
+
+   Cluster ID: <your cluster ID>
+   ```
+
+4. Click *Generate API Key* to finish generating the license.
+
+5. On the licenses page, download the license file associated with your cluster.
+
+### Add license to cluster
+There are two separate ways you can add the license to your cluster: manually creating and deploying
+the license secret, or having the Helm chart manage the secret.
+
+You only need to do one of the two options.
+
+#### Manual deploy of license secret
+
+1. Use this command to generate a Kubernetes Secret config using the license file:
+
+   ```
+   $ telepresence license -f <path-to-license-file>
+
+   apiVersion: v1
+   data:
+     hostDomain: <base64-encoded value>
+     license: <base64-encoded value>
+   kind: Secret
+   metadata:
+     creationTimestamp: null
+     name: systema-license
+     namespace: ambassador
+   ```
+
+2. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
+the following into a file (for the example we'll assume it's called license-values.yaml):
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     secret:
+       # This tells the helm chart not to create the secret since you've created it yourself
+       create: false
+   ```
+
+4. Install the Helm chart into the cluster:
+
+   ```
+   telepresence helm install -f license-values.yaml
+   ```
+
+5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
+pulled and in a registry your cluster can pull from.
+
+6. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agent.
+
+#### Helm chart manages the secret
+
+1. Get the JWT token from the downloaded license file:
+
+   ```
+   $ cat ~/Downloads/ambassador.License_for_yourcluster
+   eyJhbGnotarealtoken.butanexample
+   ```
+
+2. Create the following values file, substituting your real JWT token for the example one below
+(for this example we'll assume it's placed in a file called license-values.yaml):
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     # This is the value from the license file you download. This value is an example and will not work
+     value: eyJhbGnotarealtoken.butanexample
+     secret:
+       # This tells the helm chart to create the secret
+       create: true
+   ```
+
+3. Install the Helm chart into the cluster:
+
+   ```
+   telepresence helm install -f license-values.yaml
+   ```
+
+Users will now be able to use preview intercepts with the
+`--preview-url=false` flag.
Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Ignore Certain Volume Mounts
+
+The `telepresence.getambassador.io/inject-ignore-volume-mounts` annotation can be used to make the injector ignore the volume mounts named in its comma-separated value. The specified volume mounts from the original container will not be appended to the agent sidecar container.
+
+```diff
+ spec:
+   template:
+     metadata:
+       annotations:
++        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service points at a port number, then in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+
+If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).
+
+For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: your-service
+spec:
+  type: ClusterIP
+  selector:
+    service: your-service
+  ports:
+    - port: 80
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: your-service
+  labels:
+    service: your-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: your-service
+  template:
+    metadata:
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
+      labels:
+        service: your-service
+    spec:
+      containers:
+        - name: your-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+```
diff --git a/docs/telepresence/2.11/reference/config.md b/docs/telepresence/2.11/reference/config.md
new file mode 100644
index 000000000..e69c77daa
--- /dev/null
+++ b/docs/telepresence/2.11/reference/config.md
@@ -0,0 +1,349 @@
+# Laptop-side configuration
+
+There are a number of configuration values that can be tweaked to change how Telepresence behaves.
+These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
+One important exception is the namespace of the Traffic Manager itself, which, if it differs from the default of `ambassador`, [must be set](#manager) locally per cluster to be able to connect.
+
+## Global Configuration
+
+Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
+To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.
+
+### Values
+
+The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
+```yaml
+client:
+  timeouts:
+    agentInstall: 1m
+    intercept: 10s
+  logLevels:
+    userDaemon: debug
+  images:
+    registry: privateRepo # This overrides the default docker.io/datawire repo
+    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+  cloud:
+    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+  grpc:
+    maxReceiveSize: 10Mi
+  telepresenceAPI:
+    port: 9980
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    lookupTimeout: 30s
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Timeouts
+
+Values for `client.timeouts` are all durations either as a number of seconds
+or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
+can be fractional (`1.5h`) or combined (`2h45m`).
+
+These are the valid fields for the `timeouts` key:
+
+| Field                   | Description                                                                          | Type                                                                                                      | Default    |
+|-------------------------|--------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|------------|
+| `agentInstall`          | Waiting for Traffic Agent to be installed                                            | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 2 minutes  |
+| `apply`                 | Waiting for a Kubernetes manifest to be applied                                      | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 1 minute   |
+| `clusterConnect`        | Waiting for cluster to be connected                                                  | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 20 seconds |
+| `intercept`             | Waiting for an intercept to become active                                            | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 5 seconds  |
+| `proxyDial`             | Waiting for an outbound connection to be established                                 | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 5 seconds  |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards                     | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 20 seconds |
+| `trafficManagerAPI`     | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful   | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 15 seconds |
+| `helm`                  | Waiting for Helm operations (e.g. `install`) on the Traffic Manager                  | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]  | 2 minutes  |
+
+#### Log Levels
+
+Values for the `client.logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+
+For whichever log level you select, you will get logs labeled with that level and of higher severity.
+(E.g., if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`.)
+
+These are the valid fields for the `client.logLevels` key:
+
+| Field        | Description                                                          | Type                                         | Default |
+|--------------|----------------------------------------------------------------------|----------------------------------------------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log)  | [loglevel][logrus-level] [string][yaml-str]  | debug   |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log)    | [loglevel][logrus-level] [string][yaml-str]  | info    |
+
+#### Images
+Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
+
+These are the valid fields for the `client.images` key:
+
+| Field               | Description                                                                                                                                                                                                                                                                                                                                                                                      | Type                                                | Default              |
+|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|----------------------|
+| `registry`          | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept.     | Docker registry name [string][yaml-str]             | `docker.io/datawire` |
+| `agentImage`        | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.*                                                                                                                                           | qualified Docker image name [string][yaml-str]      | (unset)              |
+| `webhookRegistry`   | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.*                                                                                                                                                                                               | Docker registry name [string][yaml-str]             | `docker.io/datawire` |
+| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.*                                                                                                                                                  | non-qualified Docker image name [string][yaml-str]  | (unset)              |
+
+#### Cloud
+Values for `client.cloud` are listed below and their type varies, so please see the chart for the expected type for each config value.
+These fields control how the client interacts with the Cloud service.
+
+| Field             | Description                                                                                                                                                                                                                                  | Type                                        | Default              |
+|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|----------------------|
+| `skipLogin`       | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster.                        | [bool][yaml-bool]                           | false                |
+| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config.   | [duration][go-duration] [string][yaml-str]  | 168h                 |
+| `systemaHost`     | The host used to communicate with Ambassador Cloud                                                                                                                                                                                             | [string][yaml-str]                          | app.getambassador.io |
+| `systemaPort`     | The port used with `systemaHost` to communicate with Ambassador Cloud                                                                                                                                                                          | [string][yaml-str]                          | 443                  |
+
+Telepresence attempts to auto-detect if the cluster is capable of
+communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with
+Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled
+as well, or would like it to not attempt to communicate with
+Ambassador Cloud at all (even for the auto-detection), then be sure to
+set the `skipLogin` value to `true`.
+
+Reminder: To use personal intercepts, which normally require a login,
+you must have a license key in your cluster and specify which
+`agentImage` should be installed by also adding the following to your
+`config.yml`:
+
+```yaml
+images:
+  agentImage: <registry>/<agent-image>
+```
+
+#### Grpc
+The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Intercept
+The `intercept` key controls how Telepresence intercepts communications to the intercepted service.
+
+The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".
+
+The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:
+
+| Value        | Resulting action                                                                                          |
+|--------------|------------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2     |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port          |
+| `http`       | Telepresence will use http                                                                                  |
+| `http2`      | Telepresence will use http2                                                                                 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+#### Daemons
+
+`client.daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+### DNS
+
+The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
+
+| Field             | Description                                                                                                                                             | Type                                         | Default                                             |
+|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|-----------------------------------------------------|
+| `localIP`         | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved.                            | IP address [string][yaml-str]                | first `nameserver` mentioned in `/etc/resolv.conf`  |
+| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver). Can be globally configured in the Helm chart.      | [sequence][yaml-seq] of [strings][yaml-str]  | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]`   |
+| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str]  | `[]`                                                |
+| `lookupTimeout`   | Maximum time to wait for a cluster side host lookup.                                                                                                      | [duration][go-duration] [string][yaml-str]   | 4 seconds                                           |
+
+Here is an example values.yaml:
+```yaml
+client:
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    localIP: 8.8.8.8
+    lookupTimeout: 30s
+```
+
+### Routing
+
+#### AlsoProxySubnets
+
+When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
+All connections to addresses within those subnets will be dispatched to the cluster.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+```yaml
+client:
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### NeverProxySubnets
+
+When using `neverProxySubnets`, you provide a list of subnets. These will never be routed via the TUN device,
+even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+telepresence connects is the route they will keep.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing precedence applies:
+usually the most specific route will win.
+
+So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:
+
+```yaml
+neverProxySubnets: [10.0.0.0/16]
+alsoProxySubnets: [10.0.5.0/24]
+```
+
+Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:
+
+```yaml
+alsoProxySubnets: [10.0.0.0/16]
+neverProxySubnets: [10.0.5.0/24]
+```
+
+Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.
+
+## Local Overrides
+
+In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
+There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
+that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.
+
+### Config for all clusters
+Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
+The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
+The definitions of these values are identical to those values in the `client` config above.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+cloud:
+  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+grpc:
+  maxReceiveSize: 10Mi
+telepresenceAPI:
+  port: 9980
+```
+
+
+## Workstation Per-Cluster Configuration
+
+Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file.
+It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
+that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
+An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.
+
+### Values
+
+The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
+
+Example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+        dns:
+          include-suffixes: [.private]
+          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
+          local-ip: 8.8.8.8
+          lookup-timeout: 30s
+        never-proxy: [10.0.0.0/16]
+        also-proxy: [10.0.5.0/24]
+  name: example-cluster
+```
+
+#### Manager
+
+This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
+the Traffic Manager. When it differs from the default, that setting needs to be configured in the workstation's kubeconfig for the cluster.
+
+The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+        - name: telepresence.io
+          extension:
+            manager:
+              namespace: staging
+    name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence/2.11/reference/dns.md b/docs/telepresence/2.11/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/2.11/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name alone.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` alone. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-ip>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot reach the service by short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>Emoji Vote</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+      Intercept name    : web
+      State             : ACTIVE
+      Workload kind     : Deployment
+      Destination       : 127.0.0.1:8080
+      Volume Mount Point: /tmp/telfs-166119801
+      Intercepting      : HTTP requests that match all headers:
+        'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  <!DOCTYPE html>
+  <html>
+    <head>
+      <meta charset="UTF-8">
+      <title>Emoji Vote</title>
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.11/reference/docker-run.md b/docs/telepresence/2.11/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.11/reference/docker-run.md
@@ -0,0 +1,31 @@
+---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept <service-name> --port <port> --docker-run -- <image>`
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `--env-file <file>` Loads the intercepted environment
+- `--name intercept-<name>-<namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-p <port>` The local port for the intercept and the container port
+- `-v <mounts>` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info
diff --git a/docs/telepresence/2.11/reference/environment.md b/docs/telepresence/2.11/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.11/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code for the intercepted service running on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do have it set can instead filter for messages with a matching `x-intercept-id` header.
This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence/2.11/reference/inside-container.md b/docs/telepresence/2.11/reference/inside-container.md
new file mode 100644
index 000000000..637e0cdfd
--- /dev/null
+++ b/docs/telepresence/2.11/reference/inside-container.md
@@ -0,0 +1,37 @@
+# Running Telepresence inside a container
+
+It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the Traffic Manager, or even to work with different clusters simultaneously.
+
+## Building the container
+
+Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:
+
+```Dockerfile
+# Dockerfile with telepresence and its prerequisites
+FROM alpine:3.13
+
+# Install Telepresence prerequisites
+RUN apk add --no-cache curl iproute2 sshfs
+
+# Download and install the telepresence binary
+RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
+    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
+```
+In order to build the container, do this in the same directory as the `Dockerfile`:
+```
+$ docker build -t tp-in-docker .
+```
+
+## Running the container
+
+Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.
+
+The command to run the container can look like this:
+```bash
+$ docker run \
+  --cap-add=NET_ADMIN \
+  --device /dev/net/tun:/dev/net/tun \
+  --network=host \
+  -v ~/.kube/config:/root/.kube/config \
+  -it --rm tp-in-docker
+```
diff --git a/docs/telepresence/2.11/reference/intercepts/cli.md b/docs/telepresence/2.11/reference/intercepts/cli.md
new file mode 100644
index 000000000..0acd1505d
--- /dev/null
+++ b/docs/telepresence/2.11/reference/intercepts/cli.md
@@ -0,0 +1,314 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Configuring intercept using CLI
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option. When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../environment/) for more details.
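+
+For example, assuming the `hello` workload from above, a command like the
+following (the filename `hello.env` is just an illustration) would write the
+pod's environment to a local file while creating the intercept:
+
+```shell
+telepresence intercept hello --namespace myns --port 9000 --env-file hello.env
+```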
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port>
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+Using Deployment <deployment-name>
+intercepted
+    Intercept name: <full-name-of-intercept>
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:<local-TCP-port>
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<full-name-of-intercept>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster-ip>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <laptop-username>@<laptop-name>
+```
+
+Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.
+
+| Flag             | Description                                                        | Required |
+|------------------|--------------------------------------------------------------------|----------|
+| `--ingress-host` | The IP address for the ingress                                     | yes      |
+| `--ingress-port` | The port for the ingress                                           | yes      |
+| `--ingress-tls`  | Whether TLS should be used                                         | no       |
+| `--ingress-l5`   | Whether a different IP address should be used in request headers   | no       |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself. To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept <name-of-intercept> --port=<local-port>:<service-port-identifier>
+Using Deployment <name-of-deployment>
+intercepted
+    Intercept name         : <full-name-of-intercept>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above and it will change which
+service port is being intercepted.
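+
+As a concrete sketch (the service and port names here are hypothetical),
+intercepting the `http` port of a service named `store` and forwarding it to
+local port 8080 would look like this:
+
+```shell
+telepresence intercept store --port 8080:http
+```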
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag. So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout-<generated-hash> --port <local-port> --service echo-stable
+Using ReplicaSet echo-rollout-<generated-hash>
+intercepted
+    Intercept name    : echo-rollout-<generated-hash>
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Intercepting multiple ports
+
+It is possible to intercept more than one service and/or service port that are using the same workload. You do this
+by creating more than one intercept that identifies the same workload using the `--workload` flag.
+
+Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
+targeting the same `multi-echo` deployment.
+
+```console
+$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-http
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8080
+    Service Port Identifier: http
+    Volume Mount Point     : /tmp/telfs-893700837
+    Intercepting           : all TCP requests
+    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-grpc
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8443
+    Service Port Identifier: grpc
+    Volume Mount Point     : /tmp/telfs-1277723591
+    Intercepting           : all TCP requests
+    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept <name-of-intercept> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
+Using Deployment <name-of-deployment>
+intercepted
+    Intercept name         : <full-name-of-intercept>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod=<port1> --to-pod=<port2>`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+
+
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE
+
+
+
+Intercepting headless services without a selector is not supported.
+
+
+## Sharing intercepts with teammates
+
+Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You
+can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts),
+picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. The intercept
+command will then be easily accessible to all your teammates. Note that this requires the free enhanced
+client to be installed and for you to be logged in (`telepresence login`).
+
+To instantiate an intercept based on a saved intercept, simply run
+`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a
+saved intercept in Ambassador Cloud and will use it if found; otherwise, an error will be returned.
+
+Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).
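+
+For instance, if a teammate created a saved intercept under the (hypothetical)
+name `echo-dev`, anyone on the team could replay it with:
+
+```shell
+telepresence intercept --use-saved-intercept echo-dev
+```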
diff --git a/docs/telepresence/2.11/reference/intercepts/index.md b/docs/telepresence/2.11/reference/intercepts/index.md
new file mode 100644
index 000000000..5b317aeec
--- /dev/null
+++ b/docs/telepresence/2.11/reference/intercepts/index.md
@@ -0,0 +1,61 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, the Telepresence Traffic Manager ensures
+that a Traffic Agent has been injected into the intercepted workload.
+The injection is triggered by a Kubernetes Mutating Webhook and will
+only happen once. The Traffic Agent is responsible for redirecting
+intercepted traffic to the developer's workstation.
+
+An intercept is either global or personal.
+
+### Global intercept
+This intercept will intercept all `tcp` and/or `udp` traffic to the
+intercepted service and send all of that traffic down to the developer's
+workstation. This means that a global intercept will affect all users of
+the intercepted service.
+
+### Personal intercept
+This intercept will intercept specific HTTP requests, allowing other HTTP
+requests through to the regular service. The selection is based on HTTP
+headers or paths, and allows for intercepts which only intercept traffic
+tagged as belonging to a given developer.
+
+There are two ways of configuring an intercept:
+- one from the [CLI](./cli) directly
+- one from an [Intercept Specification](./specs)
+
+## Intercept behavior when using single-user versus team mode
+
+Switching the Traffic Manager from `single-user` mode to `team` mode changes
+the Telepresence defaults in two ways.
+
+
+First, in team mode, Telepresence will require that the user is logged in to
+Ambassador Cloud, or is using an api-key. Team mode also causes Telepresence
+to default to a personal intercept using `--http-header=auto --http-path-prefix=/`.
+Personal intercepts are important for working in a shared cluster with teammates,
+and are important for the preview URL functionality below. See `telepresence intercept --help`
+for information on using the `--http-header` and `--http-path-xxx` flags to
+customize which requests are intercepted.
+
+Secondly, team mode causes Telepresence to default to `--preview-url=true`. This
+tells Telepresence to take advantage of Ambassador Cloud to create a preview URL
+for this intercept, creating a shareable URL that automatically sets the
+appropriate headers to have requests coming from the preview URL be
+intercepted.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+
+
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+
+
diff --git a/docs/telepresence/2.11/reference/intercepts/manual-agent.md b/docs/telepresence/2.11/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.11/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it.
There might be some situations where this approach cannot be used, such
+as very strict company security policies preventing it.
+
+
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable;
+try the Mutating Webhook before proceeding.
+
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file have
+the same name as the service, and no extension:
+
+```console
+$ telepresence genyaml config --workload my-service -o /tmp/my-service
+$ cat /tmp/my-service
+agentImage: docker.io/datawire/tel2:2.7.0
+agentName: my-service
+containers:
+- Mounts: null
+  envPrefix: A_
+  intercepts:
+  - agentPort: 9900
+    containerPort: 8080
+    protocol: TCP
+    serviceName: my-service
+    servicePort: 80
+    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
+  mountPoint: /tel_app_mounts/echo-container
+  name: echo-container
+logLevel: info
+managerHost: traffic-manager.ambassador
+managerPort: 8081
+manual: true
+namespace: default
+workloadKind: Deployment
+workloadName: my-service
+```
+
+Next, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
+$ cat /tmp/my-service-agent.yaml
+args:
+- agent
+env:
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: status.podIP
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+- mountPath: /tel_app_exports
+  name: export-volume
+```
+
+Next, generate the init-container:
+
+```console
+$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
+$ cat /tmp/my-service-init.yaml
+args:
+- agent-init
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: tel-agent-init
+resources: {}
+securityContext:
+  capabilities:
+    add:
+    - NET_ADMIN
+volumeMounts:
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+```
+
+Next, generate the YAML for the volumes:
+
+```console
+$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
+$ cat /tmp/my-service-volume.yaml
+- downwardAPI:
+    items:
+    - fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.annotations
+      path: annotations
+  name: traffic-annotations
+- configMap:
+    items:
+    - key: my-service
+      path: config.yaml
+    name: telepresence-agents
+  name: traffic-config
+- emptyDir: {}
+  name: export-volume
+
+```
+
+
+Enter `telepresence genyaml
+
+### 2. Creating (or updating) the configmap
+
+The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
+modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:
+
+```console
+$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
+```
+
+If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.
+
+### 3. Injecting the YAML into the Deployment
+
+You now need to edit the `Deployment` YAML to add the container, init-container, and volumes you generated. These are placed as elements
+of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes`, respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation
+`telepresence.getambassador.io/manually-injected: "true"`. These changes should look like the following:
+
+```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: "my-service"
+   labels:
+     service: my-service
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       service: my-service
+   template:
+     metadata:
+       labels:
+         service: my-service
++      annotations:
++        telepresence.getambassador.io/manually-injected: "true"
+     spec:
+       containers:
+       - name: echo-container
+         image: jmalloc/echo-server
+         ports:
+         - containerPort: 8080
+         resources: {}
++      - args:
++        - agent
++        env:
++        - name: _TEL_AGENT_POD_IP
++          valueFrom:
++            fieldRef:
++              apiVersion: v1
++              fieldPath: status.podIP
++        image: docker.io/datawire/tel2:2.7.0-beta.12
++        name: traffic-agent
++        ports:
++        - containerPort: 9900
++          protocol: TCP
++        readinessProbe:
++          exec:
++            command:
++            - /bin/stat
++            - /tmp/agent/ready
++        resources: { }
++        volumeMounts:
++        - mountPath: /tel_pod_info
++          name: traffic-annotations
++        - mountPath: /etc/traffic-agent
++          name: traffic-config
++        - mountPath: /tel_app_exports
++          name: export-volume
++      initContainers:
++      - args:
++        - agent-init
++        image: docker.io/datawire/tel2:2.7.0-beta.12
++        name: tel-agent-init
++        resources: { }
++        securityContext:
++          capabilities:
++            add:
++            - NET_ADMIN
++        volumeMounts:
++        - mountPath: /etc/traffic-agent
++          name: traffic-config
++      volumes:
++      - downwardAPI:
++          items:
++          - fieldRef:
++              apiVersion: v1
++              fieldPath: metadata.annotations
++            path: annotations
++        name: traffic-annotations
++      - configMap:
++          items:
++          - key: my-service
++            path: config.yaml
++          name: telepresence-agents
++        name: traffic-config
++      - emptyDir: { }
++        name: export-volume
+```
diff --git a/docs/telepresence/2.11/reference/intercepts/specs.md b/docs/telepresence/2.11/reference/intercepts/specs.md
new file mode 100644
index 000000000..f1565af99
--- /dev/null
+++ b/docs/telepresence/2.11/reference/intercepts/specs.md
@@ -0,0 +1,333 @@
+# Configuring intercept using specifications
+
+This page references the different options available to the telepresence intercept specification.
+
+With Telepresence, you can provide a file to define how an intercept should work.
+
+## Specification
+Your intercept specification is where you create a standard, easy-to-use configuration to run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic.
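+
+For orientation, here is a minimal specification assembled from the examples in the sections below; it names a connection context, one Docker-based handler, and one intercepted workload (all field names and values are the ones used in this page's examples):
+
+```yaml
+name: echo-server-spec
+connection:
+  context: "shared-cluster"
+handlers:
+  - name: echo-server
+    environment:
+      - name: PORT
+        value: "8080"
+    ports:
+      - 8080
+    docker:
+      image: jmalloc/echo-server:latest
+workloads:
+  - name: echo-server
+    intercepts:
+      - headers:
+          - name: x-intercept-id
+            value: foo
+        port: 8080
+        handler: echo-server
+```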
+
+There are many ways to configure your specification to suit your needs. The table below shows the possible options within your specification,
+and you can see the spec's schema, with all available options and formats, [here](#ide-integration).
+
+| Options                         | Description                                                                                              |
+|---------------------------------|----------------------------------------------------------------------------------------------------------|
+| [name](#name)                   | Name of the specification.                                                                               |
+| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete.  |
+| [connection](#connection)       | Connection properties to use when Telepresence connects to the cluster.                                  |
+| [workloads](#workloads)         | Remote workloads that are intercepted, keyed by workload name.                                           |
+| [handlers](#handlers)           | Local processes to handle traffic and/or perform setup.                                                  |
+
+
+### Name
+The name is optional. If you don't specify a name, the filename of the specification file is used.
+
+```yaml
+name: echo-server-spec
+```
+
+### Connection
+
+The connection option is used to define how Telepresence connects to your cluster.
+
+```yaml
+connection:
+  context: "shared-cluster"
+  mappedNamespaces:
+    - "my-app"
+```
+
+You can pass the most common parameters from the `telepresence connect` command (`telepresence connect --help`) using a camel case format.
+
+Some of the most commonly used options include:
+
+| Options          | Type        | Format                  | Description                                              |
+|------------------|-------------|-------------------------|----------------------------------------------------------|
+| context          | string      |                         | The Kubernetes context to use                            |
+| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with  |
+
+
+### Handlers
+
+A handler is code running locally.
+
+It can receive traffic for an intercepted service, or it can set up prerequisites to run before/after the intercept itself.
+
+When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third party service, ...) running on your machine.
+A handler can be a Docker container, or an application running natively.
+
+The sample below creates an intercept handler, gives it the name `echo-server`, and runs it in a Docker container. The container will
+automatically have access to the ports, environment, and mounted directories of the intercepted container.
+
+
+The ports field is important for an intercept handler running in Docker: it indicates which ports should be exposed to the host. If you want to access the handler locally (for example, to attach a debugger to your container), this field must be provided.
+
+
+```yaml
+handlers:
+  - name: echo-server
+    environment:
+      - name: PORT
+        value: "8080"
+    ports:
+      - 8080
+    docker:
+      image: jmalloc/echo-server:latest
+```
+
+If you don't want to use Docker containers, you can still configure your handlers to start via a regular script.
+The snippet below shows how to create a handler called echo-server that sets an environment variable of `PORT=8080`
+and starts the application.
+
+
+```yaml
+handlers:
+  - name: echo-server
+    environment:
+      - name: PORT
+        value: "8080"
+    script:
+      run: bin/echo-server
+```
+
+Keep in mind that an empty handler is still a valid handler.
This is sometimes useful when you want to, for example,
+simulate an intercepted service going down:
+
+```yaml
+handlers:
+  - name: no-op
+```
+
+The table below defines the parameters that can be used within the handlers section.
+
+| Options              | Type     | Format                 | Description                                                                      |
+|----------------------|----------|------------------------|------------------------------------------------------------------------------------|
+| name                 | string   | [a-zA-Z][a-zA-Z0-9_-]* | The name of your handler, used by the intercepts to reference it                |
+| environment          | map list | N/A                    | Environment variables to define within your handler                             |
+| environment[*].name  | string   | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable                                            |
+| environment[*].value | string   | N/A                    | The value for the environment variable                                          |
+| ports                | int list | N/A                    | The ports which should be exposed to the host                                   |
+| [script](#script)    | map      | N/A                    | Tells the handler to run as a script; mutually exclusive with docker            |
+| [docker](#docker)    | map      | N/A                    | Tells the handler to run as a Docker container; mutually exclusive with script  |
+
+#### Script
+
+The handler's script element defines the parameters:
+
+| Options | Type   | Format              | Description                                                                                                                   |
+|---------|--------|---------------------|---------------------------------------------------------------------------------------------------------------------------------|
+| run     | string | N/A                 | The script to run. Can be multi-line                                                                                          |
+| shell   | string | bash\|zsh\|sh       | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable  |
+
+#### Docker
+The handler's docker element defines the parameters:
+
+| Options | Type        | Format | Description                                                                                           |
+|---------|-------------|--------|-----------------------------------------------------------------------------------------------------------|
+| image   | string      | image  | Defines which image is to be used                                                                     |
+| options | string list | N/A    | Options for docker run [options](https://docs.docker.com/engine/reference/commandline/run/#options)  |
+| command | string      | N/A    | Optional command to run                                                                               |
+| args    | string list | N/A    | Optional command arguments                                                                            |
+
+For additional information on these parameters, please check the Docker [documentation](https://docs.docker.com/engine/reference/commandline/run).
+
+### Prerequisites
+When creating an intercept specification, there is an option to include prerequisites.
+
+Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.
+
+Prerequisites is an array, so it can handle many steps prior to starting your intercept and running your intercept handlers.
+The elements of the `prerequisites` array correspond to [`handlers`](#handlers).
+
+The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts,
+the second will be run after cleaning up the intercepts.
+
+If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.
+
+```yaml
+prerequisites:
+  - create: build-binary
+    delete: rm-binary
+```
+
+
+The table below defines the parameters available within the prerequisites section.
+
+| Options | Description                                        |
+|---------|----------------------------------------------------|
+| create  | The name of a handler to run before the intercept  |
+| delete  | The name of a handler to run after the intercept   |
+
+
+### Workloads
+
+Workloads define the services in your cluster that will be intercepted.
+
+The example below creates an intercept on a service called `echo-server` on port 8080.
+It creates a personal intercept with the header of `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.
+
+```yaml
+workloads:
+  # You can define one or more workload(s)
+  - name: echo-server
+    intercepts:
+      # You can define one or more intercept(s)
+      - headers:
+          - name: x-intercept-id
+            value: foo
+        port: 8080
+        handler: echo-server
+```
+
+This table defines the parameters available within a workload.
+
+
+| Options    | Type                          | Format                  | Description                                        | Default |
+|------------|-------------------------------|-------------------------|----------------------------------------------------|---------|
+| name       | string                        | [a-z][a-z0-9-]*         | Name of the workload to intercept                  | N/A     |
+| namespace  | string                        | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept             | N/A     |
+| intercepts | [intercept](#intercepts) list | N/A                     | The list of intercepts associated to the workload  | N/A     |
+
+#### Intercepts
+This table defines the parameters available for each intercept.
+
+| Options    | Type                   | Format               | Description                                                             | Default        |
+|------------|------------------------|----------------------|-------------------------------------------------------------------------|----------------|
+| enabled    | boolean                | N/A                  | If set to false, disables this intercept.                               | true           |
+| headers    | [header](#header) list |                      | Headers that will filter the intercept.                                 | Auto generated |
+| service    | string                 | [a-z][a-z0-9-]{1,62} | Name of service to intercept                                            | N/A            |
+| localPort  | integer\|string        | 0-65535              | The local port to which the intercepted traffic is routed               | N/A            |
+| port       | integer                | 0-65535              | The port the service in the cluster is running on                       | N/A            |
+| pathPrefix | string                 | N/A                  | Path prefix filter for the intercept. Defaults to "/"                   | /              |
+| previewURL | boolean                | N/A                  | Determines if a preview URL should be created                           | true           |
+| banner     | boolean                | N/A                  | Used in the preview URL option; displays a banner on the preview page   | true           |
+
+##### Header
+
+You can define headers to filter the requests which should end up on your machine when intercepting.
+
+| Options | Type   | Format | Description          | Default |
+|---------|--------|--------|----------------------|---------|
+| name    | string | N/A    | Name of the header   | N/A     |
+| value   | string | N/A    | Value of the header  | N/A     |
+
+Telepresence specs also support dynamic headers with **variables**:
+
+```yaml
+intercepts:
+  - headers:
+      - name: test-{{ .Telepresence.Username }}
+        value: "{{ .Telepresence.Username }}"
+```
+
+| Options               | Type   | Description                             |
+|-----------------------|--------|------------------------------------------|
+| Telepresence.Username | string | The name of the user running the spec   |
+
+
+### Running your specification
+After you've written your intercept specification, you will want to run it.
+
+To start your intercept, use this command:
+
+```bash
+telepresence intercept run
+```
+This will validate and run your spec.
If you just want to validate it, you can do so by using this command:
+
+```bash
+telepresence intercept validate
+```
+
+### Using and sharing your specification as a CRD
+
+If you want to share specifications across your team or your organization, you can save them as CRDs inside your cluster.
+
+
+ The Intercept Specification CRD requires Kubernetes 1.22 or higher. If you are using an older cluster, you will
+ need to install using Helm directly and use the --disable-openapi-validation flag.
+
+
+1. Install the CRD object in your cluster (one-time installation):
+
+   ```bash
+   telepresence helm install --crds
+   ```
+
+1. Then you need to deploy the specification in your cluster as a CRD:
+
+   ```yaml
+   apiVersion: getambassador.io/v1alpha1
+   kind: InterceptSpecification
+   metadata:
+     name: my-crd-spec
+     namespace: my-crd-namespace
+   spec:
+     {intercept specification}
+   ```
+
+   So the `echo-server` example looks like this:
+
+   ```bash
+   kubectl apply -f - < # See note below
+```
+
+The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After creating `config.yaml` in your current directory, export the file's location to `KUBECONFIG` by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administering Telepresence
+
+Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfigurations`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator.
The following permissions are needed for the installation and use of Telepresence:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telepresence-admin
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-admin-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "pods/log"]
+    verbs: ["get", "list", "create", "delete", "watch"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "list", "update", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+  - apiGroups: ["apps"]
+    resources: ["deployments", "replicasets", "statefulsets"]
+    verbs: ["get", "list", "update", "create", "delete", "watch"]
+  - apiGroups: ["rbac.authorization.k8s.io"]
+    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "delete"]
+    resourceNames: ["telepresence-agents"]
+  - apiGroups: [""]
+    resources: ["namespaces"]
+    verbs: ["get", "list", "watch", "create"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "create", "list", "delete"]
+  - apiGroups: [""]
+    resources: ["serviceaccounts"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: ["admissionregistration.k8s.io"]
+    resources: ["mutatingwebhookconfigurations"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["list", "get", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-clusterrolebinding
+subjects:
+  - name: telepresence-admin
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-admin-role
+  kind: ClusterRole
+```
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [Helm chart](../../install/helm/).
+
+With `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the Helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user                          # Update value for appropriate user name
+  namespace: ambassador                  # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+# For gather-logs command
+- apiGroups: [""]
+  resources: ["pods/log"]
+  verbs: ["get"]
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["list"]
+# Needed in order to maintain a list of workloads
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["namespaces", "services"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+### Traffic Manager connect permission
+In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
+in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
+```yaml
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traffic-manager-connect
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: traffic-manager-connect
+subjects:
+  - name: telepresence-test-developer
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: traffic-manager-connect
+  kind: Role
+```
+
+## Namespace only telepresence user access
+
+RBAC for multi-tenant scenarios, where multiple dev teams share a single cluster and users are constrained to one or more specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user                          # Update value for appropriate user name
+  namespace: tp-namespace                # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace                # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace                # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user                          # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
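+
+As a quick sanity check, you can verify what a given service account is allowed to do by impersonating it with `kubectl auth can-i`. A brief sketch, using the `tp-user` service account from the namespace-scoped example above (the role grants read access to deployments, but not write access):
+
+```console
+$ kubectl auth can-i list deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
+yes
+$ kubectl auth can-i create deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
+no
+```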
diff --git a/docs/telepresence/2.11/reference/restapi.md b/docs/telepresence/2.11/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence/2.11/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in a `x-telepresence-caller-intercept-id: = ` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active.
+2. The "/healthz" endpoint must respond with OK.
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is always omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence/2.11/reference/routing.md b/docs/telepresence/2.11/reference/routing.md
new file mode 100644
index 000000000..cc88490a0
--- /dev/null
+++ b/docs/telepresence/2.11/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
+[cluster DNS configuration](../cluster-config/#dns).
+
+#### Cluster side DNS lookups
+The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS-records returned from the Telepresence resolver aren't cached (or cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node` while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons why the connection origin matters. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS-query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this solution to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS-resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.
+
+The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this was accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2, this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.11/reference/tun-device.md b/docs/telepresence/2.11/reference/tun-device.md
new file mode 100644
index 000000000..4410f6f3c
--- /dev/null
+++ b/docs/telepresence/2.11/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle` but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in either the client or the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.11/reference/volume.md b/docs/telepresence/2.11/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/2.11/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept --port --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally to `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept --port --mount=true -- /bin/bash
+Using Deployment
+intercepted
+    Intercept name    :
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend paths with the `$TELEPRESENCE_ROOT` environment variable to use the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
diff --git a/docs/telepresence/2.11/reference/vpn.md b/docs/telepresence/2.11/reference/vpn.md
new file mode 100644
index 000000000..91213babc
--- /dev/null
+++ b/docs/telepresence/2.11/reference/vpn.md
@@ -0,0 +1,155 @@
+
+
+# Telepresence and VPNs
+
+## The test-vpn command
+
+You can make use of the `telepresence test-vpn` command to diagnose issues
+with your VPN setup.
+This guides you through a series of steps to figure out if there are
+conflicts between your VPN configuration and [Telepresence](/products/telepresence/).
+
+### Prerequisites
+
+Before running `telepresence test-vpn`, you should ensure that your VPN is
+in split-tunnel mode.
+This means that only traffic that _must_ pass through the VPN is directed
+through it; otherwise, the test results may be inaccurate.
+
+You may need to configure this on both the client and server sides.
+Client-side, taking the Tunnelblick client as an example, you must ensure that
+the `Route all IPv4 traffic through the VPN` tickbox is not enabled:
+
+![Tunnelblick](../images/tunnelblick.png)
+
+Server-side, taking AWS' ClientVPN as an example, you simply have to enable
+split-tunnel mode:
+
+![Modify client VPN Endpoint](../images/split-tunnel.png)
+
+In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently.
+
+### Testing the VPN configuration
+
+To run it, enter:
+
+```console
+$ telepresence test-vpn
+```
+
+The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected, then
+press enter:
+
+```
+Telepresence Root Daemon is already stopped
+Telepresence User Daemon is already stopped
+Please disconnect from your VPN now and hit enter once you're disconnected...
+```
+
+Once it's gathered information about your network configuration without an active connection,
+it will ask you to connect to the VPN:
+
+```
+Please connect to your VPN now and hit enter once you're connected...
+```
+
+It will then connect to the cluster:
+
+
+```
+Launching Telepresence Root Daemon
+Launching Telepresence User Daemon
+Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com)
+Telepresence Root Daemon quitting... done
+Telepresence User Daemon quitting... done
+```
+
+And show you the results of the test:
+
+```
+---------- Test Results:
+❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
+     * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
+     * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
+✅ svc subnet 10.19.0.0/16 is clear of VPN
+
+Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples
+
+Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md
+ Please make sure to add the following to your issue:
+ * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue.
+ * Which VPN client are you using?
+ * Which VPN server are you using?
+ * How is your VPN pushing DNS configuration?
It may be useful to add the contents of /etc/resolv.conf
+```
+
+#### Interpreting test results
+
+##### Case 1: VPN masked by cluster
+
+In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN
+routes:
+
+```
+❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
+     * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
+     * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
+```
+
+This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
+telepresence is connected.
+
+The ideal resolution in this case is to move the pods to a different subnet. This is possible,
+for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
+In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
+to reach hosts inside the VPC's `10.0.0.0/19`.
+
+However, it is not always possible to move the pods to a different subnet.
+In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain
+hosts from being masked.
+This might be particularly important for DNS resolution. In an AWS ClientVPN setup, it is often
+customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):
+
+![Modify Client VPN Endpoint](../images/vpn-dns.png)
+
+If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
+cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values `, add a `client.routing`
+entry like so:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 10.0.0.2/32
+```
+
+##### Case 2: Cluster masked by VPN
+
+In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR
+that the VPN routes:
+
+```
+❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
+     * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
+     * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
+```
+
+Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
+connected.
+
+As with the first case, the ideal resolution is to move the pods away, but this may not always
+be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR
+(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity.
+One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites)
+section for more on split-tunneling).
+
+Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
+to use never-proxy rules to whitelist hosts in the VPN:
+
+```
+❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
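+
+As in Case 1, the remedy is a never-proxy entry in the Helm values file; a sketch reusing the `client.routing` keys shown above (the `10.0.0.2/32` host is illustrative):
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 10.0.0.2/32 # e.g. the VPN's DNS server
+```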
diff --git a/docs/telepresence/2.11/release-notes/no-ssh.png b/docs/telepresence/2.11/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.11/release-notes/run-tp-in-docker.png b/docs/telepresence/2.11/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.2.png b/docs/telepresence/2.11/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.11/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.11/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.11/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.11/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.11/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.11/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.11/release-notes/tunnel.jpg b/docs/telepresence/2.11/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.11/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.11/releaseNotes.yml b/docs/telepresence/2.11/releaseNotes.yml new file mode 100644 index 000000000..bd0cfb814 --- /dev/null +++ b/docs/telepresence/2.11/releaseNotes.yml @@ -0,0 +1,2094 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
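+# As a hedged illustration of the schema described in the comments above, a
+# minimal entry might look like the following (the version, date, and text
+# are placeholders, not a real release):
+#
+# items:
+#   - version: 9.9.9
+#     date: "2099-01-01"
+#     notes:
+#       - type: bugfix
+#         title: Example fix
+#         body: >-
+#           Two or three sentences describing the change and why it is
+#           noteworthy, written as HTML.
+#         docs: reference/example
+#         image: example.png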
+ +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.11.1 + date: "2023-02-27" + notes: + - type: bugfix + title: Multiple architectures + docs: https://github.com/telepresenceio/telepresence/issues/3043 + body: >- + The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now + works for both amd64 and arm64. + - type: bugfix + title: Ambassador agent Helm chart duplicates + docs: https://github.com/telepresenceio/telepresence/issues/3046 + body: >- + Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD. + - version: 2.11.0 + date: "2023-02-22" + notes: + - type: feature + title: Intercept specification + body: >- + It is now possible to leverage the intercept specification to spin up your environment without extra tools. + - type: feature + title: Support for arm64 (Apple Silicon) + body: >- + The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as + multi-architecture images and can run natively on both linux/amd64 and linux/arm64. + - type: bugfix + title: Connectivity check can break routing in VPN setups + docs: https://github.com/telepresenceio/telepresence/issues/3006 + body: >- + The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently, + it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity. + - type: bugfix + title: VPN routes not detected by telepresence test-vpn on macOS + docs: https://github.com/telepresenceio/telepresence/pull/3038 + body: >- + The telepresence test-vpn command did not include routes of type link when checking for subnet + conflicts. + - version: 2.10.5 + date: "2023-02-06" + notes: + - type: change + title: mTLS secrets mount + body: >- + mTLS Secrets will now be mounted into the traffic agent, instead of being read by it from the API. + This is only applicable to users of team mode and the proprietary agent. + docs: reference/cluster-config#tls + - type: bugfix + title: Daemon reconnection fix + body: >- + Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost. + - version: 2.10.4 + date: "2023-01-20" + notes: + - type: bugfix + title: Backward compatibility restored + body: >- + Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older. + - type: bugfix + title: Saved intercepts now work with preview URLs. + body: >- + Preview URLs are now included/excluded correctly when using saved intercepts. + - version: 2.10.3 + date: "2023-01-17" + notes: + - type: bugfix + title: Saved intercepts + body: >- + Fixed an issue that caused saved intercepts to not be completely interpreted by telepresence. + - type: bugfix + title: Traffic manager restart during upgrade to team mode + body: >- + Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode. + docs: https://github.com/telepresenceio/telepresence/pull/2979 + - version: 2.10.2 + date: "2023-01-16" + notes: + - type: bugfix + title: Version consistency in Helm commands + body: >- + Ensure that CLI and user-daemon binaries are the same version when running
telepresence helm install + or telepresence helm upgrade. + docs: https://github.com/telepresenceio/telepresence/pull/2975 + - type: bugfix + title: Saved intercept flag + body: >- + Fixed an issue that prevented the --use-saved-intercept flag from working. + - version: 2.10.1 + date: "2023-01-11" + notes: + - type: bugfix + title: Release Process + body: >- + Fixed a regex in our release process that prevented 2.10.0 promotion. + - version: 2.10.0 + date: "2023-01-11" + notes: + - type: feature + title: Team Mode and Single User Mode + body: >- + The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts. + - type: feature + title: Added `install` and `upgrade` Subcommands to `telepresence helm` + body: >- + The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags. + - type: feature + title: Added Image Pull Secrets to Helm Chart + body: >- + Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`. + - type: change + title: Rename Configmap + body: >- + The configmap `traffic-manager-clients` has been renamed to `traffic-manager`. + - type: change + title: Webhook Namespace Field + body: >- + If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`. + docs: https://github.com/telepresenceio/telepresence/issues/2913 + - type: change + title: Rename Webhook + body: >- + The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster. + - type: change + title: OSS Binaries + body: >- + The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client. + docs: https://github.com/telepresenceio/telepresence/pull/2943 + - type: bugfix + title: Fix Panic Using `--docker-run` + body: >- + Telepresence no longer panics when `--docker-run` is combined with `--name ` instead of `--name=`. + docs: https://github.com/telepresenceio/telepresence/issues/2953 + - type: bugfix + title: Stop assuming cluster domain + body: >- + Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc". + docs: https://github.com/telepresenceio/telepresence/pull/2959 + - type: bugfix + title: Uninstall hook timeout + body: >- + A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager. + docs: https://github.com/telepresenceio/telepresence/pull/2937 + - type: bugfix + title: Uninstall hook check + body: >- + The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
+ docs: https://github.com/telepresenceio/telepresence/issues/2954 + - version: 2.9.5 + date: "2022-12-08" + notes: + - type: security + title: Update to golang v1.19.4 + body: >- + Apply security updates by updating to golang v1.19.4. + docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU + - type: bugfix + title: GCE authentication + body: >- + Fixed a regression introduced in 2.9.3 that prevented use of GCE authentication unless a config element was also present in the GCE configuration in the kubeconfig. + - version: 2.9.4 + date: "2022-12-02" + notes: + - type: feature + title: Subnet detection strategy + body: >- + The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs. + - type: bugfix + title: Fix `--set` flag for `telepresence helm install` + body: >- + The `telepresence helm` command's `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag (see the commented sketch below). + - type: bugfix + title: Fix `agent.image` setting propagation + body: >- + Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file. + - type: bugfix + title: Delay file sharing until needed + body: >- + Initialization of FTP-type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected. + - type: bugfix + title: Cleanup on `telepresence quit` + body: >- + The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called. + - type: bugfix + title: Watch `config.yml` without panic + body: >- + The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active. + - type: bugfix + title: Thread safety + body: >- + Fixed a race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession. + - version: 2.9.3 + date: "2022-11-23" + notes: + - type: feature + title: Helm options for `livenessProbe` and `readinessProbe` + body: >- + The Helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond. + - type: change + title: Improved network communication + body: >- + The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon. + - type: bugfix + title: Root daemon debug logging + body: >- + Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon. + - type: bugfix + title: Multivalue flag value propagation + body: >- + Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly. + - type: bugfix + title: Root daemon stability + body: >- + The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. + - type: bugfix + title: Base DNS resolver + body: >- + Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied. + - version: 2.9.2 + date: "2022-11-16" + notes: + - type: bugfix + title: Fix panic + body: >- + Fix panic when connecting to an older traffic-manager. + - type: bugfix + title: Fix header flag + body: >- + Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
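+# A usage sketch for the 2.9.4 `--set` fix above, assuming standard Helm type
+# coercion (the chart key is taken from the 2.9.4 notes; the command shape is
+# an assumption, not an excerpt from the docs):
+#
+#   telepresence helm install --set intercept.useFtp=true
+#
+# Under Helm semantics the value true is parsed as a boolean rather than the
+# string "true", which is the behavior this bugfix restored.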
+ - version: 2.9.1 + date: "2022-11-16" + notes: + - type: bugfix + title: Connect failures due to missing auth provider. + body: >- + The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed. + - version: 2.9.0 + date: "2022-11-15" + notes: + - type: feature + title: New command to view client configuration. + body: >- + A new telepresence config view command was added to make it easy to view the current + client configuration. + docs: new-in-2.9#view-the-client-configuration + - type: feature + title: Configure Clients using the Helm chart. + body: >- + The traffic-manager can now configure all clients that connect through the client: map in + the values.yaml file. + docs: reference/cluster-config#client-configuration + - type: feature + title: The Traffic manager version is more visible. + body: >- + The command telepresence version will now include the version of the traffic manager when + the client is connected to a cluster. + - type: feature + title: Command output in YAML format. + body: >- + The global --output flag now accepts both yaml and json. + docs: new-in-2.9#yaml-output + - type: change + title: Deprecated status command flag + body: >- + The telepresence status --json flag is deprecated. Use telepresence status --output=json instead. + - type: bugfix + title: Unqualified service name resolution in docker. + body: >- + Unqualified service names now resolve correctly from the Docker container when using telepresence intercept --docker-run. + docs: https://github.com/telepresenceio/telepresence/issues/2870 + - type: bugfix + title: Output no longer mixes plaintext and json. + body: >- + Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon", + or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted + output when using --output=json. + docs: https://github.com/telepresenceio/telepresence/issues/2854 + - type: bugfix + title: No more panic when invalid port names are detected. + body: >- + A `telepresence intercept` of services with an invalid port name no longer causes a panic. + docs: https://github.com/telepresenceio/telepresence/issues/2880 + - type: bugfix + title: Proper errors for bad output formats. + body: >- + An attempt to use an invalid value for the global --output flag now renders a proper error message. + - type: bugfix + title: Remove lingering DNS config on macOS. + body: >- + Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are + now removed when a new root daemon starts. + - version: 2.8.5 + date: "2022-11-02" + notes: + - type: security + title: CVE-2022-41716 + body: >- + Updated Golang to 1.19.3 to address CVE-2022-41716. + - version: 2.8.4 + date: "2022-11-02" + notes: + - type: bugfix + title: Release Process + body: >- + This release resulted in changes to our release process. + - version: 2.8.3 + date: "2022-10-27" + notes: + - type: feature + title: Ability to disable global intercepts. + body: >- + Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal. + docs: https://github.com/telepresenceio/telepresence/issues/2140 + - type: feature + title: Configurable mutating webhook port + body: >- + The port used for the mutating webhook can be configured using the Helm chart setting + agentInjector.webhook.port.
+ docs: install/helm + - type: change + title: Mutating webhook port defaults to 443 + body: >- + The default port for the mutating webhook is now 443. It used to be 8443. + - type: change + title: Agent image configuration mandatory in air-gapped environments. + body: >- + The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is + unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart. + - type: bugfix + title: Can now connect to non-helm installs + body: >- + telepresence connect now works as long as the traffic manager is installed, even if + it wasn't installed via <code>helm install</code>. + docs: https://github.com/telepresenceio/telepresence/issues/2824 + - type: bugfix + title: check-vpn crash fixed + body: >- + telepresence check-vpn no longer crashes when the daemons don't start properly. + - version: 2.8.2 + date: "2022-10-15" + notes: + - type: bugfix + title: Reinstate 2.8.0 + body: >- + There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated. + - version: 2.8.1 + date: "2022-10-14" + notes: + - type: bugfix + title: Rollback 2.8.0 + body: >- + Rollback 2.8.0 while we investigate an issue with Ambassador Cloud. + - version: 2.8.0 + date: "2022-10-14" + notes: + - type: feature + title: Improved DNS resolver + body: >- + The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME, + MX, NS, PTR, SRV, and TXT. + docs: reference/dns + - type: feature + title: New `client` structure in Helm chart + body: >- + A new client struct was added to the Helm chart. It contains a connectionTTL that controls + how long the traffic manager will retain a client connection without seeing any sign of life from the client. + docs: reference/cluster-config#Client-Configuration + - type: feature + title: Include and exclude suffixes configurable using the Helm chart. + body: >- + A dns element was added to the client struct in the Helm chart. It contains includeSuffixes and + excludeSuffixes values that control which names the DNS resolver in the client will delegate to + the cluster. + docs: reference/cluster-config#DNS + - type: feature + title: Configurable traffic-manager API port + body: >- + The API port used by the traffic-manager is now configurable using the Helm chart value apiPort. + The default port is 8081. + docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence + - type: feature + title: Envoy server and admin port configuration. + body: >- + A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and + admin port of the Envoy proxy running in the enhanced traffic-agent can be configured. + docs: reference/cluster-config#Envoy-Configuration + - type: change + title: Helm chart `dnsConfig` moved to `client.routing`. + body: >- + The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets + and neverProxySubnets can now be found under routing in the client struct. + docs: reference/cluster-config#Routing + - type: change + title: Helm chart `agentInjector.agentImage` moved to `agent.image`. + body: >- + The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but + retained for backward compatibility.
+ docs: reference/cluster-config#Image-Configuration + - type: change + title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`. + body: >- + The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old + value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Application-Protocol-Selection + - type: change + title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed. + body: >- + The Helm chart values dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed because + they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to + dedicate an IP for its local resolver. + - type: change + title: Quit daemons with `telepresence quit -s` + body: >- + The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with a single option `-s`, which will + quit both the root daemon and the user daemon. + - type: bugfix + title: Environment variable interpolation in pods now works. + body: >- + Environment variable interpolation now works for all definitions that are copied from pod containers + into the injected traffic-agent container. + - type: bugfix + title: Early detection of namespace conflict + body: >- + An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation + is detected early and prohibited instead of resulting in failing DNS lookups later on. + - type: bugfix + title: Annoying log message removed + body: >- + Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason + is normal context cancellation. + - type: bugfix + title: Single name DNS resolution in Docker on Linux host + body: >- + Single-label names now resolve correctly when using Telepresence in Docker on a Linux host. + - type: bugfix + title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`. + body: >- + The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStrategy). + - version: 2.7.6 + date: "2022-09-16" + notes: + - type: feature + title: Helm chart resource entries for injected agents + body: >- + The resources for the traffic-agent container and the optional init container can be + specified in the Helm chart using the resources and initResources fields + of agentInjector.agentImage. + - type: feature + title: Cluster event propagation when injection fails + body: >- + When the traffic-manager fails to inject a traffic-agent, the cause for the failure is + detected by reading the cluster events, and propagated to the user. + - type: feature + title: FTP-client instead of sshfs for remote mounts + body: >- + Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running + an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x + and enabled by setting intercept.useFtp to true in the config.yml (see the commented sketch after this version's notes). + - type: change + title: Upgrade of winfsp + body: >- + Telepresence on Windows upgraded winfsp from version 1.10 to 1.11. + - type: bugfix + title: Removal of invalid warning messages + body: >- + Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo + and /proc/self/auxv.
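+# A minimal config.yml sketch for the experimental FTP mounts described in the
+# 2.7.6 notes above (field placement is assumed from that note; the feature is
+# experimental in 2.7.x):
+#
+#   intercept:
+#     useFtp: true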
+ - version: 2.7.5 + date: "2022-09-14" + notes: + - type: change + title: Rollback of release 2.7.4 + body: >- + This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3. + - version: 2.7.4 + date: "2022-09-14" + notes: + - type: change + title: Broken release + body: >- + This release was broken on some platforms. Use 2.7.6 instead. + - version: 2.7.3 + date: "2022-09-07" + notes: + - type: bugfix + title: PTY for CLI commands + body: >- + CLI commands that are executed by the user daemon now use a pseudo TTY. This enables + docker run -it to allocate a TTY and will also give other commands, like bash read, the + same behavior as when executed directly in a terminal. + docs: https://github.com/telepresenceio/telepresence/issues/2724 + - type: bugfix + title: Traffic Manager useless warning silenced + body: >- + The traffic-manager will no longer log numerous warnings saying Issuing a + systema request without ApiKey or InstallID may result in an error. + - type: bugfix + title: Traffic Manager useless error silenced + body: >- + The traffic-manager will no longer log an error saying Unable to derive subnets + from nodes when the podCIDRStrategy is auto and it chooses to instead derive the + subnets from the pod IPs. + - version: 2.7.2 + date: "2022-08-25" + notes: + - type: feature + title: Autocompletion scripts + body: >- + Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish, or powershell. + - type: feature + title: Connectivity check timeout + body: >- + The timeout for the initial connectivity check that Telepresence performs + in order to determine if the cluster's subnets are proxied or not can now be configured + in the config.yml file using timeouts.connectivityCheck (see the commented sketch after this version's notes). The default timeout was + changed from 5 seconds to 500 milliseconds to speed up the actual connect. + docs: reference/config#timeouts + - type: change + title: gather-traces feedback + body: >- + The command telepresence gather-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: upload-traces feedback + body: >- + The command telepresence upload-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: gather-traces tracing + body: >- + The command telepresence gather-traces now traces itself and reports errors with trace gathering. + docs: troubleshooting#distributed-tracing + - type: change + title: CLI log level + body: >- + The cli.log is now logged at the same level as the connector.log. + docs: reference/config#log-levels + - type: bugfix + title: Telepresence --help fixed + body: >- + telepresence --help now works once more even if there's no user daemon running. + docs: https://github.com/telepresenceio/telepresence/issues/2735 + - type: bugfix + title: Stream cancellation when no process intercepts + body: >- + Streams created between the traffic-agent and the workstation are now properly closed + when no interceptor process has been started on the workstation. This fixes a potential problem where + a large number of attempts to connect to a non-existing interceptor would cause stream congestion + and an unresponsive intercept. + - type: bugfix + title: List command excludes the traffic-manager + body: >- + The telepresence list command no longer includes the traffic-manager deployment.
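+# A config.yml sketch for the 2.7.2 connectivity-check timeout above (500ms
+# mirrors the new default mentioned in the note; treat the exact field
+# placement as an assumption based on reference/config#timeouts):
+#
+#   timeouts:
+#     connectivityCheck: 500ms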
+ - version: 2.7.1 + date: "2022-08-10" + notes: + - type: change + title: Reinstate telepresence uninstall + body: >- + Reinstated telepresence uninstall, with the --everything flag deprecated. + - type: change + title: Reduce telepresence helm uninstall + body: >- + telepresence helm uninstall will only uninstall the traffic-manager Helm chart and no longer accepts the --everything, --agent, or --all-agents flags. + - type: bugfix + title: Auto-connect for telepresence intercept + body: >- + telepresence intercept will attempt to connect to the traffic manager before creating an intercept. + - version: 2.7.0 + date: "2022-08-07" + notes: + - type: feature + title: Saved Intercepts + body: >- + Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME. + docs: reference/intercepts#sharing-intercepts-with-teammates + - type: feature + title: Distributed Tracing + body: >- + The Telepresence components now collect OpenTelemetry traces. + Up to 10MB of trace data are available at any given time for collection from + components. telepresence gather-traces is a new command that will collect + all that data and place it into a gzip file, and telepresence upload-traces is + a new command that will push the gzipped data into an OTLP collector. + docs: troubleshooting#distributed-tracing + - type: feature + title: Helm install + body: >- + A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager. + docs: install/manager + - type: feature + title: Ignore Volume Mounts + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string. + - type: feature + title: telepresence pod-daemon + body: >- + The Docker image now contains a new program in addition to + the existing traffic-manager and traffic-agent: the pod-daemon. The + pod-daemon is a trimmed-down version of the user-daemon that is + designed to run as a sidecar in a Pod, enabling CI systems to create + preview deploys. + - type: feature + title: Prometheus support for traffic manager + body: >- + Added Prometheus support to the traffic manager. + - type: change + title: No install on telepresence connect + body: >- + The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error. + docs: install/manager + - type: change + title: Helm Uninstall + body: >- + The command telepresence uninstall has been moved to telepresence helm uninstall. + docs: install/manager + - type: bugfix + title: readOnlyRootFileSystem mounts work + body: >- + Added an emptyDir volume and volume mount under /tmp on the agent sidecar so that it works with `readOnlyRootFileSystem: true`. + docs: https://github.com/telepresenceio/telepresence/pull/2666 + - version: 2.6.8 + date: "2022-06-23" + notes: + - type: feature + title: Specify Your DNS + body: >- + The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified. + - type: feature + title: Specify a Fallback DNS + body: >- + Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
+ - type: feature + title: Intercept UDP Ports + body: >- + It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost. + - type: change + title: Additional Helm Values + body: >- + The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs. + - type: bugfix + title: Agent Injection Bugfix + body: >- + Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`. + - version: 2.6.7 + date: "2022-06-22" + notes: + - type: bugfix + title: Persistent Sessions + body: >- + The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect. + - type: bugfix + title: DNS Requests + body: >- + Telepresence will no longer forward DNS requests for "wpad" to the cluster. + - type: bugfix + title: Graceful Shutdown + body: >- + The traffic-agent will properly shut down if one of its goroutines errors. + - version: 2.6.6 + date: "2022-06-09" + notes: + - type: bugfix + title: Env Var `TELEPRESENCE_API_PORT` + body: >- + The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly. + - type: bugfix + title: Double Printing `--output json` + body: >- + The --output json global flag no longer outputs multiple objects. + - version: 2.6.5 + date: "2022-06-03" + notes: + - type: feature + title: Helm Option -- `reinvocationPolicy` + body: >- + The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Proxy Certificate + body: >- + The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Agent Injection + body: >- + A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart. + docs: install/helm + - type: change + title: Windows Tunnel Version Upgrade + body: >- + Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1. + - type: change + title: Helm Version Upgrade + body: >- + Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9. + - type: change + title: Kubernetes API Version Upgrade + body: >- + Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1. + - type: feature + title: Flag `--watch` Added to `list` Command + body: >- + Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace. + - type: change + title: Deprecated `images.webhookAgentImage` + body: >- + The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead. + - type: bugfix + title: Default `reinvocationPolicy` Set to Never + body: >- + The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container. + - type: bugfix + title: UDP + body: >- + UDP-based communication with services in the cluster now works as expected.
+ - type: bugfix + title: Telepresence `--help` + body: >- + The command help will only show Kubernetes flags on the commands that support them. + - type: change + title: Error Count + body: >- + Only the errors from the last session will be considered when counting the number of errors in the log after a command failure. + - version: 2.6.4 + date: "2022-05-23" + notes: + - type: bugfix + title: Upgrade RBAC Permissions + body: >- + The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade. + - version: 2.6.3 + date: "2022-05-20" + notes: + - type: bugfix + title: Relative Mount Paths + body: >- + The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon. + - type: bugfix + title: Traffic Agent Config + body: >- + The traffic-agent's configuration now updates automatically when services are added, updated, or deleted. + - type: bugfix + title: Container Injection for Numeric Ports + body: >- + Telepresence will now always inject an initContainer when the service's targetPort is numeric. + - type: bugfix + title: Matching Services + body: >- + Workloads that have several matching services pointing to the same target port are now handled correctly. + - type: bugfix + title: Unexpected Panic + body: >- + A potential race condition causing a panic when closing a DNS connection is now handled correctly. + - type: bugfix + title: Mount Volume Cleanup + body: >- + A container start would sometimes fail because an old directory remained in a mounted temp volume. + - version: 2.6.2 + date: "2022-05-17" + notes: + - type: bugfix + title: Argo Injection + body: >- + Workloads controlled by other workloads, such as Argo Rollouts, are now injected correctly. + - type: bugfix + title: Agent Port Mapping + body: >- + Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod. + - type: bugfix + title: GRPC Max Message Size + body: >- + The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads. + - version: 2.6.1 + date: "2022-05-16" + notes: + - type: bugfix + title: KUBECONFIG environment variable + body: >- + Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly. + - type: bugfix + title: Don't Panic + body: >- + Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0. + - version: 2.6.0 + date: "2022-05-13" + notes: + - type: feature + title: Intercept multiple containers in a pod, and multiple ports per container + body: >- + Telepresence can now intercept multiple services and/or service-ports that connect to the same pod. + docs: new-in-2.6#intercept-multiple-containers-and-ports + - type: feature + title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook + body: >- + The client will no longer modify deployments, replicasets, or statefulsets in order to + inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result, + the client now needs fewer permissions in the cluster.
+ docs: install/upgrade#important-note-about-upgrading-to-2.6.0 + - type: change + title: Automatic upgrade of Traffic Agents + body: >- + When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically. + The mutating webhook will then ensure that their pods will receive an updated Traffic Agent. + docs: new-in-2.6#no-more-workload-modifications + - type: change + title: No default image in the Helm chart + body: >- + The Helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the + traffic-manager will ask Ambassador Cloud for the preferred image. + docs: new-in-2.6#smarter-agent + - type: change + title: Upgrade to Helm version 3.8.1 + body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager. + - type: bugfix + title: Remote mounts will now function correctly with custom securityContext + body: >- + The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed. + - type: bugfix + title: Improved presentation of flags in CLI help + body: The help for commands that accept Kubernetes flags will now display those flags in a separate group. + - type: bugfix + title: Better termination of process parented by intercept + body: >- + Occasionally an intercept will spawn a command using -- on the command line, often in another console. + When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active, + Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed. + - version: 2.5.8 + date: "2022-04-27" + notes: + - type: bugfix + title: Folder creation on `telepresence login` + body: >- + Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands. + - version: 2.5.7 + date: "2022-04-25" + notes: + - type: change + title: RBAC requirements + body: >- + A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used. + - type: bugfix + title: Windows DNS + body: >- + The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times. + - type: bugfix + title: Session TTL and Reconnect + body: >- + A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect. + - version: 2.5.6 + date: "2022-04-18" + notes: + - type: change + title: Fewer Watchers + body: >- + The Telepresence agents watcher will now only watch namespaces that the user has accessed since the last connect. + - type: bugfix + title: More Efficient `gather-logs` + body: >- + The gather-logs command will no longer send any logs through gRPC. + - version: 2.5.5 + date: "2022-04-08" + notes: + - type: change + title: Traffic Manager Permissions + body: >- + The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions. + - type: bugfix + title: Linux DNS Cache + body: >- + The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes. + - type: bugfix + title: Automatic Connect Sync + body: >- + The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+ - type: bugfix + title: Disconnect Reconnect Stability + body: >- + The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect. + - type: bugfix + title: Limit Watched Namespaces + body: >- + The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag. + - type: bugfix + title: Limit Namespaces used in `gather-logs` + body: >- + The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag. + - version: 2.5.4 + date: "2022-03-29" + notes: + - type: bugfix + title: Linux DNS Concurrency + body: >- + The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out. + - type: bugfix + title: Non-Functional Flag + body: >- + The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag. + - type: bugfix + title: Automatically Remove Failed Intercepts + body: >- + Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around. + - type: bugfix + title: Agent UID + body: >- + The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext. + - type: bugfix + title: Gather-Logs Output Filepath + body: >- + Removed a bad concatenation that corrupted the output path of telepresence gather-logs. + - type: change + title: Remove Unnecessary Error Advice + body: >- + Advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command. + - type: bugfix + title: Garbage Collection + body: >- + Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+ - type: bugfix + title: Limit Gathered Logs + body: >- + The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize. + - type: change + title: In-Cluster Checks + body: >- + The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster. + - type: change + title: Expanded Status Command + body: >- + The status command includes the install ID, user ID, account ID, and user email in its result, and can print output as JSON. + - type: change + title: List Command Shows All Intercepts + body: >- + The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces. + - version: 2.5.3 + date: "2022-02-25" + notes: + - type: bugfix + title: TCP Connectivity + body: >- + Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address. + - type: feature + title: Linux Binaries + body: >- + Client-side binaries for the arm64 architecture are now available for Linux. + - version: 2.5.2 + date: "2022-02-23" + notes: + - type: bugfix + title: DNS server bugfix + body: >- + Fixed a bug where Telepresence would use the last server in resolv.conf. + - version: 2.5.1 + date: "2022-02-19" + notes: + - type: bugfix + title: Fix GKE auth issue + body: >- + Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp". + - version: 2.5.0 + date: "2022-02-18" + notes: + - type: feature + title: Intercept specific endpoints + body: >- + The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in + addition to the --http-match flag to filter personal intercepts by the request URL path. + docs: concepts/intercepts#intercepting-a-specific-endpoint + - type: feature + title: Intercept metadata + body: >- + The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST + API endpoint /intercept-info. + docs: reference/restapi#intercept-info + - type: change + title: Client RBAC watch + body: >- + The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC + ClusterRole. + docs: reference/rbac + - type: change + title: Dropped backward compatibility with versions <=2.4.4 + body: >- + Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel + functionality was removed. + - type: change + title: No global networking flags + body: >- + The global networking flags are no longer used and using them will render a deprecation warning unless they are supported by the + command. The subcommands that support networking flags are connect, current-cluster-id, + and genyaml. + - type: bugfix + title: Output of status command + body: >- + The also-proxy and never-proxy subnets are now displayed correctly when using the + telepresence status command. + - type: bugfix + title: SETENV sudo privilege no longer needed + body: >- + Telepresence no longer requires SETENV privileges when starting the root daemon. + - type: bugfix + title: Network device names containing dash + body: >- + Telepresence will now parse device names containing dashes correctly when determining routes that it should never block. + - type: bugfix + title: Linux uses cluster.local as domain instead of search + body: >- + The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using + systemd-resolved.
Instead, it is added as a domain so that names ending with it are routed + to the DNS server. + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version + with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using + the intercept.appProtocolStrategy in the config.yml file. + docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy when selecting the application protocol for personal intercepts in agents injected by the + mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when + communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting + the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for doing unit and integration testing. It is + now easier for contributors to run tests on PRs since maintainers can add an + "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + The user will not be asked to log in or add ingress information when creating an intercept until a check has been + made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes + is properly logged as errors. + - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the + traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil + telepresenceAPI value. + docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. + See the VPN docs for more information on how to use it.
+ docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to + help determine whether messages with a set of headers should be consumed or not from a message queue where the + intercept headers are added to the messages. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value + was removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections. + body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled + connections now behave more like TCP connections, especially when it comes to timeouts. + We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. + This will help disambiguate which service to intercept when a workload is exposed by multiple services, such as can happen with Argo Rollouts. + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags. + docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, + analogous to also-proxy, that defines a set of subnets that + will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches. + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as --swap-deployment can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes. + - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (may happen when the cluster runs in a docker-container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (may happen when the + cluster runs in a docker-container on the client). + docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
+ - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login + will prompt the user to log in again to get a new token. Previously, + the user had to run telepresence quit and telepresence logout + to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. + Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier + for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the + subnet generated from pods/services, Telepresence would proxy traffic + to the API server, which would result in hanging or a failed connection. We now ensure + that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the + pod YAML manifest for all Kubernetes components you are getting logs for + (traffic-manager and/or pods containing a + traffic-agent container). This flag is set to false + by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will + anonymize your pod names + namespaces in the output file. We replace the + sensitive names with simple names (e.g. pod-1, namespace-2) to maintain + relationships between the objects without exposing the real names of your + objects. This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this + terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. + docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a + headless service on whatever port it exposes and get a response from the + intercept.
This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence Kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent can fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a GitHub issue. For more
+ information on usage, run telepresence gather-logs --help.
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets. 
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It will now error correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the cluster environment is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ it now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were incorrectly not written to log files. This has
+ been fixed so logs for Windows should be at parity with the ones on macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many Kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag. 
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls would
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that this no longer occurs.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ All components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) now use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the traffic-manager. Additionally, we
+ now run our test suite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. 
+ docs: install/helm
+
+ - type: change
+ title: Traffic Manager installed via helm
+ body: >-
+ The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+ This change is transparent to the user.
+ A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+ docs: reference/config#timeouts
+
+ - type: change
+ title: traffic-manager gets cluster ID itself instead of via environment variable
+ body: >-
+ The traffic-manager used to get the cluster ID as an environment variable when running
+ telepresence connect or via adding the value in the helm chart. This was
+ clunky, so now the traffic-manager gets the value itself as long as it has permissions
+ to "get" and "list" namespaces (this has been updated in the helm chart).
+ docs: install/helm
+
+ - type: bugfix
+ title: Telepresence now mounts all directories from /var/run/secrets
+ body: >-
+ In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+ We now mount *all* directories in /var/run/secrets, which, for example, includes
+ directories like eks.amazonaws.com used for IRSA tokens.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Max gRPC receive size correctly propagates to all grpc servers
+ body: >-
+ This fixes a bug where the max gRPC receive size was only propagated to some of the
+ grpc servers, causing failures when the message size was over the default.
+ docs: reference/config/#grpc
+
+ - type: bugfix
+ title: Updated our Homebrew packaging to run manually
+ body: >-
+ We made some updates to our script that packages Telepresence for Homebrew so that it
+ can be run manually. This will enable maintainers of Telepresence to run the script manually
+ should we ever need to roll back a release and have latest point to an older version.
+ docs: install/
+
+ - type: bugfix
+ title: Telepresence uses namespace from kubeconfig context on each call
+ body: >-
+ In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+ for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+ when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+ Telepresence will now do so, using the namespace designated by the context on each call.
+
+ - type: bugfix
+ title: Idle outbound TCP connections timeout increased to 7200 seconds
+ body: >-
+ Some users were noticing that their intercepts would start failing after 60 seconds.
+ This was because the keepalive timeout for idle outbound TCP connections was set to 60 seconds; we have
+ now bumped it to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+ - type: bugfix
+ title: Telepresence will automatically remove a socket upon ungraceful termination
+ body: >-
+ When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+ that the process has terminated ungracefully" and imply that they should remove the socket. We've
+ now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+ - type: bugfix
+ title: Fixed user daemon deadlock
+ body: >-
+ Remedied a situation where the user daemon could hang when a user was logged in. 
+
+ - type: bugfix
+ title: Fixed agentImage config setting
+ body: >-
+ The config setting images.agentImage is no longer required to contain the repository, and it
+ will use the value at images.repository.
+ docs: reference/config/#images
+
+ - version: 2.4.0
+ date: "2021-08-04"
+ notes:
+ - type: feature
+ title: Windows Client Developer Preview
+ body: >-
+ There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+ All the same features supported by the macOS and Linux clients are available on Windows.
+ image: telepresence-2.4.0-windows.png
+ docs: install
+
+ - type: feature
+ title: CLI raises helpful messages from Ambassador Cloud
+ body: >-
+ Telepresence can now receive messages from Ambassador Cloud and raise
+ them to the user when they perform certain commands. This enables us
+ to send you messages that may enhance your Telepresence experience when
+ using certain commands. Frequency of messages can be configured in your
+ config.yml.
+ image: telepresence-2.4.0-cloud-messages.png
+ docs: reference/config#cloud
+
+ - type: bugfix
+ title: Improved stability of systemd-resolved-based DNS
+ body: >-
+ When initializing the systemd-resolved-based DNS, the routing domain
+ is set to improve stability in non-standard configurations. This also enables the
+ overriding resolver to do a proper takeover once the DNS service ends.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Fixed an edge case when intercepting a container with multiple ports
+ body: >-
+ When specifying a port of a container to intercept, if there was a container in the
+ pod without ports, it was automatically selected. This has been fixed so we'll only
+ choose the container with "no ports" if there's no container that explicitly matches
+ the port used in your intercept.
+ docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+ - type: bugfix
+ title: $(NAME) references in the agent's environment are now interpolated correctly.
+ body: >-
+ If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+ would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+ - type: bugfix
+ title: Telepresence no longer prints INFO message when there is no config.yml
+ body: >-
+ Fixed a regression that printed an INFO message to the terminal when there wasn't a
+ config.yml present. The config is optional, so this message has been
+ removed.
+ docs: reference/config
+
+ - type: bugfix
+ title: Telepresence no longer panics when using --http-match
+ body: >-
+ Fixed a bug where Telepresence would panic if the value passed to --http-match
+ didn't contain an equal sign. The correct syntax is in the --help
+ string and looks like --http-match=HTTP2_HEADER=REGEX.
+ docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+ - type: bugfix
+ title: Improved subnet updates
+ body: >-
+ The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+ the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+ client and the traffic-manager. This has been fixed so we only send updates when the subnets
+ themselves actually change. 
+ docs: reference/routing/#subnets
+
+ - version: 2.3.7
+ date: "2021-07-23"
+ notes:
+ - type: feature
+ title: Also-proxy in telepresence status
+ body: >-
+ An also-proxy entry in the Kubernetes cluster config will
+ show up in the output of the telepresence status command.
+ docs: reference/config
+
+ - type: feature
+ title: Non-interactive telepresence login
+ body: >-
+ telepresence login now has an
+ --apikey=KEY flag that allows for
+ non-interactive logins. This is useful for headless
+ environments where launching a web-browser is impossible,
+ such as cloud shells, Docker containers, or CI.
+ image: telepresence-2.3.7-newkey.png
+ docs: reference/client/login/
+
+ - type: bugfix
+ title: Mutating webhook injector correctly hides named ports for probes.
+ body: >-
+ The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: telepresence current-cluster-id crash fixed
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+ to crash.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Better UX around intercepts with no local process running
+ body: >-
+ Requests would hang indefinitely when initiating an intercept before you
+ had a local process running. This has been fixed and will result in an
+ Empty reply from server until you start a local process.
+ docs: reference/intercepts
+
+ - type: bugfix
+ title: API keys no longer show as "no description"
+ body: >-
+ New API keys generated internally for communication with
+ Ambassador Cloud no longer show up as "no description" in
+ the Ambassador Cloud web UI. Existing API keys generated by
+ older versions of Telepresence will still show up this way.
+ image: telepresence-2.3.7-keydesc.png
+
+ - type: bugfix
+ title: Fix corruption of user-info.json
+ body: >-
+ Fixed a race condition in which logging in and logging out
+ rapidly could cause memory corruption or corruption of the
+ user-info.json cache file used when
+ authenticating with Ambassador Cloud.
+
+ - type: bugfix
+ title: Improved DNS resolver for systemd-resolved
+ body:
+ Telepresence's systemd-resolved-based DNS resolver is now more
+ stable and, in case it fails to initialize, the overriding resolver
+ will no longer cause general DNS lookup failures when Telepresence defaults to
+ using it.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Faster telepresence list command
+ body:
+ The performance of telepresence list has been improved
+ significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client
+
+ - version: 2.3.6
+ date: "2021-07-20"
+ notes:
+ - type: bugfix
+ title: Fix preview URLs
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused preview
+ URLs to not work.
+
+ - type: bugfix
+ title: Fix subnet discovery
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the Traffic
+ Manager's RoleBinding did not correctly reference
+ the traffic-manager Role, preventing
+ subnet discovery from working correctly.
+ docs: reference/rbac/
+
+ - type: bugfix
+ title: Fix root-user configuration loading
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the root daemon
+ did not correctly read the configuration file, ignoring the
+ user's configured log levels and timeouts. 
+ docs: reference/config/
+
+ - type: bugfix
+ title: Fix a user daemon crash
+ body: >-
+ Fixed an issue that could cause the user daemon to crash
+ during shutdown, because during shutdown it unconditionally
+ attempted to close a channel even though the channel might
+ already be closed.
+
+ - version: 2.3.5
+ date: "2021-07-15"
+ notes:
+ - type: feature
+ title: traffic-manager in multiple namespaces
+ body: >-
+ We now support installing multiple traffic managers in the same cluster.
+ This will allow operators to install deployments of Telepresence that are
+ limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+ docs: install/helm
+ - type: feature
+ title: No more dependence on kubectl
+ body: >-
+ Telepresence no longer depends on having an external
+ kubectl binary, which might not be present for
+ OpenShift users (who have oc instead of
+ kubectl).
+ - type: feature
+ title: Agent image now configurable
+ body: >-
+ We now support configuring which agent image and
+ registry to use in the
+ config. This enables users whose laptop is an air-gapped environment to
+ create personal intercepts without requiring a login. It also makes it easier
+ for those who are developing on Telepresence to specify which agent image should
+ be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+ used.
+ image: ./telepresence-2.3.5-agent-config.png
+ docs: reference/config/#images
+ - type: feature
+ title: Max gRPC receive size now configurable
+ body: >-
+ The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+ image: ./telepresence-2.3.5-grpc-max-receive-size.png
+ docs: reference/config/#grpc
+ - type: feature
+ title: CLI can be used in air-gapped environments
+ body: >-
+ While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+ we've added an option users can add to their config.yml to ensure the CLI acts like it
+ is in an air-gapped environment. Air-gapped environments require a manually installed
+ license.
+ docs: reference/cluster-config/#air-gapped-cluster
+ image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+ date: "2021-07-09"
+ notes:
+ - type: bugfix
+ title: Improved IP log statements
+ body: >-
+ Some log statements were printing incorrect characters where they should have been
+ printing IP addresses. This has been resolved to include more accurate and useful logging.
+ docs: reference/config/#log-levels
+ image: ./telepresence-2.3.4-ip-error.png
+ - type: bugfix
+ title: Improved messaging when multiple services match a workload
+ body: >-
+ If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which
+ service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png
+ docs: reference/intercepts
+ - type: bugfix
+ title: Traffic-manager creates services in its own namespace to determine subnet
+ body: >-
+ Telepresence will now determine the service subnet by creating a dummy-service in its own
+ namespace, instead of the default namespace, which was causing RBAC permissions issues in
+ some clusters.
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Telepresence connect respects pre-existing clusterrole
+ body: >-
+ When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+ cluster, Telepresence will no longer try to update the clusterrole. 
+ docs: reference/rbac
+ - type: bugfix
+ title: Helm Chart fixed for clientRbac.namespaced
+ body: >-
+ The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+ docs: install/helm
+ - version: 2.3.3
+ date: "2021-07-07"
+ notes:
+ - type: feature
+ title: Traffic Manager Helm Chart
+ body: >-
+ Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the
+ server-side components of Telepresence separately from the CLI (which
+ in turn allows for better separation of permissions).
+ image: ./telepresence-2.3.3-helm.png
+ docs: install/helm/
+ - type: feature
+ title: Traffic-manager in custom namespace
+ body: >-
+ As the traffic-manager can now be installed in any
+ namespace via Helm, Telepresence can now be configured to look for the
+ Traffic Manager in a namespace other than ambassador.
+ This can be configured on a per-cluster basis.
+ image: ./telepresence-2.3.3-namespace-config.png
+ docs: reference/config
+ - type: feature
+ title: Intercept --to-pod
+ body: >-
+ telepresence intercept now supports a
+ --to-pod flag that can be used to port-forward sidecars'
+ ports from an intercepted pod.
+ image: ./telepresence-2.3.3-to-pod.png
+ docs: reference/intercepts
+ - type: change
+ title: Change in migration from edgectl
+ body: >-
+ Telepresence no longer automatically shuts down the old
+ api_version=1 edgectl daemon. If migrating
+ from such an old version of edgectl, you must now manually
+ shut down the edgectl daemon before running Telepresence.
+ This was already the case when migrating from the newer
+ api_version=2 edgectl.
+ - type: bugfix
+ title: Fixed error during shutdown
+ body: >-
+ The root daemon no longer terminates when the user daemon disconnects
+ from its gRPC streams, and instead waits to be terminated by the CLI.
+ Previously, this could cause problems with things not being cleaned up correctly.
+ - type: bugfix
+ title: Intercepts will survive deletion of intercepted pod
+ body: >-
+ An intercept will survive deletion of the intercepted pod provided
+ that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+ date: "2021-06-18"
+ notes:
+ # Headliners
+ - type: feature
+ title: Service Port Annotation
+ body: >-
+ The mutator webhook for injecting traffic-agents now
+ recognizes a
+ telepresence.getambassador.io/inject-service-port
+ annotation to specify which port to intercept, bringing the
+ functionality of the --port flag to users who
+ use the mutator webhook in order to control Telepresence via
+ GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png
+ docs: reference/cluster-config#service-port-annotation
+ - type: feature
+ title: Outbound Connections
+ body: >-
+ Outbound connections are now routed through the intercepted
+ Pods, which means that the connections originate from that
+ Pod from the cluster's perspective. This allows service
+ meshes to correctly identify the traffic.
+ docs: reference/routing/#outbound
+ - type: change
+ title: Inbound Connections
+ body: >-
+ Inbound connections from an intercepted agent are now
+ tunneled to the manager over the existing gRPC connection,
+ instead of establishing a new connection to the manager for
+ each inbound connection. This avoids interference from
+ certain service mesh configurations. 
+ docs: reference/routing/#inbound
+
+ # RBAC changes
+ - type: change
+ title: Traffic Manager needs new RBAC permissions
+ body: >-
+ The Traffic Manager requires RBAC
+ permissions to list Nodes and Pods, and to create a dummy
+ Service in the manager's namespace.
+ docs: reference/routing/#subnets
+ - type: change
+ title: Reduced developer RBAC requirements
+ body: >-
+ The on-laptop client no longer requires RBAC permissions to list the Nodes
+ in the cluster or to create Services, as that functionality
+ has been moved to the Traffic Manager.
+
+ # Bugfixes
+ - type: bugfix
+ title: Able to detect subnets
+ body: >-
+ Telepresence will now detect the Pod CIDR ranges even if
+ they are not listed in the Nodes.
+ image: ./telepresence-2.3.2-subnets.png
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Dynamic IP ranges
+ body: >-
+ The list of cluster subnets that the virtual network
+ interface will route is now configured dynamically and will
+ follow changes in the cluster.
+ - type: bugfix
+ title: No duplicate subnets
+ body: >-
+ Subnets fully covered by other subnets are now pruned
+ internally and thus never superfluously added to the
+ laptop's routing table.
+ docs: reference/routing/#subnets
+ - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+ title: Change in default timeout
+ body: >-
+ The trafficManagerAPI timeout default has
+ changed from 5 seconds to 15 seconds, in order to facilitate
+ the extended time it takes for the traffic-manager to do its
+ initial discovery of cluster info as a result of the above
+ bugfixes.
+ - type: bugfix
+ title: Removal of DNS config files on macOS
+ body: >-
+ On macOS, files generated under
+ /etc/resolver/ as the result of using
+ include-suffixes in the cluster config are now
+ properly removed on quit.
+ docs: reference/routing/#macos-resolver
+
+ - type: bugfix
+ title: Large file transfers
+ body: >-
+ Telepresence no longer erroneously terminates connections
+ early when sending a large HTTP response from an intercepted
+ service.
+ - type: bugfix
+ title: Race condition in shutdown
+ body: >-
+ When shutting down the user-daemon or root-daemon on the
+ laptop, telepresence quit and related commands
+ no longer return early before everything is fully shut down.
+ Now it can be counted on that, by the time the command has
+ returned, all of the side effects on the laptop have
+ been cleaned up.
+ - version: 2.3.1
+ date: "2021-06-14"
+ notes:
+ - title: DNS Resolver Configuration
+ body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included."
+ image: ./telepresence-2.3.1-dns.png
+ docs: reference/config
+ type: feature
+ - title: AlsoProxy Configuration
+ body: "Telepresence now supports also proxying user-specified subnets so that users can access external services that are only accessible from the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests to IPs that fall within that subnet are routed to the cluster." 
+ image: ./telepresence-2.3.1-alsoProxy.png
+ docs: reference/config
+ type: feature
+ - title: Mutating Webhook for Injecting Traffic Agents
+ body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+ image: ./telepresence-2.3.1-inject.png
+ docs: reference/rbac
+ type: feature
+ - title: Traffic Manager Connect Timeout
+ body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+ image: ./telepresence-2.3.1-trafficmanagerconnect.png
+ docs: reference/config
+ type: change
+ - title: Fix for large file transfers
+ body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+ image: ./telepresence-2.3.1-large-file-transfer.png
+ docs: reference/tun-device
+ type: bugfix
+ - title: Brew Formula Changed
+ body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+ image: ./telepresence-2.3.1-brew.png
+ docs: install/
+ type: change
+ - version: 2.3.0
+ date: "2021-06-01"
+ notes:
+ - title: Brew install Telepresence
+ body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+ image: ./telepresence-2.3.0-homebrew.png
+ docs: install/
+ type: feature
+ - title: TCP and UDP routing via Virtual Network Interface
+ body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+ image: ./tunnel.jpg
+ docs: reference/tun-device
+ type: feature
+ - title: SSH is no longer used
+ body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+ image: ./no-ssh.png
+ docs: reference/tun-device/#no-ssh-required
+ type: change
+ - title: Running in a Docker container
+ body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously." 
+ image: ./run-tp-in-docker.png
+ docs: reference/inside-container
+ type: feature
+ - title: Configurable Log Levels
+ body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png
+ docs: reference/config/#log-levels
+ type: feature
+ - version: 2.2.2
+ date: "2021-05-17"
+ notes:
+ - title: Legacy Telepresence subcommands
+ body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+ image: ./telepresence-2.2.png
+ docs: install/migrate-from-legacy/
+ type: feature
diff --git a/docs/telepresence/2.11/troubleshooting/index.md b/docs/telepresence/2.11/troubleshooting/index.md
new file mode 100644
index 000000000..364f70b7d
--- /dev/null
+++ b/docs/telepresence/2.11/troubleshooting/index.md
@@ -0,0 +1,227 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept, you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted VM](../howtos/cluster-in-vm) documentation for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app. 
If an organization isn't listed during login,
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button, which sends an email
+to the owner. If an access request has been denied in the past, the
+user will not see the **Request** button; they will have to reach out
+to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer. 
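+
+If you want to double-check that the macFUSE kernel extension actually loaded after the reboot, a quick look at the loaded kernel extensions can confirm it (a sketch; on recent macOS releases `kextstat` is deprecated in favor of `kmutil showloaded`, which prints the same information):
+
+```
+$ kextstat | grep -i fuse   # an entry for macfuse should appear once the extension is loaded
+```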
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the `ModHeader extension` for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` setting under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g. Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export. 
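+
+If you don't already have a collector running, a minimal [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) configuration that accepts these uploads might look like the following (a sketch, assuming the collector's standard OTLP receiver and its `logging` exporter, which simply prints spans to stdout; swap in the exporter for your actual tracing backend):
+
+```yaml
+# collector-config.yaml (illustrative sketch, not an official Telepresence example)
+receivers:
+  otlp:
+    protocols:
+      grpc:
+        endpoint: 0.0.0.0:4317   # OTLP over gRPC, the protocol upload-traces speaks
+exporters:
+  logging:
+    verbosity: detailed          # print received spans to stdout
+service:
+  pipelines:
+    traces:
+      receivers: [otlp]
+      exporters: [logging]
+```
+
+With a collector like this listening locally, `$OTLP_GRPC_ENDPOINT` in the command above would simply be `localhost:4317`.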
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for information to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., containerPort: http becomes containerPort: tm-http).
+2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `clusterIP: None`), then using named ports won't help because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a message in the logs `too many files open`, then check if `fs.inotify.max_user_instances` is set too low. Check the current settings with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. 
For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
diff --git a/docs/telepresence/2.11/versions.yml b/docs/telepresence/2.11/versions.yml
new file mode 100644
index 000000000..222c25c2f
--- /dev/null
+++ b/docs/telepresence/2.11/versions.yml
@@ -0,0 +1,5 @@
+version: "2.11.0"
+dlVersion: "latest"
+docsVersion: "2.11"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.12 b/docs/telepresence/2.12
deleted file mode 120000
index 371acbb9c..000000000
--- a/docs/telepresence/2.12
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.12
\ No newline at end of file
diff --git a/docs/telepresence/2.12/ci/github-actions.md b/docs/telepresence/2.12/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.12/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the service running locally in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence on your CI server, either the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key, set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. 
This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace-only access, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+    - cluster:
+        server: https://127.0.0.1
+        extensions:
+          - name: telepresence.io
+            extension:
+              manager:
+                namespace: traffic-manager-namespace
+      name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`; the version is listed in the `Labels` section of the output (see the example after this list).
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
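+
+For example, to check the Traffic Manager version (a sketch, assuming the Traffic Manager is installed in the default `ambassador` namespace; adjust the namespace to match your installation):
+
+```console
+$ kubectl describe service traffic-manager -n ambassador | grep -A5 'Labels:'
+```
+
+The Telepresence version appears among the labels printed in this section of the output.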
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file if one is provided by you. This configuration file should be placed at `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run GitHub Actions properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests. 
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # Make the cluster credentials available to Telepresence
+         - name: Create kubeconfig file
+           run: |
+             cat <<EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         # Log in to Telepresence with your API key so you can create a personal intercept
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integration tests
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+The preceding is an example of a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Includes a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.12/community.md b/docs/telepresence/2.12/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.12/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. 
diff --git a/docs/telepresence/2.12/concepts/context-prop.md b/docs/telepresence/2.12/concepts/context-prop.md new file mode 100644 index 000000000..b3eb41e32 --- /dev/null +++ b/docs/telepresence/2.12/concepts/context-prop.md @@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation depends on every service and piece of middleware leaving the injected headers intact rather than stripping them away. Propagation is what moves the injected contexts through all of the downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so using them for
+intercepts is a little unorthodox, but the mechanisms involved are
+already widely deployed because of the prevalence of tracing. The
+headers facilitate the smart routing of requests either to live
+services in the cluster or to services running locally on a developer’s
+machine. The intercepted traffic can be further limited by using
+path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
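+
+As a concrete sketch of what propagated context looks like on the wire, the request below carries a W3C `traceparent` header. The header value and the `orders.example.com` host are illustrative placeholders, not values Telepresence requires:
+
+```console
+$ curl -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' http://orders.example.com/api/orders
+```
+
+Propagation simply means that when the `orders` service calls its own downstream dependencies while handling this request, it copies the `traceparent` header onto those outbound requests, so the same context travels the entire request path.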
diff --git a/docs/telepresence/2.12/concepts/devloop.md b/docs/telepresence/2.12/concepts/devloop.md new file mode 100644 index 000000000..86aac87e2 --- /dev/null +++ b/docs/telepresence/2.12/concepts/devloop.md @@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day (360 ÷ 5 ≈ 72). Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+ +The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again. + +Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps: + +* packaging code in containers +* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container) +* pushing the container to the registry +* deploying containers in Kubernetes + +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is incremented to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/2.12/concepts/devworkflow.md b/docs/telepresence/2.12/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/2.12/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. 
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to keep the full life cycle moving at full speed.
diff --git a/docs/telepresence/2.12/concepts/faster.md b/docs/telepresence/2.12/concepts/faster.md new file mode 100644 index 000000000..03dc9bd8b --- /dev/null +++ b/docs/telepresence/2.12/concepts/faster.md @@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and to reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
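+
+As a minimal sketch of that loop from the command line (the service name and port below are placeholders for your own service):
+
+```console
+$ telepresence connect                                # proxy your laptop into the cluster
+$ telepresence intercept example-service --port 8080  # route the service's traffic to localhost:8080
+```
+
+Once the intercept is active, requests to `example-service` in the cluster are served by the process listening on port 8080 on your laptop, so an edit and reload there is immediately visible against the cluster's real dependencies.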
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.12/concepts/intercepts.md b/docs/telepresence/2.12/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.12/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+    <div class="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar class="TabBar" elevation={0} position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab class="TabHead" value="regular" label="No intercept"/>
+            <Tab class="TabHead" value="global" label="Global intercept"/>
+            <Tab class="TabHead" value="personal" label="Personal intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+ ); +}; + +# Types of intercepts + + + + +# No intercept + + + + +This is the normal operation of your cluster without Telepresence. + + + + + +# Global intercept + + + + +**Global intercepts** replace the Kubernetes "Orders" service with the +Orders service running on your laptop. The users see no change, but +with all the traffic coming to your laptop, you can observe and debug +with all your dev tools. + + + +### Creating and using global intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=all + ``` + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service: + + All requests will be sent to the version of your service that is + running in the local development environment. + + + + +# Personal intercept + +**Personal intercepts** allow you to be selective and intercept only +some of the traffic to a service while not interfering with the rest +of the traffic. This allows you to share a cluster with others on your +team without interfering with their work. + +Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas. +To read more about these quotas limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions). + + + + +In the illustration above, **Orange** +requests are being made by Developer 2 on their laptop and the +**green** are made by a teammate, +Developer 1, on a different laptop. + +Each developer can intercept the Orders service for their requests only, +while sharing the rest of the development environment. + + + +### Creating and using personal intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b + ``` + + We're using + `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the + header for the sake of the example, but you can use any + `key=value` pair you want, or `--http-header=auto` to have it + choose something automatically. + + + + Make sure your current kubect context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service by passing the + HTTP header: + + ```http + Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b + ``` + + + + Need a browser extension to modify or remove an HTTP-request-headers? + + Chrome + {' '} + Firefox + + + + 3. Using the intercept: Send requests to your service without the + HTTP header: + + Requests without the header will be sent to the version of your + service that is running in the cluster. This enables you to share + the cluster with a team! + +### Intercepting a specific endpoint + +It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an +intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx` +flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept +and, contrary to the `--http-header` flag, it cannot be repeated. 
+
+The following flags are available:
+
+| Flag                          | Meaning                                                           |
+|-------------------------------|-------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                   |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix              |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression  |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implied when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
diff --git a/docs/telepresence/2.12/concepts/modes.md b/docs/telepresence/2.12/concepts/modes.md new file mode 100644 index 000000000..3402f07e4 --- /dev/null +++ b/docs/telepresence/2.12/concepts/modes.md @@ -0,0 +1,36 @@
+---
+title: "Modes"
+---
+
+# Modes
+
+A Telepresence installation happens in two locations, initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`.
+The main component that gets installed into the cluster is known as the Traffic Manager.
+The Traffic Manager can be put either into single user mode (the default) or into team mode.
+Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way.
+
+## Single user mode
+
+In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default.
+Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service.
+While single user mode is the default, switching back from team mode is done by running:
+```
+telepresence helm install --single-user-mode
+```
+
+## Team mode
+
+In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in.
+Personal intercepts selectively affect HTTP traffic coming into the intercepted workload.
+Being in team mode adds an additional layer of confidence for developers working on the same service, knowing their teammates won't interrupt their intercepts by mistake.
+Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are being taken advantage of by everybody on your team.
+To switch from single user mode to team mode, run:
+```
+telepresence helm install --team-mode
+```
+
+## Default intercept types based on modes
+The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and users who are not logged in are required to log in before intercepting.
+When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status.
+![mode defaults](../images/mode-defaults.png)
\ No newline at end of file
diff --git a/docs/telepresence/2.12/doc-links.yml b/docs/telepresence/2.12/doc-links.yml new file mode 100644 index 000000000..59922d714 --- /dev/null +++ b/docs/telepresence/2.12/doc-links.yml @@ -0,0 +1,110 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: "Making the remote local: Faster feedback, collaboration and debugging"
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+    - title: Modes
+      link: concepts/modes
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Share dev environments with preview URLs
+      link: howtos/preview-urls
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Host a cluster in a local VM
+      link: howtos/cluster-in-vm
+    - title: Send requests to an intercepted service
+      link: howtos/request
+    - title: Package and share my intercepts
+      link: howtos/package
+- title: Telepresence for Docker
+  items:
+    - title: What is Telepresence for Docker
+      link: extension/intro
+    - title: Install into Docker Desktop
+      link: extension/install
+    - title: Intercept into a Docker Container
+      link: extension/intercept
+- title: Telepresence for CI
+  items:
+    - title: Github Actions
+      link: ci/github-actions
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+      items:
+        - title: login
+          link: reference/client/login
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Configure intercept using CLI
+          link: reference/intercepts/cli
+        - title: Configure intercept using specifications
+          link: reference/intercepts/specs
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
diff --git a/docs/telepresence/2.12/extension/install.md b/docs/telepresence/2.12/extension/install.md new file mode 100644 index 000000000..1f4c70c09 --- /dev/null +++ b/docs/telepresence/2.12/extension/install.md @@ -0,0 +1,21 @@
+---
+title: "Telepresence for Docker Extension"
+description: "Learn how to install and use Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install the Telepresence Docker extension
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+
+3. In the Extensions Marketplace, search for `Ambassador Telepresence`.
+
+4. Click **Install**.
diff --git a/docs/telepresence/2.12/extension/intercept.md b/docs/telepresence/2.12/extension/intercept.md new file mode 100644 index 000000000..8f31c4359 --- /dev/null +++ b/docs/telepresence/2.12/extension/intercept.md @@ -0,0 +1,77 @@
+---
+title: "Create an intercept with the Telepresence Docker extension"
+description: "With Telepresence Docker extension, you leverage the full potential of telepresence CLI in Docker Desktop."
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create intercepts from one of your Kubernetes clusters, directly in the extension, or you can upload an [intercept specification](../../reference/intercepts/specs#specification) to run more complex intercepts. These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
+
+## Connect to Ambassador Cloud through the Telepresence Docker extension
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. You'll be redirected to Ambassador Cloud for login, where you can authenticate with a **Docker**, Google, GitHub or GitLab account.
+
+## Create an Intercept from a Kubernetes service
+
+ 1. Select the Kubernetes context you would like to connect to.
+
+
+ 2. Once Telepresence is connected to your cluster, you will see a list of services you can connect to. If you don't see the service you want to intercept, you may need to change namespaces in the dropdown menu.
+
+
+ 3. Click the **Intercept** button on the service you want to intercept. You will see a popup to help you configure your intercept and its intercept handlers.
+
+
+ 4. Telepresence will start the intercept on your service and start your local container on the designated port. You will then be redirected to a management page where you can view your active intercepts.
+
+
+
+## Create an Intercept from an Intercept Specification
+
+ 1. Click the dropdown on the **Connect** button to activate the option to upload an intercept specification.
+
+ + 2. Click **Upload Telepresence Spec** to run your intercept specification. +
+
+ 3. Once your specification has been uploaded, the extension will process it and, once the intercept has started, redirect you to the running intercepts page.
+
+ 4. The intercept information now shows up in the Docker Telepresence extension. You can now [test your code](#test-your-code).
+
+
+ For more information on Intercept Specifications, see [the docs here](../../reference/intercepts/specs).
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and restart your intercept.
+
+Click `view` next to your preview URL to open a browser tab and see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.12/extension/intro.md b/docs/telepresence/2.12/extension/intro.md new file mode 100644 index 000000000..308dcdcdf --- /dev/null +++ b/docs/telepresence/2.12/extension/intro.md @@ -0,0 +1,27 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and help you test your code faster. Traditionally, you need to build a container within Docker with your code changes, push it, wait for the upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This makes for a slow and cumbersome workflow when you need to test changes continually.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate it entirely within the Docker runtime, you can make changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.12/faqs.md b/docs/telepresence/2.12/faqs.md new file mode 100644 index 000000000..3c37f1cc5 --- /dev/null +++ b/docs/telepresence/2.12/faqs.md @@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging.
The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality, you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means that if you `telepresence intercept <service name> -n <namespace>`, you will be able to resolve just the `<service name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
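+
+For example, assuming the quick start's sample app is installed (the `web-app` service in the `emojivoto` namespace; substitute your own names), the qualified and unqualified forms look like this:
+
+```console
+$ curl web-app.emojivoto:80                    # namespace-qualified name works after telepresence connect
+$ telepresence intercept web-app -n emojivoto  # create an intercept in that namespace
+$ curl web-app:80                              # the unqualified name resolves once the intercept exists
+```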
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+During first use, Telepresence will discover your ingress configuration, make its best guess at the correct values, and ask you to confirm or update them.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+ In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+ Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN network) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application, on both the laptop and the cluster side, are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.12/howtos/cluster-in-vm.md b/docs/telepresence/2.12/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.12/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource-intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + VirtualBox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents. Those IPs must be on the default router's subnet
+server_ip = '192.168.1.110'
+agents = {
+  'agent1' => '192.168.1.111',
+  'agent2' => '192.168.1.112',
+}
+
+# Extra parameters in INSTALL_K3S_EXEC variable because of
+# K3s picking up the wrong interface when starting server and agent
+# https://github.com/alexellis/k3sup/issues/306
+server_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  echo "Sleeping for 5 seconds to wait for k3s to start"
+  sleep 5
+  cp /var/lib/rancher/k3s/server/token /vagrant_shared
+  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
+  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
+  SHELL
+
+agent_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export K3S_TOKEN_FILE=/vagrant_shared/token
+  export K3S_URL=https://#{server_ip}:6443
+  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  SHELL
+
+def config_vm(name, ip, script, vm)
+  # The network_script has two objectives:
+  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
+  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
+  #    the NAT network provides.
+  network_script = <<-SHELL
+    sudo -i
+    ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route}
+    cp /etc/resolv.conf /etc/resolv.conf.orig
+    sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
+    SHELL
+
+  vm.hostname = name
+  vm.network 'public_network', bridge: $bridge, ip: ip
+  vm.synced_folder './shared', '/vagrant_shared'
+  vm.provider 'virtualbox' do |vb|
+    vb.memory = '4096'
+    vb.cpus = '2'
+  end
+  vm.provision 'shell', inline: script
+  vm.provision 'shell', inline: network_script, run: 'always'
+end

Vagrant.configure('2') do |config|
+  config.vm.box = 'generic/alpine314'
+
+  config.vm.define 'server', primary: true do |server|
+    config_vm(server_name, server_ip, server_script, server.vm)
+  end
+
+  agents.each do |agent_name, agent_ip|
+    config.vm.define agent_name do |agent|
+      config_vm(agent_name, agent_ip, agent_script, agent.vm)
+    end
+  end
+end
+```
+
+The Kubernetes manifest to add the registry:
+
+#### registry.yaml
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: kube-registry-v0
+  namespace: kube-system
+  labels:
+    k8s-app: kube-registry
+    version: v0
+spec:
+  replicas: 1
+  selector:
+    app: kube-registry
+    version: v0
+  template:
+    metadata:
+      labels:
+        app: kube-registry
+        version: v0
+    spec:
+      containers:
+        - name: registry
+          image: registry:2
+          resources:
+            limits:
+              cpu: 100m
+              memory: 200Mi
+          env:
+            - name: REGISTRY_HTTP_ADDR
+              value: :5000
+            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
+              value: /var/lib/registry
+          volumeMounts:
+            - name: image-store
+              mountPath: /var/lib/registry
+          ports:
+            - containerPort: 5000
+              name: registry
+              protocol: TCP
+      volumes:
+        - name: image-store
+          hostPath:
+            path: /var/lib/registry-storage
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kube-registry
+  namespace: kube-system
+  labels:
+    app: kube-registry
+    kubernetes.io/name: "KubeRegistry"
+spec:
+  selector:
+    app: kube-registry
+  ports:
+    - name: registry
+      port: 5000
+      targetPort: 5000
+      protocol: TCP
+  type: LoadBalancer
+```
+
diff --git a/docs/telepresence/2.12/howtos/intercepts.md b/docs/telepresence/2.12/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.12/howtos/intercepts.md @@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands.
OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and try to connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+   ```
+
+   The 401 response is expected when you first connect.
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+   ports:
+   - name: http
+     port: 80
+     protocol: TCP
+     targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service-name> --port [<local-port>][:<remote-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted the service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
diff --git a/docs/telepresence/2.12/howtos/outbound.md b/docs/telepresence/2.12/howtos/outbound.md new file mode 100644 index 000000000..48877df8c --- /dev/null +++ b/docs/telepresence/2.12/howtos/outbound.md @@ -0,0 +1,89 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service with any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.3.7 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.3.7 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.3.7 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://<cluster public IP>
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+       <meta charset="UTF-8">
+       <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
```
$ telepresence quit
Telepresence Daemon quitting...done
```

When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (`<service name>.<namespace>`) before you start an intercept. After you start an intercept, only `<service name>` is required. Read more about these differences in the DNS resolution reference guide.

## Controlling outbound connectivity

By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma separated list of namespaces>` flag to control which namespaces are accessible.

When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
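For example, to make only the `dev` and `staging` namespaces accessible when connecting (the namespace names here are placeholders):

```
$ telepresence connect --mapped-namespaces dev,staging
```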
### Using local-only intercepts

When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed. This is because services that aren't exposed through ingress controllers require connectivity to those services. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin: the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.

To control outbound connectivity to specific namespaces, add the `--local-only` flag:

```
$ telepresence intercept <name> --namespace <namespace> --local-only
```

The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.

### Proxy outbound connectivity for laptops

To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details.

diff --git a/docs/telepresence/2.12/howtos/package.md b/docs/telepresence/2.12/howtos/package.md new file mode 100644 index 000000000..520662eef --- /dev/null +++ b/docs/telepresence/2.12/howtos/package.md @@ -0,0 +1,178 @@
---
title: "How to package and share my intercept setup with my teammates"
description: "Use telepresence intercept specs to get your teammates up and running faster"
---

# Introduction

While telepresence takes care of the interception part of your setup, you usually still need to script some boilerplate code to run the local part (the handler) of your code.

Classic solutions rely on a Makefile or bash scripts, but these become cumbersome to maintain.

Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): they allow you to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic, and the actual intercept. Telepresence can then run the specification.

# Getting started

You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification.

Once you have a Kubernetes cluster, you can apply this configuration to start an `echo-easy` deployment that we can then use for our Intercept Specification:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "echo-easy"
spec:
  type: ClusterIP
  selector:
    service: echo-easy
  ports:
    - name: proxied
      port: 80
      targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "echo-easy"
  labels:
    service: echo-easy
spec:
  replicas: 1
  selector:
    matchLabels:
      service: echo-easy
  template:
    metadata:
      labels:
        service: echo-easy
    spec:
      containers:
        - name: echo-easy
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
              name: http
          resources:
            limits:
              cpu: 50m
              memory: 128Mi
```

You can create the local YAML file by using:

```console
$ cat > echo-server.yaml <<EOF
# (paste the echo-easy Service and Deployment manifest from above)
EOF
$ kubectl apply -f echo-server.yaml
```

You can then write the Intercept Specification itself to a local file in the same way; a hypothetical sketch of its contents follows below:

```console
$ cat > my-intercept.yaml <<EOF
# (your Intercept Specification; see the intercept specs reference linked above)
EOF
```
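The exact schema is documented in the [intercept specs reference](../../reference/intercepts/specs); the following is only a rough, hypothetical sketch of the shape such a specification can take for the `echo-easy` deployment — the field names here are assumptions, not authoritative:

```yaml
# Hypothetical sketch only - consult the intercept specs reference for the real schema.
workloads:
  - name: echo-easy              # the workload to intercept (assumed field)
    intercepts:
      - localPort: 8080          # where the local handler listens (assumed field)
        handler: echo-easy-local # which handler receives the traffic (assumed field)
handlers:
  - name: echo-easy-local
    docker:
      image: jmalloc/echo-server # local process run as the handler (assumed field)
```

Telepresence can then run the whole setup from the specification file.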
diff --git a/docs/telepresence/2.12/howtos/preview-urls.md b/docs/telepresence/2.12/howtos/preview-urls.md new file mode 100644 --- /dev/null +++ b/docs/telepresence/2.12/howtos/preview-urls.md

3. Start the intercept:
   `telepresence intercept <service-name> --port <port> --env-file <path-to-env-file> --mechanism http` and adjust the flags as follows:
   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
   * You can remove the `--mechanism http` flag if you have your traffic-manager set to *team-mode*.

4. Answer the question prompts.
   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:

   ```console
   $ telepresence intercept example-service --mechanism http --ingress-host ambassador.ambassador --ingress-port 443 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env

     Using deployment example-service
     intercepted
         Intercept name         : example-service
         State                  : ACTIVE
         Destination            : 127.0.0.1:8080
         Service Port Identifier: http
         Intercepting           : HTTP requests that match all of:
           header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
         Preview URL            : https://<random subdomain>.preview.edgestack.me
         Layer 5 Hostname       : dev-environment.edgestack.me
   ```

5. Start your local environment using the environment variables retrieved in the previous step.

   Here are some examples of how to pass the environment variables to your local process:
   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).

6. Go to the Preview URL generated from the intercept.
   Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.

   **Didn't work?** It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the `x-telepresence-intercept-id` HTTP header. Read more on context propagation.

7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.

8. Share with a teammate.

   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.

## Sharing a preview URL with people outside your team

To collaborate with someone outside of your identity provider's organization:
log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL.

## Change access restrictions

To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Select the service you want to share and open the service details page.
3. Click the **Intercepts** tab and expand the preview URL details.
4. Click **Make Publicly Accessible**.

Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.

## Remove a preview URL from an Intercept

To delete a preview URL and remove all access to the intercepted service:

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Click on the service you want to share and open the service details page.
3. Click the **Intercepts** tab and expand the preview URL details.
4. Click **Remove Preview**.

Alternatively, you can remove a preview URL with the following command:
`telepresence preview remove <intercept-name>`

diff --git a/docs/telepresence/2.12/howtos/request.md b/docs/telepresence/2.12/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.12/howtos/request.md @@ -0,0 +1,12 @@
import Alert from '@material-ui/lab/Alert';

# Send requests to an intercepted service

Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Navigate to the desired service's Intercepts page.
3. Click the **Query** button to open the pop-up menu.
4. Toggle between **CURL**, **Headers** and **Browse**.

The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
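For example, the **CURL** tab typically produces a command of roughly this shape for a personal intercept; the service URL, port, and intercept ID below are placeholders, and the header matches the `x-telepresence-intercept-id` header shown in the intercept output earlier:

```console
$ curl -H 'x-telepresence-intercept-id: <intercept id>:example-service' http://example-service.<namespace>:8080/
```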
diff --git a/docs/telepresence/2.12/images/container-inner-dev-loop.png b/docs/telepresence/2.12/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.12/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.12/images/docker-extension-intercept.png b/docs/telepresence/2.12/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence/2.12/images/docker-extension-intercept.png differ diff --git a/docs/telepresence/2.12/images/docker-header-containers.png b/docs/telepresence/2.12/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.12/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_button_drop_down.png b/docs/telepresence/2.12/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..775323e56 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_connect_to_cluster.png b/docs/telepresence/2.12/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..eb95e5180 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_login.png b/docs/telepresence/2.12/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_login.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_running_intercepts_page.png b/docs/telepresence/2.12/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..7870e2691 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_start_intercept_page.png b/docs/telepresence/2.12/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..6788994e3 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_start_intercept_popup.png b/docs/telepresence/2.12/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..12839b0e5 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence/2.12/images/docker_extension_upload_spec_button.png b/docs/telepresence/2.12/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files /dev/null and b/docs/telepresence/2.12/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence/2.12/images/github-login.png b/docs/telepresence/2.12/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.12/images/github-login.png differ diff --git a/docs/telepresence/2.12/images/logo.png b/docs/telepresence/2.12/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.12/images/logo.png differ diff --git a/docs/telepresence/2.12/images/mode-defaults.png b/docs/telepresence/2.12/images/mode-defaults.png new 
file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/2.12/images/mode-defaults.png differ diff --git a/docs/telepresence/2.12/images/split-tunnel.png b/docs/telepresence/2.12/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.12/images/split-tunnel.png differ diff --git a/docs/telepresence/2.12/images/tracing.png b/docs/telepresence/2.12/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.12/images/tracing.png differ diff --git a/docs/telepresence/2.12/images/trad-inner-dev-loop.png b/docs/telepresence/2.12/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.12/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.12/images/tunnelblick.png b/docs/telepresence/2.12/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.12/images/tunnelblick.png differ diff --git a/docs/telepresence/2.12/images/vpn-dns.png b/docs/telepresence/2.12/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.12/images/vpn-dns.png differ diff --git a/docs/telepresence/2.12/install/cloud.md b/docs/telepresence/2.12/install/cloud.md new file mode 100644 index 000000000..4f09a94ae --- /dev/null +++ b/docs/telepresence/2.12/install/cloud.md @@ -0,0 +1,63 @@
# Provider Prerequisites for Traffic Manager

## GKE

### Firewall Rules for private clusters

A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's webhook injector from being invoked by the Kubernetes API server.
For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods.
For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:

```bash
$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
  masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28

$ gcloud compute firewall-rules list \
    --filter 'name~^gke-tele-webhook-gke' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

NAME                                  NETWORK           DIRECTION  SRC_RANGES     ALLOW                         TARGET_TAGS
gke-tele-webhook-gke-33fa1791-all     tele-webhook-net  INGRESS    10.40.0.0/14   esp,ah,sctp,tcp,udp,icmp      gke-tele-webhook-gke-33fa1791-node
gke-tele-webhook-gke-33fa1791-master  tele-webhook-net  INGRESS    172.16.0.0/28  tcp:10250,tcp:443             gke-tele-webhook-gke-33fa1791-node
gke-tele-webhook-gke-33fa1791-vms     tele-webhook-net  INGRESS    10.128.0.0/9   icmp,tcp:1-65535,udp:1-65535  gke-tele-webhook-gke-33fa1791-node
# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node

$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.0/28 \
    --rules tcp:8443 \
    --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
Creating firewall...Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
Creating firewall...done.
NAME                          NETWORK           DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
gke-tele-webhook-gke-webhook  tele-webhook-net  INGRESS    1000      tcp:8443        False
```

### GKE Authentication Plugin

Starting with Kubernetes version 1.26, GKE requires the use of the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
You will need to install this plugin to use Telepresence with Docker while using GKE.

If you are using the [Telepresence Docker Extension](../extension/intro), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path. If you did not install with Homebrew, you may see `command: gke-gcloud-auth-plugin` in the file; this must be replaced with the absolute path to the binary.
You can check this by opening your kubeconfig file and looking at the `users` entry for your GKE cluster: it contains a `command` field which, for a Homebrew install, would look like
`command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud`.

## EKS

### EKS Authentication Plugin

If you are using a version of the AWS CLI earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html).
You will need this plugin to use Telepresence with Docker while using EKS.

If you are using the [Telepresence Docker Extension](../extension/intro), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path instead of a relative path.
You can check this by opening your kubeconfig file and looking at the `users` entry for your EKS cluster: it contains a `command` field which, for a Homebrew install, would look like
`command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator`.
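For reference, a kubeconfig `users` entry whose `command` is already an absolute path has roughly the following shape (the user name and binary path here are illustrative; yours will differ by cluster and installation method):

```yaml
users:
- name: my-gke-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gke-gcloud-auth-plugin
      provideClusterInfo: true
```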
diff --git a/docs/telepresence/2.12/install/helm.md b/docs/telepresence/2.12/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.12/install/helm.md @@ -0,0 +1,181 @@
# Install the Traffic Manager with Helm

[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart in a few simple steps.

For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).

## Before you begin

Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute [oc commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead.

The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.

Start by adding this repo to your Helm client with the following command:

```shell
helm repo add datawire https://app.getambassador.io
helm repo update
```

## Install with Helm

When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.

1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:

   ```shell
   kubectl create namespace ambassador
   ```

2. Install the Telepresence Traffic Manager with the following command:

   ```shell
   helm install traffic-manager --namespace ambassador datawire/telepresence
   ```

### Install into custom namespace

The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
For example, if you wanted to deploy the Traffic Manager to the `staging` namespace:

```bash
helm install traffic-manager --namespace staging datawire/telepresence
```

Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

See [the kubeconfig documentation](../../reference/config#manager) for more information.

### Upgrading the Traffic Manager

Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.

Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:

```shell
helm repo up
helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
```

If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`

## RBAC

### Installing a namespace-scoped traffic manager

You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
In these cases, the Traffic Manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.

For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
To do this, create a `values.yaml` like the following:

```yaml
managerRbac:
  create: true
  namespaced: true
  namespaces:
    - dev
    - staging
```

This can then be installed via:

```bash
helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
```

**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.

#### Namespace collision detection

The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.

So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:

```bash
helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
```

You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:

```bash
helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
```

This would fail with an error:

```
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
```

To fix this error, fix the overlap either by removing `staging` from the first install, or from the second.

#### Namespace-scoped user permissions

Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.

Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):

```yaml
clientRbac:
  create: true

  # These are the users or groups to which the user rbac will be bound.
  # This MUST be set.
  subjects: {}
  # - kind: User
  #   name: jane
  #   apiGroup: rbac.authorization.k8s.io

  namespaced: true

  namespaces:
    - dev
    - staging
```

#### Namespace-scoped webhook

If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    app.kubernetes.io/name: staging
```

You can also use `kubectl label` to add the label to an existing namespace, e.g.:

```shell
kubectl label namespace staging app.kubernetes.io/name=staging
```

This is required because the mutating webhook will use the name label to find namespaces to operate on.

**NOTE** This labelling happens automatically in Kubernetes >= 1.21.

### Installing RBAC only

Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to only create the RBAC-related objects.
Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
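For example, to render only the RBAC-related objects for both the manager and clients, you might run something like the following (the release name and namespace here are placeholders):

```bash
helm install traffic-manager-rbac datawire/telepresence \
  --namespace ambassador \
  --set rbac.only=true \
  --set clientRbac.create=true \
  --set managerRbac.create=true
```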
diff --git a/docs/telepresence/2.12/install/index.md b/docs/telepresence/2.12/install/index.md new file mode 100644 index 000000000..a8931f1af --- /dev/null +++ b/docs/telepresence/2.12/install/index.md @@ -0,0 +1,155 @@
import Platform from '@src/components/Platform';

# Install

Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.

```shell
# Intel Macs

# Install via brew:
brew install datawire/blackbird/telepresence

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

# Apple silicon Macs

# Install via brew:
brew install datawire/blackbird/telepresence-arm64

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```powershell
# To install Telepresence, run the following commands
# from PowerShell as Administrator.

# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip

# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
Remove-Item 'telepresence.zip'
cd telepresenceInstaller/telepresence

# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"

# 4. Remove the unzipped directory:
cd ../..
Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force

# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
```

## What's Next?

Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.

## Installing nightly versions of Telepresence

We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.

The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.

`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new version of Telepresence is released.

`$gitShortHash` will be the short hash of the git commit of the build.

Use these URLs to download the most recent nightly build.
```shell
# Intel Macs
https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence

# Apple silicon Macs
https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
```

```
https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
```

```
https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
```

## Installing older versions of Telepresence

Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.

```shell
# Intel Macs
https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence

# Apple silicon Macs
https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
```

```
https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
```

```
https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
```

diff --git a/docs/telepresence/2.12/install/manager.md b/docs/telepresence/2.12/install/manager.md new file mode 100644 index 000000000..4efdc3c69 --- /dev/null +++ b/docs/telepresence/2.12/install/manager.md @@ -0,0 +1,85 @@
# Install/Uninstall the Traffic Manager

Telepresence uses a Traffic Manager to send/receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.

## Prerequisites

Before you begin, you need to have [Telepresence installed](../../install/).
In addition, you may need certain prerequisites depending on your cloud provider and platform.
See the [cloud provider installation notes](../../install/cloud) for more.

## Install the Traffic Manager

The Telepresence CLI can install the Traffic Manager for you. A basic install deploys the same version as the client being used.

1. Install the Telepresence Traffic Manager with the following command:

   ```shell
   telepresence helm install
   ```

### Customizing the Traffic Manager

For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).

1. Create a `values.yaml` file with your config values.

2. Run the install command with the `--values` flag set to the path of your values file:

   ```shell
   telepresence helm install --values values.yaml
   ```

## Upgrading/Downgrading the Traffic Manager

1. Download the CLI of the version of Telepresence you wish to use.

2. Run the install command with the upgrade flag:

   ```shell
   telepresence helm install --upgrade
   ```

## Uninstall

The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).

1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:

   ```shell
   telepresence helm uninstall
   ```

## Ambassador Agent

The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.

If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager` you can add the flag `--set ambassador-agent.enabled=false` to not include the ambassador-agent. Emissary and Edge Stack both already include this agent within their deployments.

If your namespace runs with tight security parameters, you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements; a sketch follows below.
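A minimal `values.yaml` sketch scoping these settings under the `ambassador-agent` prefix might look like this (the specific values shown are illustrative placeholders, not recommendations):

```yaml
ambassador-agent:
  securityContext:
    runAsNonRoot: true   # illustrative value
  tolerations:
    - key: dedicated     # illustrative toleration
      operator: Equal
      value: dev
      effect: NoSchedule
  resources:
    limits:
      cpu: 100m          # illustrative limits
      memory: 128Mi
```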
### Adding an API Key to your Ambassador Agent

While installing the traffic-manager, you can pass your cloud-token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=<your api key>`.
The [API Key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override the API key given via Helm.

### Creating a secret manually

The Ambassador Agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself. This API key will always be used.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: <name>-agent-cloud-token
  namespace: <namespace>
  labels:
    app.kubernetes.io/name: agent-cloud-token
data:
  CLOUD_CONNECT_TOKEN: <your api key>
EOF
```

diff --git a/docs/telepresence/2.12/install/migrate-from-legacy.md b/docs/telepresence/2.12/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.12/install/migrate-from-legacy.md @@ -0,0 +1,110 @@
# Migrate from legacy Telepresence

[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.

In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".

In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.

Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With the new Telepresence, a sidecar proxy ("traffic agent") is injected onto the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used. By using the proxy approach, we can also do personal intercepts, where rather than re-routing all traffic to the laptop/workstation, it only re-routes the traffic designated as belonging to that user, so that multiple developers can intercept the same service at the same time without disrupting normal operation or disrupting each other.

Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.

## Using legacy Telepresence commands

First, please ensure you've [installed Telepresence](../).

Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the Telepresence binary.
For example, say you have a deployment (`myserver`) that you want to swap (the legacy equivalent of an intercept in Telepresence) with a local Python server. You could run the following command:

```
$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
< help text >

Legacy telepresence command used
Command roughly translates to the following in Telepresence:
telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
running...
Connecting to traffic manager...
Connected to context <context>
Using Deployment myserver
intercepted
    Intercept name : myserver
    State          : ACTIVE
    Workload kind  : Deployment
    Destination    : 127.0.0.1:9090
    Intercepting   : all TCP connections
Serving HTTP on :: port 9090 (http://[::]:9090/) ...
```

Telepresence will let you know what the legacy Telepresence command has mapped to and automatically runs it. So you can get started with Telepresence today using the commands you are used to, and it will help you learn the Telepresence syntax.

### Legacy command mapping

Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and are supported).

| Legacy Telepresence Command                    | Telepresence Command                      |
|------------------------------------------------|-------------------------------------------|
| --swap-deployment $workload                    | intercept $workload                       |
| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
| --run-shell                                    | connect -- bash                           |
| --run $cmd                                     | connect -- $cmd                           |
| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
| --context,--namespace                          | --context, --namespace (haven't changed)  |
| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |

### Legacy Telepresence command limitations

Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or aren't yet supported in Telepresence. For some known popular commands, such as `--method`, Telepresence will include output letting you know that the flag has gone away. For flags that Telepresence can't translate yet, it will let you know that the flag is "unsupported".

If Telepresence is missing any flags or functionality that is integral to your usage, please let us know by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!

## Telepresence changes

Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including with `--swap-deployment`) and leaves them in place afterwards. If you use `--swap-deployment`, the intercept will end once the process dies, but the agent will remain.
There's no harm in leaving the agent running alongside your service, but when you want to remove them from the cluster, the following Telepresence command will help:

```
$ telepresence uninstall --help
Uninstall telepresence agents

Usage:
  telepresence uninstall [flags] { --agent <agents...> | --all-agents }

Flags:
  -d, --agent              uninstall intercept agent on specific deployments
  -a, --all-agents         uninstall intercept agent on all deployments
  -h, --help               help for uninstall
  -n, --namespace string   If present, the namespace scope for this CLI request
```

Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.

The Traffic Manager can be uninstalled using `telepresence helm uninstall`.

diff --git a/docs/telepresence/2.12/install/upgrade.md b/docs/telepresence/2.12/install/upgrade.md new file mode 100644 index 000000000..8272b4844 --- /dev/null +++ b/docs/telepresence/2.12/install/upgrade.md @@ -0,0 +1,81 @@
---
description: "How to upgrade your installation of Telepresence and install previous versions."
---

# Upgrade Process

The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.

Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur` if your current version is less than 2.8.0).

```shell
# Intel Macs

# Upgrade via brew:
brew upgrade datawire/blackbird/telepresence

# OR upgrade manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

# Apple silicon Macs

# Upgrade via brew:
brew upgrade datawire/blackbird/telepresence-arm64

# OR upgrade manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```powershell
# To upgrade Telepresence, run the following commands
# from PowerShell as Administrator.

# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip

# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
Remove-Item 'telepresence.zip'
cd telepresenceInstaller/telepresence

# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
# 4. Remove the unzipped directory:
cd ../..
Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force

# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
```

The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade the Traffic Manager in your cluster.

diff --git a/docs/telepresence/2.12/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.12/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.12/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@
import queryString from 'query-string';
import React, { useEffect, useState } from 'react';

import Embed from '../../../../src/components/Embed';
import Icon from '../../../../src/components/Icon';
import Link from '../../../../src/components/Link';

import './telepresence-quickstart-landing.less';

/** @type React.FC<React.SVGProps<SVGSVGElement>> */
const RightArrow = (props) => (
  /* (SVG right-arrow icon markup lost in extraction) */
);

const TelepresenceQuickStartLanding = () => {
  const [getStartedUrl, setGetStartedUrl] = useState(
    'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start',
  );

  const getUrlFromQueryParams = () => {
    const { docs_source, docs_campaign } = queryString.parse(
      window.location.search,
    );

    if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') {
      setGetStartedUrl(
        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops',
      );
    } else if (
      docs_source === 'cloud-quickstart-ad' &&
      docs_campaign === 'environments'
    ) {
      setGetStartedUrl(
        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments',
      );
    }
  };

  useEffect(() => {
    getUrlFromQueryParams();
  }, []);

  return (
    /*
     * JSX return body lost in extraction. Recoverable content: a landing
     * layout with the heading "Telepresence" and the tagline "Set up your
     * ideal development environment for Kubernetes in seconds. Accelerate
     * your inner development loop with hot reload using your existing IDE,
     * and workflow."; a "Set Up Telepresence with Ambassador Cloud" card
     * ("Seamlessly integrate Telepresence into your existing Kubernetes
     * environment by following our 3-step setup guide.") with a "Get Started"
     * button linking to getStartedUrl, plus a "Do it Yourself" link to
     * install Telepresence and manually connect to your Kubernetes workloads;
     * and a "What Can Telepresence Do for You?" card listing what Telepresence
     * gives Kubernetes application developers - instant feedback loops, remote
     * development environments, access to your favorite local tools, and easy
     * collaborative development with teammates - with a "LEARN MORE" link
     * (RightArrow icon) and an embedded overview video (Embed component).
     */
  );
};

export default TelepresenceQuickStartLanding;

diff --git a/docs/telepresence/2.12/quick-start/demo-node.md b/docs/telepresence/2.12/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.12/quick-start/demo-node.md @@ -0,0 +1,155 @@
---
description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging."
---

import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata';
import {
  EmojivotoServicesList,
  DCPLink,
  Login,
  LoginCommand,
  DockerCommand,
  PreviewUrl,
  ExternalIp
} from '../../../../../src/components/Docs/Telepresence';
import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from './qs-cards';
import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence';

# Telepresence Quick Start
**Contents**
+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally.

If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker.

While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer.

## 1. Get a free remote cluster

[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster.

1. Log in to [Ambassador Cloud](https://app.getambassador.io/cloud/) to claim your remote demo cluster.
2. Go to the Service Catalog to see all the services deployed on your cluster.

   The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster.
## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services. Test out the application:

1. Go to the Emojivoto app and vote for some emojis.

   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.

2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩!

   Congratulations! You've successfully accessed the Emojivoto application on your remote cluster.

## 3. Set up your local development environment

We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container.

1. Run the Docker container locally by running this command inside your local terminal:

   Make sure that ports 8080 and 8083 are free.
   If the Docker engine is not running, the command will fail and you will see `docker: unknown server OS` in your terminal.
2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.

3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!

   Congratulations! You have successfully set up a local development environment, and tested the fix locally.

## 4. Testing our fix

A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.

1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
   `telepresence intercept web --port 8080`

   When prompted for ingress configuration, all default values should be correct as displayed below.

   Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!

## 5. Preview URLs

Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs enable this easy collaboration.

1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.

Now you're able to share your fix in your local environment with your team!

To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.12/quick-start/demo-react.md b/docs/telepresence/2.12/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.12/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
**Contents**
+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.

While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.

## 1. Download the demo cluster archive

1. Download the demo cluster archive (`ambassador-demo-cluster.zip`).

2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).

   This step will also install some dependency packages onto your laptop using npm. You can see those packages at `ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json`.

   ```
   cd ~/Downloads
   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
   cd ambassador-demo-cluster
   ./install.sh
   # type y to install the npm dependencies when asked
   ```

3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
   `kubectl get nodes`

   ```
   $ kubectl get nodes

   NAME               STATUS   ROLES                  AGE     VERSION
   tpdemo-prod-1234   Ready    control-plane,master   5d10h   v1.20.2+k3s1
   ```

4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
   `telepresence status`

   ```
   $ telepresence status

   Root Daemon: Not running
   User Daemon: Not running
   ```

   **macOS users:** If you receive an error when running Telepresence that the developer cannot be verified, open **System Preferences → Security & Privacy → General**. Click **Open Anyway** at the bottom to bypass the security block. Then retry the `telepresence status` command.

   You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!

## 2. Test Telepresence

[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.

1. Connect to the cluster (this requires **root** privileges and will ask for your password):
   `telepresence connect`

   ```
   $ telepresence connect

   Launching Telepresence Daemon
   ...
   Connected to context default (https://<cluster public IP>)
   ```

2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
   `curl -ik https://kubernetes.default`

   ```
   $ curl -ik https://kubernetes.default

   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...
   ```

   The 401 response is expected. What's important is that you were able to contact the API.

   Congratulations! You've just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you're able to use any tool that you have locally to connect to any service in the cluster.

## 3. Set up the sample application

Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we'll show you how Telepresence can give you a fast development loop, even in this situation.

1. Clone the emojivoto app:
   `git clone https://github.com/datawire/emojivoto.git`

1. Deploy the app to your cluster:
   `kubectl apply -k emojivoto/kustomize/deployment`

1. Change the kubectl namespace:
   `kubectl config set-context --current --namespace=emojivoto`

1. List the Services:
List the Services:
`kubectl get svc`

   ```
   $ kubectl get svc

   NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
   emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
   voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
   web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
   web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
   ```

1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name, in the form `service.namespace`.

    Congratulations, you can now access services running in your cluster by name from your laptop!

## 4. Test app

1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.

1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and notice that the leaderboard does not update; an error also appears in the browser dev console:
`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`

    Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
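
Because Telepresence resolves cluster-internal DNS names for you, you can also reproduce the failing call straight from your terminal. A quick sketch (the URL is the same one the frontend calls):

```shell
# Reproduce the failing vote from your laptop; web-svc.emojivoto is the
# cluster-internal DNS name that Telepresence resolves for you.
curl -i "http://web-svc.emojivoto:8080/api/vote?choice=:doughnut:"

# Expect an "HTTP/1.1 500 Internal Server Error" status line,
# matching the error shown in the browser dev console.
```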
+

The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.

## 5. Run a service on your laptop

Now start up the `web-app` service on your laptop. We'll then make a code change and intercept the service, so that we can see the results of the change immediately.

1. **In a new terminal window**, change into the repo directory and build the application:

   `cd /emojivoto`
   `make web-app-local`

   ```
   $ make web-app-local

   ...
   webpack 5.34.0 compiled successfully in 4326 ms
   ✨  Done in 5.38s.
   ```

2. Change into the service's code directory and start the server:

   `cd emojivoto-web-app`
   `yarn webpack serve`

   ```
   $ yarn webpack serve

   ...
   ℹ 「wds」: Project is running at http://localhost:8080/
   ...
   ℹ 「wdm」: Compiled successfully.
   ```

3. Access the application at [http://localhost:8080](http://localhost:8080) and note that voting for the 🍩 generates the same error as the application deployed in the cluster.

    Victory, your local React server is running a-ok!

## 6. Make a code change
We’ve now set up a local development environment for the app. Next we'll make, and locally test, a code change that improves the experience of voting for 🍩.

1. In the terminal running webpack, stop the server with `Ctrl+c`.

1. In your preferred editor, open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).

1. Run webpack to fully recompile the code, then start the server again:

   `yarn webpack`
   `yarn webpack serve`

1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice that you now see an error message instead, improving the user experience.

## 7. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.

    This command must be run in the terminal window where you ran the script, because the script sets environment variables used to access the demo cluster. Those variables only apply to that terminal session.

1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
`telepresence intercept web-app --namespace emojivoto --port 8080`

   ```
   $ telepresence intercept web-app --namespace emojivoto --port 8080

   Using deployment web-app
   intercepted
       Intercept name: web-app-emojivoto
       State         : ACTIVE
       ...
   ```

2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.

    The web-app Deployment is being intercepted and rerouted to the server on your laptop!

    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
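
When you're done experimenting, you can clean up. A minimal sketch, using the intercept name reported by the `intercept` command above:

```shell
# Stop intercepting web-app; "web-app-emojivoto" matches the
# "Intercept name" shown when the intercept was created.
telepresence leave web-app-emojivoto

# Optionally disconnect from the cluster and stop the local daemons.
telepresence quit
```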
+ +## What's Next? + + diff --git a/docs/telepresence/2.12/quick-start/go.md b/docs/telepresence/2.12/quick-start/go.md new file mode 100644 index 000000000..c926d7b05 --- /dev/null +++ b/docs/telepresence/2.12/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
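
Before digging in, it can help to confirm that your terminal is pointed at the demo cluster. An optional sanity check (the `emojivoto` namespace is an assumption, based on the demo clusters used elsewhere in these docs):

```shell
# Show which cluster your kubectl context currently targets.
kubectl config current-context

# List the Emojivoto services; the namespace is assumed to be "emojivoto".
kubectl get svc -n emojivoto
```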
+

## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services.
Test out the application:

1. Go to the Emojivoto webapp and vote for some emojis.

    If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.

2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting for 🍩 doesn't work.

## 3. Run the Docker container

The bug is in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you need to fix the bug.

1. Run the Docker container locally by running this command inside your local terminal:

   

2. The application is failing due to a small bug in this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:

   ```
   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

   Resolved method descriptor:
   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

   Request metadata to send:
   (empty)

   Response headers received:
   (empty)

   Response trailers received:
   content-type: application/grpc
   Sent 0 requests and received 0 responses
   ERROR:
     Code: Unknown
     Message: ERROR
   ```

3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5:

   ```go
   3 import (
   4 	"context"
   5 	"fmt" // DELETE THIS LINE
   6
   7 	pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
   ```

   then replace line 21:

   ```go
   20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
   21 	return nil, fmt.Errorf("ERROR")
   22 }
   ```
   with
   ```go
   20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
   21 	return pS.vote(":doughnut:")
   22 }
   ```
   Then save the file (`Ctrl+S` on Windows, `Cmd+S` on Mac, or `Menu -> File -> Save`) and verify that the error is fixed:

   ```
   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

   Resolved method descriptor:
   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

   Request metadata to send:
   (empty)

   Response headers received:
   content-type: application/grpc

   Response contents:
   {
   }

   Response trailers received:
   (empty)
   Sent 0 requests and received 1 response
   ```

## 4. Telepresence intercept

1. Now that the bug is fixed, you can use Telepresence to route *all* of the service's traffic through your local copy.
Run the following command inside the container:

   ```
   $ telepresence intercept voting --port 8081:8080

   Using Deployment voting
   intercepted
       Intercept name         : voting
       State                  : ACTIVE
       Workload kind          : Deployment
       Destination            : 127.0.0.1:8081
       Service Port Identifier: 8080
       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
       Intercepting           : all TCP connections
   ```
   Now go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
You have created an intercept that tells Telepresence where to send traffic: requests to `voting-svc` are now routed to the local Dockerized version of the service, which contains your fix.

    Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!

## 5. Telepresence intercept with a preview URL

Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately, because the preview URL gives you full control over which traffic is routed to it.

1. First, leave the current intercept:

   ```
   $ telepresence leave voting
   ```

2. Then log in to Telepresence:

   

3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.

   

4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

5. Vote for the 🍩 emoji using the preview URL obtained in the previous step, and you will see that the bug is fixed, since that traffic is routed to the fixed version running locally.
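
When you're finished sharing, you can tear things down. A minimal cleanup sketch ("voting" matches the intercept name used in this guide):

```shell
# Remove the preview intercept.
telepresence leave voting

# Optionally log out of Ambassador Cloud.
telepresence logout
```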
+

## What's Next?

 
diff --git a/docs/telepresence/2.12/quick-start/index.md b/docs/telepresence/2.12/quick-start/index.md
new file mode 100644
index 000000000..e0d26fa9e
--- /dev/null
+++ b/docs/telepresence/2.12/quick-start/index.md
@@ -0,0 +1,7 @@
---
description: Telepresence Quick Start.
---

import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding'

 
diff --git a/docs/telepresence/2.12/quick-start/qs-cards.js b/docs/telepresence/2.12/quick-start/qs-cards.js
new file mode 100644
index 000000000..5b68aa4ae
--- /dev/null
+++ b/docs/telepresence/2.12/quick-start/qs-cards.js
@@ -0,0 +1,71 @@
import Grid from '@material-ui/core/Grid';
import Paper from '@material-ui/core/Paper';
import Typography from '@material-ui/core/Typography';
import { makeStyles } from '@material-ui/core/styles';
import { Link as GatsbyLink } from 'gatsby';
import React from 'react';

const useStyles = makeStyles((theme) => ({
  root: {
    flexGrow: 1,
    textAlign: 'center',
    alignItems: 'stretch',
    padding: 0,
  },
  paper: {
    padding: theme.spacing(1),
    textAlign: 'center',
    color: 'black',
    height: '100%',
  },
}));

export default function CenteredGrid() {
  const classes = useStyles();

  return (
+
 
 
 
 
 
 Collaborating
 
 
 
 Use preview URLs to collaborate with your colleagues and others
 outside of your organization.
 
 
 
 
 
 
 
 Outbound Sessions
 
 
 
 While connected to the cluster, your laptop can interact with
 services as if it was another pod in the cluster.
 
 
 
 
 
 
 
 FAQs
 
 
 
 Learn more about use cases and the technical implementation of
 Telepresence.
 
 
 
 
+
  );
}
diff --git a/docs/telepresence/2.12/quick-start/qs-go.md b/docs/telepresence/2.12/quick-start/qs-go.md
new file mode 100644
index 000000000..2e140f6a7
--- /dev/null
+++ b/docs/telepresence/2.12/quick-start/qs-go.md
@@ -0,0 +1,396 @@
---
description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty test cluster."
---

import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards26 from './qs-cards'



# Telepresence Quick Start - **Go**
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server later in this guide. Install it by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`

   ```
   $ go get github.com/pilu/fresh

   $ $GOPATH/bin/fresh

   ...
   10:23:41 app       | Welcome to the DataProcessingGoService!
   ```

    Install Go from here and set your GOPATH if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

    Victory, your local Go server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
       Intercept name: dataprocessingservice
       State         : ACTIVE
       Workload kind : Deployment
       Destination   : 127.0.0.1:3000
       Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.12/quick-start/qs-java.md b/docs/telepresence/2.12/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.12/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.12/quick-start/qs-node.md b/docs/telepresence/2.12/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.12/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.12/quick-start/qs-python-fastapi.md b/docs/telepresence/2.12/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.12/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+Python 2.x: `pip install fastapi uvicorn requests && python app.py` +Python 3.x: `pip3 install fastapi uvicorn requests && python3 app.py` + + ``` + $ pip install fastapi uvicorn requests && python app.py + + Collecting fastapi + ... + Application startup complete. + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local service is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
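+
+To double-check from the command line as well, you can curl the local service again (a quick sketch; the endpoint and port are the same ones used earlier in this guide):
+
+```shell
+# The auto-reloaded local server should now report the new color
+curl localhost:3000/color
+# "orange"
+```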
+
+## 7. Create a Preview URL
+
+Create a personal intercept with a preview URL, meaning that only
+traffic coming from the preview URL will be intercepted, so you can
+easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and
+   sharing preview URLs:
+
+   ```console
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+   If you are in an environment where Telepresence cannot launch a
+   local browser for you to interact with, you will need to pass the
+   [`--apikey` flag to `telepresence
+   login`](../../reference/client/login/).
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   Then, when asked for the port, type `8080`; for "use TLS", type `n`; and finally confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how requests enter
+   your cluster. Please Select the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+        [default: dataprocessingservice.default]: verylargejavaservice.default
+
+   2/4: What's your ingress' TCP port number?
+
+        [default: 80]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+        [default: n]:
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+        [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name  : dataprocessingservice
+   State           : ACTIVE
+   Workload kind   : Deployment
+   Destination     : 127.0.0.1:3000
+   Intercepting    : HTTP requests that match all of:
+     header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+   Preview URL     : https://<random-subdomain>.preview.edgestack.me
+   Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.12/quick-start/qs-python.md b/docs/telepresence/2.12/quick-start/qs-python.md
new file mode 100644
index 000000000..02ad7de97
--- /dev/null
+++ b/docs/telepresence/2.12/quick-start/qs-python.md
@@ -0,0 +1,392 @@
+---
+description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards26 from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Python (Flask)**
+
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+   ```
+   $ pip install flask requests && python app.py
+
+   Collecting flask
+   ...
+   Welcome to the DataServiceProcessingPythonService!
+   ...
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
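+
+You can also verify the change from a terminal before checking the browser (a quick sketch, reusing the endpoint curled earlier):
+
+```shell
+# The auto-reloaded Flask server should now report the new color
+curl localhost:3000/color
+# "orange"
+```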
+
+## 7. Create a Preview URL
+
+Create a personal intercept with a preview URL, meaning that only
+traffic coming from the preview URL will be intercepted, so you can
+easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and
+   sharing preview URLs:
+
+   ```console
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+   If you are in an environment where Telepresence cannot launch a
+   local browser for you to interact with, you will need to pass the
+   [`--apikey` flag to `telepresence
+   login`](../../reference/client/login/).
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   Then, when asked for the port, type `8080`; for "use TLS", type `n`; and finally confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how requests enter
+   your cluster. Please Select the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+        [default: dataprocessingservice.default]: verylargejavaservice.default
+
+   2/4: What's your ingress' TCP port number?
+
+        [default: 80]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+        [default: n]:
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+        [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name  : dataprocessingservice
+   State           : ACTIVE
+   Workload kind   : Deployment
+   Destination     : 127.0.0.1:3000
+   Intercepting    : HTTP requests that match all of:
+     header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+   Preview URL     : https://<random-subdomain>.preview.edgestack.me
+   Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+ + diff --git a/docs/telepresence/2.12/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.12/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.12/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.12/redirects.yml b/docs/telepresence/2.12/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.12/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.12/reference/architecture.md b/docs/telepresence/2.12/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.12/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as its main point of communication with the
+cluster's network, both for regular traffic to the cluster and for intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the
+existing open source User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment and, if it is missing, attempts to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should be run on, and to provide configuration similar to what would be provided when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews: CI builds a development image, the Pod-Daemon automatically creates an
+intercept for it, and the resulting Preview URL is posted to the associated GitHub pull request, so changes from a pull request can be
+quickly visualized in a live cluster before they land.
+
+See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews,
+or for a reference on how the Pod-Daemon can be manually deployed to the cluster.
+
+
+## Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions
+to add and modify deployments in the cluster.
diff --git a/docs/telepresence/2.12/reference/client.md b/docs/telepresence/2.12/reference/client.md
new file mode 100644
index 000000000..478e07cef
--- /dev/null
+++ b/docs/telepresence/2.12/reference/client.md
@@ -0,0 +1,31 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
+| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI and the Traffic Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
diff --git a/docs/telepresence/2.12/reference/client/login.md b/docs/telepresence/2.12/reference/client/login.md
new file mode 100644
index 000000000..fc90ea385
--- /dev/null
+++ b/docs/telepresence/2.12/reference/client/login.md
@@ -0,0 +1,61 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud).
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the interactive `telepresence login` procedure
+as necessary, so running `telepresence login` explicitly is rarely
+needed; the main case for doing so is when you require a
+non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs an enhanced
+Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts
+from Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/ .
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
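+
+As a concrete sketch of a non-interactive login (the environment variable name is hypothetical; substitute wherever your CI system stores the key generated above):
+
+```shell
+# Authenticate without a browser, e.g. in CI or on a headless host
+telepresence login --apikey="$TELEPRESENCE_API_KEY"
+```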
\ No newline at end of file
diff --git a/docs/telepresence/2.12/reference/client/login/apikey-2.png b/docs/telepresence/2.12/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/2.12/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/2.12/reference/client/login/apikey-3.png b/docs/telepresence/2.12/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/2.12/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/2.12/reference/client/login/apikey-4.png b/docs/telepresence/2.12/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/2.12/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/2.12/reference/cluster-config.md b/docs/telepresence/2.12/reference/cluster-config.md
new file mode 100644
index 000000000..087bbf9af
--- /dev/null
+++ b/docs/telepresence/2.12/reference/cluster-config.md
@@ -0,0 +1,363 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).
+
+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.
+
+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.
+
+### Values
+To add configuration, create a yaml file with the configuration values and then apply it by executing `telepresence helm install [--upgrade] --values <path to yaml file>`.
+
+## Client Configuration
+
+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts. It controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value | Resulting action |
+|--------------|------------------------------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
+| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
+| `http` | Telepresence will use http |
+| `http2` | Telepresence will use http2 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols
+are recognized:
+
+| Protocol | Meaning |
+|----------|---------------------------------------|
+| `http` | Plaintext HTTP/1.1 traffic |
+| `http2` | Plaintext HTTP/2 traffic |
+| `https` | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc` | Same as http2 |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Envoy Configuration
+
+The `agent.envoy` structure contains three values:
+
+| Setting | Meaning |
+|--------------|----------------------------------------------------------|
+| `logLevel` | Log level used by the Envoy proxy. Defaults to "warning" |
+| `serverPort` | Port used by the Envoy server. Default 18000. |
+| `adminPort` | Port used for Envoy administration. Default 19000. |
+
+#### Image Configuration
+
+The `agent.image` structure contains the following values:
+
+| Setting | Meaning |
+|------------|-----------------------------------------------------------------------------|
+| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". |
+| `name` | The name of the image. Retrieved from Ambassador Cloud if not set. |
+| `tag` | The tag of the image. Retrieved from Ambassador Cloud if not set. |
+
+#### Log level
+
+The `agent.LogLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.
+
+#### Resources
+
+The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.
+
+## TLS
+
+In this example, other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`) you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
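+
+As a minimal sketch, the Secrets named by the annotations above could be created with `kubectl` (the certificate file names are illustrative):
+
+```shell
+# Server certificate for the inject-terminating-tls-secret annotation
+kubectl create secret tls your-terminating-secret --cert=server.crt --key=server.key
+
+# Client certificate for the inject-originating-tls-secret annotation
+kubectl create secret tls your-originating-secret --cert=client.crt --key=client.key
+```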
+ +It is only possible to refer to a Secret that is in the same Namespace +as the Pod. The Secret will be mounted into the traffic agent's container. + +Telepresence understands `type: kubernetes.io/tls` Secrets and +`type: istio.io/key-and-cert` Secrets; as well as `type: Opaque` +Secrets that it detects to be formatted as one of those types. + +## Air-gapped cluster + + +If your cluster is on an isolated network such that it cannot +communicate with Ambassador Cloud, then some additional configuration +is required to acquire a license key in order to use personal +intercepts. + +### Create a license + +1. + +2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*. + +3. You will be prompted for your Cluster ID. Ensure your +kubeconfig context is using the cluster you want to create a license for then +run this command to generate the Cluster ID: + + ``` + $ telepresence current-cluster-id + + Cluster ID: + ``` + +4. Click *Generate API Key* to finish generating the license. + +5. On the licenses page, download the license file associated with your cluster. + +### Add license to cluster +There are two separate ways you can add the license to your cluster: manually creating and deploying +the license secret or having the helm chart manage the secret + +You only need to do one of the two options. + +#### Manual deploy of license secret + +1. Use this command to generate a Kubernetes Secret config using the license file: + + ``` + $ telepresence license -f + + apiVersion: v1 + data: + hostDomain: + license: + kind: Secret + metadata: + creationTimestamp: null + name: systema-license + namespace: ambassador + ``` + +2. Save the output as a YAML file and apply it to your +cluster with `kubectl`. + +3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting +the following into a file (for the example we'll assume it's called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + secret: + # This tells the helm chart not to create the secret since you've created it yourself + create: false + ``` + +4. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) +pulled and in a registry your cluster can pull from. + +6. Have users use the `images` [config key](../config/#images) keys so telepresence uses the aforementioned image for their agent. + +#### Helm chart manages the secret + +1. Get the jwt token from the downloaded license file + + ``` + $ cat ~/Downloads/ambassador.License_for_yourcluster + eyJhbGnotarealtoken.butanexample + ``` + +2. Create the following values file, substituting your real jwt token in for the one used in the example below. +(for this example we'll assume the following is placed in a file called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + # This is the value from the license file you download. this value is an example and will not work + value: eyJhbGnotarealtoken.butanexample + secret: + # This tells the helm chart to create the secret + create: true + ``` + +3. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +Users will now be able to use preview intercepts with the +`--preview-url=false` flag. 
Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Ignore Certain Volume Mounts
+
+An annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.
+
+```diff
+ spec:
+   template:
+     metadata:
+       annotations:
++        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service is pointing at a port number, in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+ + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.12/reference/config.md b/docs/telepresence/2.12/reference/config.md new file mode 100644 index 000000000..e69c77daa --- /dev/null +++ b/docs/telepresence/2.12/reference/config.md @@ -0,0 +1,349 @@ +# Laptop-side configuration + +There are a number of configuration values that can be tweaked to change how Telepresence behaves. +These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user. +One important exception is the location of the traffic manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect. + +## Global Configuration + +Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager. +To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set. + +### Values + +The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`. + +Here is an example configuration to show you the conventions of how Telepresence is configured: +**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist** + +```yaml +client: + timeouts: + agentInstall: 1m + intercept: 10s + logLevels: + userDaemon: debug + images: + registry: privateRepo # This overrides the default docker.io/datawire repo + agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting + cloud: + refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week. + grpc: + maxReceiveSize: 10Mi + telepresenceAPI: + port: 9980 + dns: + includeSuffixes: [.private] + excludeSuffixes: [.se, .com, .io, .net, .org, .ru] + lookupTimeout: 30s + routing: + alsoProxySubnets: + - 1.2.3.4/32 + neverProxySubnets: + - 1.2.3.4/32 +``` + +#### Timeouts + +Values for `client.timeouts` are all durations either as a number of seconds +or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings +can be fractional (`1.5h`) or combined (`2h45m`). 
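+
+As an illustrative sketch, the duration formats above could be combined in a values file and applied with `telepresence helm install` (the field names come from the table below; the values are arbitrary examples):
+
+```shell
+cat > timeout-values.yaml <<EOF
+client:
+  timeouts:
+    agentInstall: 120   # plain number of seconds
+    intercept: 7.5s     # fractional duration string
+    helm: 2m30s         # combined duration string
+EOF
+telepresence helm install --upgrade --values timeout-values.yaml
+```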
+ +These are the valid fields for the `timeouts` key: + +| Field | Description | Type | Default | +|-------------------------|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|------------| +| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes | +| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute | +| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds | +| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds | +| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds | +| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds | +| `trafficManagerAPI` | Waiting for connection to the gPRC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds | +| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes | + +#### Log Levels + +Values for the `client.logLevels` fields are one of the following strings, +case-insensitive: + + - `trace` + - `debug` + - `info` + - `warning` or `warn` + - `error` + +For whichever log-level you select, you will get logs labeled with that level and of higher severity. +(e.g. if you use `info`, you will also get logs labeled `error`. You will NOT get logs labeled `debug`. + +These are the valid fields for the `client.logLevels` key: + +| Field | Description | Type | Default | +|--------------|---------------------------------------------------------------------|---------------------------------------------|---------| +| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug | +| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info | + +#### Images +Values for `client.images` are strings. These values affect the objects that are deployed in the cluster, +so it's important to ensure users have the same configuration. + +Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them +from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook) +to handle installation of the `traffic-agents`. 
These are the valid fields for the `client.images` key:

| Field | Description | Type | Default |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------|
| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |

#### Cloud

Values for `client.cloud` are listed below and their types vary, so please see the table for the expected type for each config value. These fields control how the client interacts with the Cloud service.

| Field | Description | Type | Default |
|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------|
| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false |
| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config. | [duration][go-duration] [string][yaml-str] | 168h |
| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |

Telepresence attempts to auto-detect if the cluster is capable of communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled as well, or would like it to not attempt to communicate with Ambassador Cloud at all (even for the auto-detection), then be sure to set the `skipLogin` value to `true`.

Reminder: To use personal intercepts, which normally require a login, you must have a license key in your cluster and specify which `agentImage` should be installed by also adding the following to your `config.yml`:

```yaml
images:
  agentImage: <registry>/<image-name>
```

#### Grpc

The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:

```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server

The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### Intercept

The `intercept` config controls how Telepresence intercepts the communications to the intercepted service.

The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".

The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:

| Value | Resulting action |
|--------------|----------------------------------------------------------------------------------------------------------|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2 |
| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http` | Telepresence will use http |
| `http2` | Telepresence will use http2 |

When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:

| Protocol | Meaning |
|----------|---------------------------------------|
| `http` | Plaintext HTTP/1.1 traffic |
| `http2` | Plaintext HTTP/2 traffic |
| `https` | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc` | Same as http2 |

#### Daemons

`client.daemons` controls which binary to use for the user daemon. By default it will use the Telepresence binary. For example, this can be used to tell Telepresence to use the Telepresence Pro binary.

#### DNS

The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.

| Field | Description | Type | Default |
|-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|--------------------------------------------------------------------------|
| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
| `lookupTimeout` | Maximum time to wait for a cluster side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |

Here is an example values.yaml:

```yaml
client:
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    localIP: 8.8.8.8
    lookupTimeout: 30s
```

### Routing

#### AlsoProxySubnets

When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device. All connections to addresses that the subnet spans will be dispatched to the cluster.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    alsoProxySubnets:
    - 1.2.3.4/32
```

#### NeverProxySubnets

When using `neverProxySubnets` you provide a list of subnets. These will never be routed via the TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before Telepresence connects is the route they will keep.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    neverProxySubnets:
    - 1.2.3.4/32
```

#### Using AlsoProxy together with NeverProxy

Never-proxy and also-proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply; usually this means that the most specific route will win.

So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:

```yaml
neverProxySubnets: [10.0.0.0/16]
alsoProxySubnets: [10.0.5.0/24]
```

Then the specific `alsoProxySubnets` subnet `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:

```yaml
alsoProxySubnets: [10.0.0.0/16]
neverProxySubnets: [10.0.5.0/24]
```

Then all of the `alsoProxySubnets` subnet `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` subnet `10.0.5.0/24`.

## Local Overrides

In addition, it is possible to override each of these variables at the local level by setting up new values in local config files. There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.

### Config for all clusters

Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with. The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys. The definitions of these values are identical to those values in the `client` config above.

Here is an example configuration to show you the conventions of how Telepresence is configured:
**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```

## Workstation Per-Cluster Configuration

Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file. It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior. An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.

### Values

The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
Example kubeconfig:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
        dns:
          include-suffixes: [.private]
          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
          local-ip: 8.8.8.8
          lookup-timeout: 30s
        never-proxy: [10.0.0.0/16]
        also-proxy: [10.0.5.0/24]
  name: example-cluster
```

#### Manager

This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to the Traffic Manager. When it differs from the default, this setting needs to be configured in the workstation's kubeconfig for the cluster.

The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45

diff --git a/docs/telepresence/2.12/reference/dns.md b/docs/telepresence/2.12/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/2.12/reference/dns.md
@@ -0,0 +1,80 @@

# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster-url>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)
  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app
```

This is expected, as Telepresence cannot reach the service by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  <!DOCTYPE html>
  ...
  <title>Emoji Vote</title>
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
    Intercept name    : web
    State             : ACTIVE
    Workload kind     : Deployment
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /tmp/telfs-166119801
    Intercepting      : HTTP requests that match all headers:
      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  <!DOCTYPE html>
  ...
  <title>Emoji Vote</title>
  ...
```

Now curling that service by its short name works, and will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.

### Supported Query Types

The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`, `MX`, `NS`, `PTR`, `SRV`, and `TXT`.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.

diff --git a/docs/telepresence/2.12/reference/docker-run.md b/docs/telepresence/2.12/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.12/reference/docker-run.md
@@ -0,0 +1,31 @@

---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept <service-name> --port <port> --docker-run -- <docker-run-arguments> <image>`

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.

`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`

## Ports

The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container ports can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <local-port>` syntax.

## Flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces
- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<intercept-name>-<intercept-namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-p <port>:<container-port>` The local port for the intercept and the container port
- `-v <local-mount-dir>:<container-mount-dir>` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info

diff --git a/docs/telepresence/2.12/reference/environment.md b/docs/telepresence/2.12/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.12/reference/environment.md
@@ -0,0 +1,46 @@

---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept. You can then use these variables with the code for the intercepted service that runs on your laptop.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT

Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS

Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER

The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID

ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do can instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
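As a rough sketch of that idea, an intercept handler started via `telepresence intercept ... -- <command>` could branch on the variable (the echo statements are placeholders for real consumer logic):

```bash
#!/usr/bin/env bash
if [ -n "$TELEPRESENCE_INTERCEPT_ID" ]; then
  # Intercepting process: only handle messages tagged for this intercept.
  echo "consume messages where x-intercept-id == $TELEPRESENCE_INTERCEPT_ID"
else
  # Regular process: ignore messages tagged for any intercept.
  echo "consume messages without an x-intercept-id header"
fi
```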
diff --git a/docs/telepresence/2.12/reference/inside-container.md b/docs/telepresence/2.12/reference/inside-container.md
new file mode 100644
index 000000000..48a38b5a3
--- /dev/null
+++ b/docs/telepresence/2.12/reference/inside-container.md
@@ -0,0 +1,19 @@

# Running Telepresence inside a container

All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a Docker container.

Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software like macFUSE or WinFSP to mount the remote file systems.

The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.

It's highly recommended that you use the new [Intercept Specification](../intercepts/specs) to set things up. That way, Telepresence can do all the plumbing needed to start the intercept handler with the correct environment and volume mounts. Otherwise, doing a fully container-based intercept manually with all the bells and whistles is a complicated process that involves:

- Capturing the details of an intercept
- Ensuring that the [Telemount](https://github.com/datawire/docker-volume-telemount#readme) Docker volume plugin is installed
- Creating volumes for all remotely exposed directories
- Starting the intercept handler container using the same network as the daemon

diff --git a/docs/telepresence/2.12/reference/intercepts/cli.md b/docs/telepresence/2.12/reference/intercepts/cli.md
new file mode 100644
index 000000000..0acd1505d
--- /dev/null
+++ b/docs/telepresence/2.12/reference/intercepts/cli.md
@@ -0,0 +1,314 @@

import Alert from '@material-ui/lab/Alert';

# Configuring intercept using CLI

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the `--namespace` option. When this option is used, and `--workload` is not used, then the given name is interpreted as the name of the workload, and the name of the intercept will be constructed from that name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept `hello-myns`. In order to remove the intercept, you will need to run `telepresence leave hello-myns` instead of just `telepresence leave hello`.

The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is being intercepted; see [this doc](../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command will intercept all traffic bound to the service and proxy it to your laptop.
This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.

```shell
telepresence intercept <service-name> --port=<local-port>
```

If you *are* logged in to Ambassador Cloud, setting the `--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <service-name> --port=<local-port> --preview-url=false
```

This will output an HTTP header that you can set on your request for that traffic to be intercepted:

```console
$ telepresence intercept <service-name> --port=<local-port> --preview-url=false
Using Deployment <deployment-name>
intercepted
    Intercept name: <full-name>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<local-port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<full-name>")
```

Run `telepresence status` to see the list of active intercepts.

```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster-url>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop-username>@<laptop-hostname>
```

Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag | Description | Required |
|------------------|-------------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress | yes |
| `--ingress-port` | The port for the ingress | yes |
| `--ingress-tls` | Whether TLS should be used | no |
| `--ingress-l5` | Whether a different IP address should be used in request headers | no |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see the [Kubernetes documentation][kube-multi-port-services].

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier>
Using Deployment <deployment-name>
intercepted
    Intercept name         : <full-name>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create a new intercept the same way you did above and it will change which service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept.
But if you use something like [Argo](https://www.getambassador.io/docs/argo/latest/), there may be two services (that use the same labels) to manage traffic between a canary and a stable service.

Fortunately, if you know which service you want to use when intercepting a workload, you can use the `--service` flag. So in the aforementioned example, if you wanted to use the `echo-stable` service when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generated-hash> --port 3000 --service echo-stable
Using ReplicaSet echo-rollout-<generated-hash>
intercepted
    Intercept name    : echo-rollout-<generated-hash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this by creating more than one intercept that identifies the same workload using the `--workload` flag.

Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both targeting the same `multi-echo` deployment.

```console
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-http
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8080
    Service Port Identifier: http
    Volume Mount Point     : /tmp/telfs-893700837
    Intercepting           : all TCP requests
    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-grpc
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8443
    Service Port Identifier: grpc
    Volume Mount Point     : /tmp/telfs-1277723591
    Intercepting           : all TCP requests
    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application container; they usually provide auxiliary functionality to an application, and can usually be reached at `localhost:${SIDECAR_PORT}`. For example, a common use case for a sidecar is to proxy requests to a database: your application would connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then connect to the database, perhaps augmenting the connection with TLS or authentication.

When intercepting a container that uses sidecars, you might want those sidecars' ports to be available to your local application at `localhost:${SIDECAR_PORT}`, exactly as they would be if running in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
Using Deployment <deployment-name>
intercepted
    Intercept name         : <full-name>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the flag (`--to-pod=<port1> --to-pod=<port2>`).
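For example, a sketch with two hypothetical sidecar ports (a database proxy on 5432 and a cache on 6379):

```console
$ telepresence intercept my-service --port 8080 --to-pod=5432 --to-pod=6379
```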
## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services), which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
      - name: my-headless
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
        resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

<Alert severity="info">
This utilizes an initContainer that requires `NET_ADMIN` capabilities. If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
</Alert>

<Alert severity="info">
This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
</Alert>

<Alert severity="info">
Intercepting headless services without a selector is not supported.
</Alert>

## Sharing intercepts with teammates

Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts), picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name; once that is done, the intercept command will be easily accessible for all your teammates. Note that this requires the free enhanced client to be installed and to be logged in (`telepresence login`).

To instantiate an intercept based on a saved intercept, simply run `telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a saved intercept in Ambassador Cloud and will use it if found; otherwise an error will be returned.

Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).

diff --git a/docs/telepresence/2.12/reference/intercepts/index.md b/docs/telepresence/2.12/reference/intercepts/index.md
new file mode 100644
index 000000000..5b317aeec
--- /dev/null
+++ b/docs/telepresence/2.12/reference/intercepts/index.md
@@ -0,0 +1,61 @@

import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, the Telepresence Traffic Manager ensures that a Traffic Agent has been injected into the intercepted workload. The injection is triggered by a Kubernetes Mutating Webhook and will only happen once.
The Traffic Agent is responsible for redirecting intercepted traffic to the developer's workstation.

An intercept is either global or personal.

### Global intercept

This intercept will intercept all `tcp` and/or `udp` traffic to the intercepted service and send all of that traffic down to the developer's workstation. This means that a global intercept will affect all users of the intercepted service.

### Personal intercept

This intercept will intercept specific HTTP requests, allowing other HTTP requests through to the regular service. The selection is based on HTTP headers or paths, and allows for intercepts which only intercept traffic tagged as belonging to a given developer.

There are two ways of configuring an intercept:
- one from the [CLI](./cli) directly
- one from an [Intercept Specification](./specs)

## Intercept behavior when using single-user versus team mode

Switching the Traffic Manager from `single-user` mode to `team` mode changes the Telepresence defaults in two ways.

First, in team mode, Telepresence will require that the user is logged in to Ambassador Cloud, or is using an api-key. Team mode also causes Telepresence to default to a personal intercept using `--http-header=auto --http-path-prefix=/`. Personal intercepts are important for working in a shared cluster with teammates, and are important for the preview URL functionality below. See `telepresence intercept --help` for information on using the `--http-header` and `--http-path-xxx` flags to customize which requests are intercepted.

Secondly, team mode causes Telepresence to default to `--preview-url=true`. This tells Telepresence to take advantage of Ambassador Cloud to create a preview URL for this intercept, creating a shareable URL that automatically sets the appropriate headers to have requests coming from the preview URL be intercepted.

## Supported workloads

Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting (installing a traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.

<Alert severity="info">
While many of our examples use Deployments, they would also work on ReplicaSets and StatefulSets.
</Alert>

diff --git a/docs/telepresence/2.12/reference/intercepts/manual-agent.md b/docs/telepresence/2.12/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.12/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@

import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such as very strict company security policies preventing it.

<Alert severity="warning">
Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable; try the Mutating Webhook before proceeding.
</Alert>

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page uses the following Deployment and Service.
It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
      - name: echo-container
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
  - port: 80
    targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file have the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service-config.yaml
agentImage: docker.io/datawire/tel2:2.7.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.7.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.7.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```

<Alert severity="info">
Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
</Alert>

### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the modified `Deployment`.
If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You need to modify the `Deployment` YAML to include the container and volume YAML you generated. These are placed as elements of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively. You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`. These changes should look like the following:

```diff
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: "my-service"
   labels:
     service: my-service
 spec:
   replicas: 1
   selector:
     matchLabels:
       service: my-service
   template:
     metadata:
       labels:
         service: my-service
+      annotations:
+        telepresence.getambassador.io/manually-injected: "true"
     spec:
       containers:
       - name: echo-container
         image: jmalloc/echo-server
         ports:
         - containerPort: 8080
         resources: {}
+      - args:
+        - agent
+        env:
+        - name: _TEL_AGENT_POD_IP
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: status.podIP
+        image: docker.io/datawire/tel2:2.7.0-beta.12
+        name: traffic-agent
+        ports:
+        - containerPort: 9900
+          protocol: TCP
+        readinessProbe:
+          exec:
+            command:
+            - /bin/stat
+            - /tmp/agent/ready
+        resources: { }
+        volumeMounts:
+        - mountPath: /tel_pod_info
+          name: traffic-annotations
+        - mountPath: /etc/traffic-agent
+          name: traffic-config
+        - mountPath: /tel_app_exports
+          name: export-volume
+      initContainers:
+      - args:
+        - agent-init
+        image: docker.io/datawire/tel2:2.7.0-beta.12
+        name: tel-agent-init
+        resources: { }
+        securityContext:
+          capabilities:
+            add:
+            - NET_ADMIN
+        volumeMounts:
+        - mountPath: /etc/traffic-agent
+          name: traffic-config
+      volumes:
+      - downwardAPI:
+          items:
+          - fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.annotations
+            path: annotations
+        name: traffic-annotations
+      - configMap:
+          items:
+          - key: my-service
+            path: config.yaml
+          name: telepresence-agents
+        name: traffic-config
+      - emptyDir: { }
+        name: export-volume
```

diff --git a/docs/telepresence/2.12/reference/intercepts/specs.md b/docs/telepresence/2.12/reference/intercepts/specs.md
new file mode 100644
index 000000000..92ab4c6c9
--- /dev/null
+++ b/docs/telepresence/2.12/reference/intercepts/specs.md
@@ -0,0 +1,338 @@

# Configuring intercept using specifications

This page references the different options available to the Telepresence intercept specification.

With Telepresence, you can provide a file to define how an intercept should work.

## Specification

Your intercept specification is where you can create a standard, easy-to-use configuration to easily run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic.

There are many ways to configure your specification to suit your needs; the table below shows the possible options within your specification, and you can see the spec's schema, with all available options and formats, [here](#ide-integration).
| Options | Description |
|-------------------------------------------|----------------------------------------------------------------------------------------------------------|
| [name](#name) | Name of the specification. |
| [connection](#connection) | Connection properties to use when Telepresence connects to the cluster. |
| [handlers](#handlers) | Local processes to handle traffic and/or setup. |
| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete. |
| [workloads](#workloads) | Remote workloads that are intercepted, keyed by workload name. |

### Name

The name is optional. If you don't specify the name, it will use the filename of the specification file.

```yaml
name: echo-server-spec
```

### Connection

The connection option is used to define how Telepresence connects to your cluster.

```yaml
connection:
  context: "shared-cluster"
  mappedNamespaces:
  - "my_app"
```

You can pass the most common parameters from the `telepresence connect` command (`telepresence connect --help`) using a camel case format.

Some of the most commonly used options include:

| Options | Type | Format | Description |
|------------------|-------------|-------------------------|----------------------------------------------------------|
| context | string | N/A | The kubernetes context to use |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with |

### Handlers

A handler is code running locally.

It can receive traffic for an intercepted service, or can set up prerequisites to run before/after the intercept itself.

When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third-party service, ...) running on your machine. A handler can be a Docker container, or an application running natively.

The sample below creates an intercept handler, gives it the name `echo-server`, and uses a Docker container. The container will automatically have access to the ports, environment, and mounted directories of the intercepted container.

<Alert severity="info">
The ports field is important for the intercept handler while running in Docker; it indicates which ports should be exposed to the host. If you want to access it locally (to attach a debugger to your container, for example), this field must be provided.
</Alert>

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    docker:
      image: jmalloc/echo-server:latest
      ports:
        - 8080
```

If you don't want to use Docker containers, you can still configure your handlers to start via a regular script. The snippet below shows how to create a handler called echo-server that sets an environment variable of `PORT=8080` and starts the application.

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: bin/echo-server
```

Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example, simulate an intercepted service going down:

```yaml
handlers:
  - name: no-op
```

The table below defines the parameters that can be used within the handlers section.
| Options | Type | Format | Description |
|------------------------|-------------|--------------------------|---------------------------------------------------------------------------------|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler that the intercepts use to reference it |
| environment | map list | N/A | Defines environment variables within your handler |
| environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable |
| environment[*].value | string | N/A | The value for the environment variable |
| [script](#script) | map | N/A | Tells the handler to run as a script; mutually exclusive with docker |
| [docker](#docker) | map | N/A | Tells the handler to run as a Docker container; mutually exclusive with script |

#### Script

The handler's script element defines the parameters:

| Options | Type | Format | Description |
|---------|--------|---------------|--------------------------------------------------------------------------------------------------------------------------------|
| run | string | N/A | The script to run. Can be multi-line |
| shell | string | bash\|zsh\|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable |

#### Docker

The handler's docker element defines the parameters. The `build` and `image` parameters are mutually exclusive:

| Options | Type | Format | Description |
|-----------------|-------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------|
| [build](#build) | map | N/A | Defines how to build the image from source using the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command |
| image | string | image | Defines which image to be used |
| ports | int list | N/A | The ports which should be exposed to the host |
| options | string list | N/A | Options for docker run [options](https://docs.docker.com/engine/reference/commandline/run/#options) |
| command | string | N/A | Optional command to run |
| args | string list | N/A | Optional command arguments |

#### Build

The docker build element defines the parameters:

| Options | Type | Format | Description |
|---------|-------------|--------|----------------------------------------------------------------------------------------------|
| context | string | N/A | Defines either a path to a directory containing a Dockerfile, or a URL to a git repository |
| args | string list | N/A | Additional arguments for the docker build command. |

For additional information on these parameters, please check the Docker [documentation](https://docs.docker.com/engine/reference/commandline/run).

### Prerequisites

When creating an intercept specification there is an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.

`prerequisites` is an array, so it can handle many options prior to starting your intercept and running your intercept handlers. The elements of the `prerequisites` array correspond to [`handlers`](#handlers).

The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts, the second will be run after cleaning up the intercepts.
If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.

```yaml
prerequisites:
  - create: build-binary
    delete: rm-binary
```

The table below defines the parameters available within the prerequisites section.

| Options | Description |
|---------|----------------------------------------------------|
| create | The name of a handler to run before the intercept |
| delete | The name of a handler to run after the intercept |

### Workloads

Workloads define the services in your cluster that will be intercepted.

The example below creates an intercept on a service called `echo-server` on port 8080. It creates a personal intercept with the header `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.

```yaml
workloads:
  # You can define one or more workload(s)
  - name: echo-server
    intercepts:
      # You can define one or more intercept(s)
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server
```

This table defines the parameters available within a workload.

| Options | Type | Format | Description | Default |
|------------|--------------------------------|-------------------------|-----------------------------------------------------|---------|
| name | string | [a-z][a-z0-9-]* | Name of the workload to intercept | N/A |
| namespace | string | [a-z0-9][a-z0-9-]{1,62} | Namespace of workload to intercept | N/A |
| intercepts | [intercept](#intercepts) list | N/A | The list of intercepts associated to the workload | N/A |

#### Intercepts

This table defines the parameters available for each intercept.

| Options | Type | Format | Description | Default |
|------------|-------------------------|----------------------|-------------------------------------------------------------------------|----------------|
| enabled | boolean | N/A | If set to false, disables this intercept. | true |
| headers | [header](#header) list | N/A | Headers that will filter the intercept. | Auto generated |
| service | name | [a-z][a-z0-9-]{1,62} | Name of service to intercept | N/A |
| localPort | integer\|string | 0-65535 | The port for the service which is intercepted | N/A |
| port | integer | 0-65535 | The port the service in the cluster is running on | N/A |
| pathPrefix | string | N/A | Path prefix filter for the intercept. Defaults to "/" | / |
| previewURL | boolean | N/A | Determines if a preview URL should be created | true |
| banner | boolean | N/A | Used in the preview URL option; displays a banner on the preview page | true |

##### Header

You can define headers to filter the requests which should end up on your machine when intercepting.
| Options | Type | Format | Description | Default |
|---------|--------|--------|----------------------|---------|
| name | string | N/A | Name of the header | N/A |
| value | string | N/A | Value of the header | N/A |

Telepresence specs also support dynamic headers with **variables**:

```yaml
intercepts:
  - headers:
      - name: test-{{ .Telepresence.Username }}
        value: "{{ .Telepresence.Username }}"
```

| Options | Type | Description |
|-----------------------|--------|-------------------------------------------|
| Telepresence.Username | string | The name of the user running the spec |

### Running your specification

After you've written your intercept specification you will want to run it.

To start your intercept, use this command:

```bash
telepresence intercept run <path-to-spec-file>
```

This will validate and run your spec. In case you just want to validate it, you can do so by using this command:

```bash
telepresence intercept validate <path-to-spec-file>
```

### Using and sharing your specification as a CRD

If you want to share specifications across your team or your organization, you can save specifications as CRDs inside your cluster.

<Alert severity="info">
The Intercept Specification CRD requires Kubernetes 1.22 or higher; if you are using an older cluster you will need to install using helm directly, and use the --disable-openapi-validation flag.
</Alert>

1. Install the CRD object in your cluster (one-time installation):

   ```bash
   telepresence helm install --crds
   ```

1. Then you need to deploy the specification in your cluster as a CRD:

   ```yaml
   apiVersion: getambassador.io/v1alpha2
   kind: InterceptSpecification
   metadata:
     name: my-crd-spec
     namespace: my-crd-namespace
   spec:
     {intercept specification}
   ```

   So the `echo-server` example looks like this:

   ```bash
   kubectl apply -f - < # See note below
   ```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<suffix>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.

## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator.
The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user        # Update value for appropriate user name
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: telepresence-test-developer
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace only telepresence user access

The following RBAC configuration is for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to one or more specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

For each accessible namespace:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user           # Update value for appropriate user name
  namespace: tp-namespace # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
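Before handing a scoped setup like this to users, you can sanity-check the granted permissions with `kubectl auth can-i` and service-account impersonation. A minimal sketch, assuming the `tp-user`/`tp-namespace` names from the example above:

```bash
# Expect "yes": the Role grants read access to workloads in the namespace
kubectl auth can-i list deployments.apps \
  --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace

# Expect "no": the Role grants no write access
kubectl auth can-i delete deployments.apps \
  --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
```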
diff --git a/docs/telepresence/2.12/reference/restapi.md b/docs/telepresence/2.12/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.12/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.

## Querying the server
On the cluster side, it is the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: = ` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active
2. The "/healthz" endpoint must respond with OK
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or to the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.

#### test endpoint using curl
Assuming that the API-server runs on port 9980, and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.

### intercept-info
`http://localhost:/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted when `intercepted` is `false`.

#### test endpoint using curl
Assuming that the API-server runs on port 9980, and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
```
diff --git a/docs/telepresence/2.12/reference/routing.md b/docs/telepresence/2.12/reference/routing.md new file mode 100644 index 000000000..cc88490a0 --- /dev/null +++ b/docs/telepresence/2.12/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing

## Outbound

### DNS resolution
When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
[cluster DNS configuration](../cluster-config/#dns).

#### Cluster side DNS lookups
The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single-label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. Example:

```bash
curl some-host
```
results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subject to recursion.

#### DNS recursion
When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this detection to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.

The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.

##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.


2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.12/reference/tun-device.md b/docs/telepresence/2.12/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.12/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device
The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP
The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).

### No SSH required

The VIF approach is somewhat similar to using `sshuttle` but without
any requirements for extra software, configuration or connections.
Using the VIF means that only one single connection needs to be
forwarded through the Kubernetes apiserver (à la `kubectl
port-forward`), using only one single port. There is no need for
`ssh` in the client nor for `sshd` in the traffic-manager. This also
means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption
When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.

### No Firewall rules
With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.12/reference/volume.md b/docs/telepresence/2.12/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.12/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
```
telepresence intercept --port --mount=/tmp/ -- /bin/bash
```

In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.

Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.

```
$ telepresence intercept --port --mount=true -- /bin/bash
Using Deployment
intercepted
    Intercept name : 
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
    Intercepting : all TCP connections

bash-3.2$ echo $TELEPRESENCE_ROOT
/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
```

`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.

With either method, paths used by the code you run locally, whether from the subshell or from the intercept command, will need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.

For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.

If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
diff --git a/docs/telepresence/2.12/reference/vpn.md b/docs/telepresence/2.12/reference/vpn.md new file mode 100644 index 000000000..91213babc --- /dev/null +++ b/docs/telepresence/2.12/reference/vpn.md @@ -0,0 +1,155 @@ +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf
```

#### Interpreting test results

##### Case 1: VPN masked by cluster

In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN
routes:

```
❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
  * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
  * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
```

This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
telepresence is connected.

The ideal resolution in this case is to move the pods to a different subnet. This is possible,
for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
to reach hosts inside the VPC's `10.0.0.0/19`.

However, it is not always possible to move the pods to a different subnet.
In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain
hosts from being masked.
This might be particularly important for DNS resolution. In an AWS ClientVPN setup, it is often
customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):

![Modify Client VPN Endpoint](../images/vpn-dns.png)

If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values `, add a `client.routing`
entry like so:

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.2/32
```

##### Case 2: Cluster masked by VPN

In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR
that the VPN routes:

```
❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
  * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
```

Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
connected.

As with the first case, the ideal resolution is to move the pods away, but this may not always
be possible. In that case, your best bet is to attempt to shrink the mask of the VPN's CIDR
(that is, make it route more hosts) so that Telepresence's routes win by virtue of specificity.
One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites)
section for more on split-tunneling).

Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
to use never-proxy rules to whitelist hosts in the VPN:

```
❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
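As noted in Case 1, never-proxy rules are applied through the Helm chart values file. A minimal sketch of applying them to an existing installation, assuming a `values.yaml` like the one shown earlier with the `client.routing.neverProxySubnets` entry:

```console
$ telepresence helm upgrade --values values.yaml
```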
diff --git a/docs/telepresence/2.12/release-notes/no-ssh.png b/docs/telepresence/2.12/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.12/release-notes/run-tp-in-docker.png b/docs/telepresence/2.12/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.2.png b/docs/telepresence/2.12/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.12/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.12/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.12/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.12/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.12/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.12/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.12/release-notes/tunnel.jpg b/docs/telepresence/2.12/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.12/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.12/releaseNotes.yml b/docs/telepresence/2.12/releaseNotes.yml new file mode 100644 index 000000000..2458a7f57 --- /dev/null +++ b/docs/telepresence/2.12/releaseNotes.yml @@ -0,0 +1,2154 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md

items:
  - version: 2.12.2
    date: "2023-04-04"
    notes:
      - type: security
        title: Update Golang build version to 1.20.3
        body: >-
          Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538
  - version: 2.12.1
    date: "2023-03-22"
    notes:
      - type: feature
        title: Additions to gather-logs
        body: >-
          Telepresence now includes the kubeauth logs when running
          the gather-logs command
      - type: bugfix
        title: Airgapped Clusters can once again create personal intercepts
        body: >-
          Telepresence on airgapped clusters regained the ability to use the
          skipLogin config option to bypass login and create personal intercepts.
      - type: bugfix
        title: Environment Variables are now propagated to kubeauth
        body: >-
          Telepresence now propagates environment variables properly
          to the kubeauth-foreground to be used with cluster authentication
  - version: 2.12.0
    date: "2023-03-20"
    notes:
      - type: feature
        title: Intercept spec can build images from source
        body: >-
          Handlers in the Intercept Specification can now specify a build property instead of an image so that
          the image is built when the spec runs.
        docs: reference/intercepts/specs#build
      - type: feature
        title: Improve volume mount experience for Windows and Mac users
        body: >-
          On macOS and Windows platforms, the installation of sshfs or platform-specific FUSE implementations such as macFUSE or WinFSP is
          no longer needed when running an Intercept Specification that uses docker images.
        docs: reference/intercepts/specs
      - type: feature
        title: Check for service connectivity independently from pod connectivity
        body: >-
          Telepresence now enables you to check for a service and pod's connectivity independently, so that it can proxy one without proxying the other.
        docs: https://github.com/telepresenceio/telepresence/issues/2911
      - type: bugfix
        title: Fix cluster authentication when running the telepresence daemon in a docker container.
        body: >-
          Authentication to EKS and GKE clusters has been fixed (k8s >= v1.26)
        docs: https://github.com/telepresenceio/telepresence/pull/3055
      - type: bugfix
        title: The Intercept spec image pattern now allows nested and sha256 images.
        body: >-
          Telepresence Intercept Specifications now handle passing nested images or the sha256 of an image
        docs: https://github.com/telepresenceio/telepresence/issues/3064
      - type: bugfix
        body: >-
          Telepresence will no longer panic when a CNAME does not contain the .svc in it
        title: Fix panic when CNAME of kubernetes.default doesn't contain .svc
        docs: https://github.com/telepresenceio/telepresence/issues/3015
  - version: 2.11.1
    date: "2023-02-27"
    notes:
      - type: bugfix
        title: Multiple architectures
        docs: https://github.com/telepresenceio/telepresence/issues/3043
        body: >-
          The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now
          works for both amd64 and arm64.
      - type: bugfix
        title: Ambassador agent Helm chart duplicates
        docs: https://github.com/telepresenceio/telepresence/issues/3046
        body: >-
          Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD.
  - version: 2.11.0
    date: "2023-02-22"
    notes:
      - type: feature
        title: Intercept specification
        body: >-
          It is now possible to leverage the intercept specification to spin up your environment without extra tools.
      - type: feature
        title: Support for arm64 (Apple Silicon)
        body: >-
          The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as
          multi-architecture images and can run natively on both linux/amd64 and linux/arm64.
      - type: bugfix
        title: Connectivity check can break routing in VPN setups
        docs: https://github.com/telepresenceio/telepresence/issues/3006
        body: >-
          The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently,
          it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity.
      - type: bugfix
        title: VPN routes not detected by telepresence test-vpn on macOS
        docs: https://github.com/telepresenceio/telepresence/pull/3038
        body: >-
          The telepresence test-vpn did not include routes of type link when checking for subnet
          conflicts.
  - version: 2.10.5
    date: "2023-02-06"
    notes:
      - type: change
        title: mTLS secrets mount
        body: >-
          mTLS Secrets will now be mounted into the traffic agent, instead of being expected to be read by it from the API.
          This is only applicable to users of team mode and the proprietary agent.
        docs: reference/cluster-config#tls
      - type: bugfix
        title: Daemon reconnection fix
        body: >-
          Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost.
  - version: 2.10.4
    date: "2023-01-20"
    notes:
      - type: bugfix
        title: Backward compatibility restored
        body: >-
          Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older.
      - type: bugfix
        title: Saved intercepts now work with preview URLs.
        body: >-
          Preview URLs are now included/excluded correctly when using saved intercepts.
  - version: 2.10.3
    date: "2023-01-17"
    notes:
      - type: bugfix
        title: Saved intercepts
        body: >-
          Fixed an issue that caused saved intercepts to not be completely interpreted by telepresence.
      - type: bugfix
        title: Traffic manager restart during upgrade to team mode
        body: >-
          Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode.
        docs: https://github.com/telepresenceio/telepresence/pull/2979
  - version: 2.10.2
    date: "2023-01-16"
    notes:
      - type: bugfix
        title: version consistency in helm commands
        body: >-
          Ensure that CLI and user-daemon binaries are the same version when running
          telepresence helm install or telepresence helm upgrade.
        docs: https://github.com/telepresenceio/telepresence/pull/2975
      - type: bugfix
        title: Release Process
        body: >-
          Fixed an issue that prevented the --use-saved-intercept flag from working.
  - version: 2.10.1
    date: "2023-01-11"
    notes:
      - type: bugfix
        title: Release Process
        body: >-
          Fixed a regex in our release process that prevented 2.10.0 promotion.
  - version: 2.10.0
    date: "2023-01-11"
    notes:
      - type: feature
        title: Team Mode and Single User Mode
        body: >-
          The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts.
      - type: feature
        title: Added `install` and `upgrade` Subcommands to `telepresence helm`
        body: >-
          The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags.
      - type: feature
        title: Added Image Pull Secrets to Helm Chart
        body: >-
          Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`.
      - type: change
        title: Rename Configmap
        body: >-
          The configmap `traffic-manager-clients` has been renamed to `traffic-manager`.
      - type: change
        title: Webhook Namespace Field
        body: >-
          If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`.
        docs: https://github.com/telepresenceio/telepresence/issues/2913
      - type: change
        title: Rename Webhook
        body: >-
          The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster.
      - type: change
        title: OSS Binaries
        body: >-
          The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client.
        docs: https://github.com/telepresenceio/telepresence/pull/2943
      - type: bugfix
        title: Fix Panic Using `--docker-run`
        body: >-
          Telepresence no longer panics when `--docker-run` is combined with `--name ` instead of `--name=`.
        docs: https://github.com/telepresenceio/telepresence/issues/2953
      - type: bugfix
        title: Stop assuming cluster domain
        body: >-
          Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc".
        docs: https://github.com/telepresenceio/telepresence/pull/2959
      - type: bugfix
        title: Uninstall hook timeout
        body: >-
          A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager.
        docs: https://github.com/telepresenceio/telepresence/pull/2937
      - type: bugfix
        title: Uninstall hook check
        body: >-
          The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
+ docs: https://github.com/telepresenceio/telepresence/issues/2954
+ - version: 2.9.5
+ date: "2022-12-08"
+ notes:
+ - type: security
+ title: Update to golang v1.19.4
+ body: >-
+ Apply security updates by updating to golang v1.19.4.
+ docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU
+ - type: bugfix
+ title: GCE authentication
+ body: >-
+ Fixed a regression, introduced in 2.9.3, that prevented the use of GCE authentication unless a config element was also present in the GCE configuration in the kubeconfig.
+ - version: 2.9.4
+ date: "2022-12-02"
+ notes:
+ - type: feature
+ title: Subnet detection strategy
+ body: >-
+ The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs.
+ - type: bugfix
+ title: Fix `--set` flag for `telepresence helm install`
+ body: >-
+ The `telepresence helm` command `--set x=y` flag didn't correctly set values of types other than `string`. The code now uses standard Helm semantics for this flag.
+ - type: bugfix
+ title: Fix `agent.image` setting propagation
+ body: >-
+ Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file.
+ - type: bugfix
+ title: Delay file sharing until needed
+ body: >-
+ Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected.
+ - type: bugfix
+ title: Cleanup on `telepresence quit`
+ body: >-
+ The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called.
+ - type: bugfix
+ title: Watch `config.yml` without panic
+ body: >-
+ The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active.
+ - type: bugfix
+ title: Thread safety
+ body: >-
+ Fixed a race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession.
+ - version: 2.9.3
+ date: "2022-11-23"
+ notes:
+ - type: feature
+ title: Helm options for `livenessProbe` and `readinessProbe`
+ body: >-
+ The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond.
+ - type: change
+ title: Improved network communication
+ body: >-
+ The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon.
+ - type: bugfix
+ title: Root daemon debug logging
+ body: >-
+ Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon.
+ - type: bugfix
+ title: Multivalue flag value propagation
+ body: >-
+ Multi-valued kubernetes flags such as `--as-group` are now propagated correctly.
+ - type: bugfix
+ title: Root daemon stability
+ body: >-
+ The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession.
+ - type: bugfix
+ title: Base DNS resolver
+ body: >-
+ Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied.
+ - version: 2.9.2
+ date: "2022-11-16"
+ notes:
+ - type: bugfix
+ title: Fix panic
+ body: >-
+ Fix panic when connecting to an older traffic-manager.
+ - type: bugfix
+ title: Fix header flag
+ body: >-
+ Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
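+ # Illustrative Helm values fragment exercising the 2.9.3 probe support noted
+ # above. The livenessProbe and readinessProbe key names come from the note;
+ # the nesting and probe values are assumptions, so consult the chart's
+ # values.yaml for the exact layout:
+ #
+ #   livenessProbe:
+ #     initialDelaySeconds: 10
+ #   readinessProbe:
+ #     initialDelaySeconds: 5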
+ - version: 2.9.1
+ date: "2022-11-16"
+ notes:
+ - type: bugfix
+ title: Connect failures due to missing auth provider.
+ body: >-
+ The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed.
+ - version: 2.9.0
+ date: "2022-11-15"
+ notes:
+ - type: feature
+ title: New command to view client configuration.
+ body: >-
+ A new telepresence config view command was added to make it easy to view the current
+ client configuration.
+ docs: new-in-2.9#view-the-client-configuration
+ - type: feature
+ title: Configure Clients using the Helm chart.
+ body: >-
+ The traffic-manager can now configure all clients that connect through the client: map in
+ the values.yaml file.
+ docs: reference/cluster-config#client-configuration
+ - type: feature
+ title: The Traffic Manager version is more visible.
+ body: >-
+ The command telepresence version will now include the version of the traffic manager when
+ the client is connected to a cluster.
+ - type: feature
+ title: Command output in YAML format.
+ body: >-
+ The global --output flag now accepts both yaml and json.
+ docs: new-in-2.9#yaml-output
+ - type: change
+ title: Deprecated status command flag
+ body: >-
+ The telepresence status --json flag is deprecated. Use telepresence status --output=json instead.
+ - type: bugfix
+ title: Unqualified service name resolution in docker.
+ body: >-
+ Unqualified service names now resolve correctly from the docker container when using telepresence intercept --docker-run.
+ docs: https://github.com/telepresenceio/telepresence/issues/2870
+ - type: bugfix
+ title: Output no longer mixes plaintext and json.
+ body: >-
+ Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon",
+ or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted
+ output when using --output=json.
+ docs: https://github.com/telepresenceio/telepresence/issues/2854
+ - type: bugfix
+ title: No more panic when invalid port names are detected.
+ body: >-
+ A `telepresence intercept` of a service with an invalid port name no longer causes a panic.
+ docs: https://github.com/telepresenceio/telepresence/issues/2880
+ - type: bugfix
+ title: Proper errors for bad output formats.
+ body: >-
+ An attempt to use an invalid value for the global --output flag now renders a proper error message.
+ - type: bugfix
+ title: Remove lingering DNS config on macOS.
+ body: >-
+ Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are
+ now removed when a new root daemon starts.
+ - version: 2.8.5
+ date: "2022-11-02"
+ notes:
+ - type: security
+ title: CVE-2022-41716
+ body: >-
+ Updated Golang to 1.19.3 to address CVE-2022-41716.
+ - version: 2.8.4
+ date: "2022-11-02"
+ notes:
+ - type: bugfix
+ title: Release Process
+ body: >-
+ This release resulted in changes to our release process.
+ - version: 2.8.3
+ date: "2022-10-27"
+ notes:
+ - type: feature
+ title: Ability to disable global intercepts.
+ body: >-
+ Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal.
+ docs: https://github.com/telepresenceio/telepresence/issues/2140
+ - type: feature
+ title: Configurable mutating webhook port
+ body: >-
+ The port used for the mutating webhook can be configured using the Helm chart setting
+ agentInjector.webhook.port.
+ docs: install/helm
+ - type: change
+ title: Mutating webhook port defaults to 443
+ body: >-
+ The default port for the mutating webhook is now 443. It used to be 8443.
+ - type: change
+ title: Agent image configuration mandatory in air-gapped environments.
+ body: >-
+ The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is
+ unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart.
+ - type: bugfix
+ title: Can now connect to non-helm installs
+ body: >-
+ telepresence connect now works as long as the traffic manager is installed, even if
+ it wasn't installed via `helm install`.
+ docs: https://github.com/telepresenceio/telepresence/issues/2824
+ - type: bugfix
+ title: test-vpn crash fixed
+ body: >-
+ telepresence test-vpn no longer crashes when the daemons don't start properly.
+ - version: 2.8.2
+ date: "2022-10-15"
+ notes:
+ - type: bugfix
+ title: Reinstate 2.8.0
+ body: >-
+ There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated.
+ - version: 2.8.1
+ date: "2022-10-14"
+ notes:
+ - type: bugfix
+ title: Rollback 2.8.0
+ body: >-
+ Rollback 2.8.0 while we investigate an issue with Ambassador Cloud.
+ - version: 2.8.0
+ date: "2022-10-14"
+ notes:
+ - type: feature
+ title: Improved DNS resolver
+ body: >-
+ The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME,
+ MX, NS, PTR, SRV, and TXT.
+ docs: reference/dns
+ - type: feature
+ title: New `client` structure in Helm chart
+ body: >-
+ A new client struct was added to the Helm chart. It contains a connectionTTL that controls
+ how long the traffic manager will retain a client connection without seeing any sign of life from the client.
+ docs: reference/cluster-config#Client-Configuration
+ - type: feature
+ title: Include and exclude suffixes configurable using the Helm chart.
+ body: >-
+ A dns element was added to the client struct in the Helm chart. It contains an includeSuffixes and
+ an excludeSuffixes value that control which names the DNS resolver in the client will delegate to
+ the cluster.
+ docs: reference/cluster-config#DNS
+ - type: feature
+ title: Configurable traffic-manager API port
+ body: >-
+ The API port used by the traffic-manager is now configurable using the Helm chart value apiPort.
+ The default port is 8081.
+ docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence
+ - type: feature
+ title: Envoy server and admin port configuration.
+ body: >-
+ A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and
+ admin ports of the Envoy proxy running in the enhanced traffic-agent can be configured.
+ docs: reference/cluster-config#Envoy-Configuration
+ - type: change
+ title: Helm chart `dnsConfig` moved to `client.routing`.
+ body: >-
+ The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets
+ and neverProxySubnets can now be found under routing in the client struct.
+ docs: reference/cluster-config#Routing
+ - type: change
+ title: Helm chart `agentInjector.agentImage` moved to `agent.image`.
+ body: >-
+ The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but
+ retained for backward compatibility.
+ docs: reference/cluster-config#Image-Configuration
+ - type: change
+ title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`.
+ body: >-
+ The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old
+ value is deprecated but retained for backward compatibility.
+ docs: reference/cluster-config#Application-Protocol-Selection
+ - type: change
+ title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed.
+ body: >-
+ The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because
+ they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to
+ dedicate an IP for its local resolver.
+ - type: change
+ title: Quit daemons with `telepresence quit -s`
+ body: >-
+ The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with a single option `-s`, which will
+ quit both the root daemon and the user daemon.
+ - type: bugfix
+ title: Environment variable interpolation in pods now works.
+ body: >-
+ Environment variable interpolation now works for all definitions that are copied from pod containers
+ into the injected traffic-agent container.
+ - type: bugfix
+ title: Early detection of namespace conflict
+ body: >-
+ An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation
+ is detected early and prohibited instead of resulting in failing DNS lookups later on.
+ - type: bugfix
+ title: Annoying log message removed
+ body: >-
+ Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason
+ is normal context cancellation.
+ - type: bugfix
+ title: Single name DNS resolution in Docker on Linux host
+ body: >-
+ Single label names now resolve correctly when using Telepresence in Docker on a Linux host.
+ - type: bugfix
+ title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`.
+ body: >-
+ The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStrategy).
+ - version: 2.7.6
+ date: "2022-09-16"
+ notes:
+ - type: feature
+ title: Helm chart resource entries for injected agents
+ body: >-
+ The resources for the traffic-agent container and the optional init container can be
+ specified in the Helm chart using the resources and initResource fields
+ of the agentInjector.agentImage.
+ - type: feature
+ title: Cluster event propagation when injection fails
+ body: >-
+ When the traffic-manager fails to inject a traffic-agent, the cause for the failure is
+ detected by reading the cluster events, and propagated to the user.
+ - type: feature
+ title: FTP-client instead of sshfs for remote mounts
+ body: >-
+ Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
+ an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x
+ and enabled by setting intercept.useFtp to true in the config.yml.
+ - type: change
+ title: Upgrade of winfsp
+ body: >-
+ Telepresence on Windows upgraded winfsp from version 1.10 to 1.11.
+ - type: bugfix
+ title: Removal of invalid warning messages
+ body: >-
+ Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
+ and /proc/self/auxv.
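+ # Illustrative values.yaml fragment for the 2.8.0 client configuration noted
+ # above. The client, connectionTTL, routing, and dns keys come from the notes;
+ # the subnets, suffixes, and TTL value are placeholders:
+ #
+ #   client:
+ #     connectionTTL: 24h
+ #     routing:
+ #       alsoProxySubnets:
+ #         - 10.128.0.0/16
+ #       neverProxySubnets:
+ #         - 192.168.1.0/24
+ #     dns:
+ #       includeSuffixes: [.cluster.local]
+ #       excludeSuffixes: [.com, .io]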
+ - version: 2.7.5
+ date: "2022-09-14"
+ notes:
+ - type: change
+ title: Rollback of release 2.7.4
+ body: >-
+ This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3.
+ - version: 2.7.4
+ date: "2022-09-14"
+ notes:
+ - type: change
+ title: Broken release
+ body: >-
+ This release was broken on some platforms. Use 2.7.6 instead.
+ - version: 2.7.3
+ date: "2022-09-07"
+ notes:
+ - type: bugfix
+ title: PTY for CLI commands
+ body: >-
+ CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+ docker run -it to allocate a TTY and will also give other commands, like bash read, the
+ same behavior as when executed directly in a terminal.
+ docs: https://github.com/telepresenceio/telepresence/issues/2724
+ - type: bugfix
+ title: Traffic Manager useless warning silenced
+ body: >-
+ The traffic-manager will no longer log numerous warnings saying Issuing a
+ systema request without ApiKey or InstallID may result in an error.
+ - type: bugfix
+ title: Traffic Manager useless error silenced
+ body: >-
+ The traffic-manager will no longer log an error saying Unable to derive subnets
+ from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+ subnets from the pod IPs.
+ - version: 2.7.2
+ date: "2022-08-25"
+ notes:
+ - type: feature
+ title: Autocompletion scripts
+ body: >-
+ Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
+ - type: feature
+ title: Connectivity check timeout
+ body: >-
+ The timeout for the initial connectivity check that Telepresence performs
+ in order to determine if the cluster's subnets are proxied or not can now be configured
+ in the config.yml file using timeouts.connectivityCheck. The default timeout was
+ changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+ docs: reference/config#timeouts
+ - type: change
+ title: gather-traces feedback
+ body: >-
+ The command telepresence gather-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: upload-traces feedback
+ body: >-
+ The command telepresence upload-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: gather-traces tracing
+ body: >-
+ The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: CLI log level
+ body: >-
+ The cli.log log is now logged at the same level as the connector.log.
+ docs: reference/config#log-levels
+ - type: bugfix
+ title: Telepresence --help fixed
+ body: >-
+ telepresence --help now works once more even if there's no user daemon running.
+ docs: https://github.com/telepresenceio/telepresence/issues/2735
+ - type: bugfix
+ title: Stream cancellation when no process intercepts
+ body: >-
+ Streams created between the traffic-agent and the workstation are now properly closed
+ when no interceptor process has been started on the workstation. This fixes a potential problem where
+ a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+ and an unresponsive intercept.
+ - type: bugfix
+ title: List command excludes the traffic-manager
+ body: >-
+ The telepresence list command no longer includes the traffic-manager deployment.
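+ # Illustrative config.yml fragment for the 2.7.2 connectivity check timeout
+ # noted above; the timeouts.connectivityCheck key comes from the note, and
+ # 500ms is the default it mentions:
+ #
+ #   timeouts:
+ #     connectivityCheck: 500ms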
+ - version: 2.7.1
+ date: "2022-08-10"
+ notes:
+ - type: change
+ title: Reinstate telepresence uninstall
+ body: >-
+ Reinstate telepresence uninstall, with the --everything flag deprecated.
+ - type: change
+ title: Reduce telepresence helm uninstall
+ body: >-
+ telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+ - type: bugfix
+ title: Auto-connect for telepresence intercept
+ body: >-
+ telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+ - version: 2.7.0
+ date: "2022-08-07"
+ notes:
+ - type: feature
+ title: Saved Intercepts
+ body: >-
+ Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME
+ docs: reference/intercepts#sharing-intercepts-with-teammates
+ - type: feature
+ title: Distributed Tracing
+ body: >-
+ The Telepresence components now collect OpenTelemetry traces.
+ Up to 10MB of trace data are available at any given time for collection from
+ components. telepresence gather-traces is a new command that will collect
+ all that data and place it into a gzip file, and telepresence upload-traces is
+ a new command that will push the gzipped data into an OTLP collector.
+ docs: troubleshooting#distributed-tracing
+ - type: feature
+ title: Helm install
+ body: >-
+ A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+ docs: install/manager
+ - type: feature
+ title: Ignore Volume Mounts
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
+ - type: feature
+ title: telepresence pod-daemon
+ body: >-
+ The Docker image now contains a new program in addition to
+ the existing traffic-manager and traffic-agent: the pod-daemon. The
+ pod-daemon is a trimmed-down version of the user-daemon that is
+ designed to run as a sidecar in a Pod, enabling CI systems to create
+ preview deploys.
+ - type: feature
+ title: Prometheus support for traffic manager
+ body: >-
+ Added Prometheus support to the traffic manager.
+ - type: change
+ title: No install on telepresence connect
+ body: >-
+ The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+ docs: install/manager
+ - type: change
+ title: Helm Uninstall
+ body: >-
+ The command telepresence uninstall has been moved to telepresence helm uninstall.
+ docs: install/manager
+ - type: bugfix
+ title: readOnlyRootFileSystem mounts work
+ body: >-
+ Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`.
+ docs: https://github.com/telepresenceio/telepresence/pull/2666
+ - version: 2.6.8
+ date: "2022-06-23"
+ notes:
+ - type: feature
+ title: Specify Your DNS
+ body: >-
+ The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
+ - type: feature
+ title: Specify a Fallback DNS
+ body: >-
+ Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
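+ # Illustrative CLI usage for the 2.7.0 helm and saved-intercept features noted
+ # above; the saved intercept name variable follows the note, everything else
+ # is a placeholder:
+ #
+ #   telepresence helm install
+ #   telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME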
+ - type: feature
+ title: Intercept UDP Ports
+ body: >-
+ It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost.
+ - type: change
+ title: Additional Helm Values
+ body: >-
+ The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+ - type: bugfix
+ title: Agent Injection Bugfix
+ body: >-
+ Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+ - version: 2.6.7
+ date: "2022-06-22"
+ notes:
+ - type: bugfix
+ title: Persistent Sessions
+ body: >-
+ The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
+ - type: bugfix
+ title: DNS Requests
+ body: >-
+ Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+ - type: bugfix
+ title: Graceful Shutdown
+ body: >-
+ The traffic-agent will properly shut down if one of its goroutines errors.
+ - version: 2.6.6
+ date: "2022-06-09"
+ notes:
+ - type: bugfix
+ title: Env Var `TELEPRESENCE_API_PORT`
+ body: >-
+ The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
+ - type: bugfix
+ title: Double Printing `--output json`
+ body: >-
+ The --output json global flag no longer outputs multiple objects.
+ - version: 2.6.5
+ date: "2022-06-03"
+ notes:
+ - type: feature
+ title: Helm Option -- `reinvocationPolicy`
+ body: >-
+ The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Proxy Certificate
+ body: >-
+ The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Agent Injection
+ body: >-
+ A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
+ docs: install/helm
+ - type: change
+ title: Windows Tunnel Version Upgrade
+ body: >-
+ Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+ - type: change
+ title: Helm Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+ - type: change
+ title: Kubernetes API Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+ - type: feature
+ title: Flag `--watch` Added to `list` Command
+ body: >-
+ Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
+ - type: change
+ title: Deprecated `images.webhookAgentImage`
+ body: >-
+ The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+ - type: bugfix
+ title: Default `reinvocationPolicy` Set to Never
+ body: >-
+ The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
+ - type: bugfix
+ title: UDP
+ body: >-
+ UDP-based communication with services in the cluster now works as expected.
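+ # Illustrative Helm values fragment for the 2.6.5 webhook option noted above.
+ # The reinvocationPolicy name comes from the note, but its exact location in
+ # the chart is an assumption; consult the chart's values.yaml:
+ #
+ #   agentInjector:
+ #     webhook:
+ #       reinvocationPolicy: IfNeeded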
+ - type: bugfix
+ title: Telepresence `--help`
+ body: >-
+ The command help will only show Kubernetes flags on the commands that support them.
+ - type: change
+ title: Error Count
+ body: >-
+ Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+ - version: 2.6.4
+ date: "2022-05-23"
+ notes:
+ - type: bugfix
+ title: Upgrade RBAC Permissions
+ body: >-
+ The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+ - version: 2.6.3
+ date: "2022-05-20"
+ notes:
+ - type: bugfix
+ title: Relative Mount Paths
+ body: >-
+ The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+ - type: bugfix
+ title: Traffic Agent Config
+ body: >-
+ The traffic-agent's configuration now updates automatically when services are added, updated, or deleted.
+ - type: bugfix
+ title: Container Injection for Numeric Ports
+ body: >-
+ Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+ - type: bugfix
+ title: Matching Services
+ body: >-
+ Workloads that have several matching services pointing to the same target port are now handled correctly.
+ - type: bugfix
+ title: Unexpected Panic
+ body: >-
+ A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+ - type: bugfix
+ title: Mount Volume Cleanup
+ body: >-
+ A container start would sometimes fail because an old directory remained in a mounted temp volume.
+ - version: 2.6.2
+ date: "2022-05-17"
+ notes:
+ - type: bugfix
+ title: Argo Injection
+ body: >-
+ Workloads controlled by other workloads, like Argo Rollout, are now injected correctly.
+ - type: bugfix
+ title: Agent Port Mapping
+ body: >-
+ Multiple services targeting the same container port no longer result in duplicated ports in an injected pod.
+ - type: bugfix
+ title: GRPC Max Message Size
+ body: >-
+ The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+ - version: 2.6.1
+ date: "2022-05-16"
+ notes:
+ - type: bugfix
+ title: KUBECONFIG environment variable
+ body: >-
+ Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+ - type: bugfix
+ title: Don't Panic
+ body: >-
+ Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+ - version: 2.6.0
+ date: "2022-05-13"
+ notes:
+ - type: feature
+ title: Intercept multiple containers in a pod, and multiple ports per container
+ body: >-
+ Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+ docs: new-in-2.6#intercept-multiple-containers-and-ports
+ - type: feature
+ title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+ body: >-
+ The client will no longer modify deployments, replicasets, or statefulsets in order to
+ inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+ the client now needs fewer permissions in the cluster.
+ docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+ - type: change
+ title: Automatic upgrade of Traffic Agents
+ body: >-
+ When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+ The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+ docs: new-in-2.6#no-more-workload-modifications
+ - type: change
+ title: No default image in the Helm chart
+ body: >-
+ The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+ traffic-manager will ask Ambassador Cloud for the preferred image.
+ docs: new-in-2.6#smarter-agent
+ - type: change
+ title: Upgrade to Helm version 3.8.1
+ body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+ - type: bugfix
+ title: Remote mounts will now function correctly with custom securityContext
+ body: >-
+ The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+ - type: bugfix
+ title: Improved presentation of flags in CLI help
+ body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+ - type: bugfix
+ title: Better termination of processes parented by an intercept
+ body: >-
+ Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+ When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+ Telepresence will now terminate that command because it's considered to be parented by the intercept that is being removed.
+ - version: 2.5.8
+ date: "2022-04-27"
+ notes:
+ - type: bugfix
+ title: Folder creation on `telepresence login`
+ body: >-
+ Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+ - version: 2.5.7
+ date: "2022-04-25"
+ notes:
+ - type: change
+ title: RBAC requirements
+ body: >-
+ A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+ - type: bugfix
+ title: Windows DNS
+ body: >-
+ The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+ - type: bugfix
+ title: Session TTL and Reconnect
+ body: >-
+ A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+ - version: 2.5.6
+ date: "2022-04-18"
+ notes:
+ - type: change
+ title: Fewer Watchers
+ body: >-
+ The Telepresence agents watcher will now only watch namespaces that the user has accessed since the last connect.
+ - type: bugfix
+ title: More Efficient `gather-logs`
+ body: >-
+ The gather-logs command will no longer send any logs through gRPC.
+ - version: 2.5.5
+ date: "2022-04-08"
+ notes:
+ - type: change
+ title: Traffic Manager Permissions
+ body: >-
+ The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions.
+ - type: bugfix
+ title: Linux DNS Cache
+ body: >-
+ The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+ - type: bugfix
+ title: Automatic Connect Sync
+ body: >-
+ The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+ - type: bugfix
+ title: Disconnect Reconnect Stability
+ body: >-
+ The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+ - type: bugfix
+ title: Limit Watched Namespaces
+ body: >-
+ The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - type: bugfix
+ title: Limit Namespaces used in `gather-logs`
+ body: >-
+ The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - version: 2.5.4
+ date: "2022-03-29"
+ notes:
+ - type: bugfix
+ title: Linux DNS Concurrency
+ body: >-
+ The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+ - type: bugfix
+ title: Non-Functional Flag
+ body: >-
+ The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+ - type: bugfix
+ title: Automatically Remove Failed Intercepts
+ body: >-
+ Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix
+ title: Agent UID
+ body: >-
+ The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+ - type: bugfix
+ title: Gather-Logs Output Filepath
+ body: >-
+ Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+ - type: change
+ title: Remove Unnecessary Error Advice
+ body: >-
+ The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+ - type: bugfix
+ title: Garbage Collection
+ body: >-
+ Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
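+ # Illustrative CLI usage for the --mapped-namespaces constraint noted above;
+ # the namespace names are placeholders:
+ #
+ #   telepresence connect --mapped-namespaces dev,staging
+ #   telepresence gather-logs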
+ - type: bugfix
+ title: Limit Gathered Logs
+ body: >-
+ The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+ - type: change
+ title: In-Cluster Checks
+ body: >-
+ The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+ - type: change
+ title: Expanded Status Command
+ body: >-
+ The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+ - type: change
+ title: List Command Shows All Intercepts
+ body: >-
+ The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+ - version: 2.5.3
+ date: "2022-02-25"
+ notes:
+ - type: bugfix
+ title: TCP Connectivity
+ body: >-
+ Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+ - type: feature
+ title: Linux Binaries
+ body: >-
+ Client-side binaries for the arm64 architecture are now available for Linux.
+ - version: 2.5.2
+ date: "2022-02-23"
+ notes:
+ - type: bugfix
+ title: DNS server bugfix
+ body: >-
+ Fixed a bug where Telepresence would use the last server in resolv.conf.
+ - version: 2.5.1
+ date: "2022-02-19"
+ notes:
+ - type: bugfix
+ title: Fix GKE auth issue
+ body: >-
+ Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+ - version: 2.5.0
+ date: "2022-02-18"
+ notes:
+ - type: feature
+ title: Intercept specific endpoints
+ body: >-
+ The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+ addition to the --http-match flag to filter personal intercepts by the request URL path.
+ docs: concepts/intercepts#intercepting-a-specific-endpoint
+ - type: feature
+ title: Intercept metadata
+ body: >-
+ The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence REST
+ API endpoint /intercept-info.
+ docs: reference/restapi#intercept-info
+ - type: change
+ title: Client RBAC watch
+ body: >-
+ The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+ ClusterRole.
+ docs: reference/rbac
+ - type: change
+ title: Dropped backward compatibility with versions <=2.4.4
+ body: >-
+ Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+ functionality was removed.
+ - type: change
+ title: No global networking flags
+ body: >-
+ The global networking flags are no longer used and using them will render a deprecation warning unless they are supported by the
+ command. The subcommands that support networking flags are connect, current-cluster-id,
+ and genyaml.
+ - type: bugfix
+ title: Output of status command
+ body: >-
+ The also-proxy and never-proxy subnets are now displayed correctly when using the
+ telepresence status command.
+ - type: bugfix
+ title: SETENV sudo privilege no longer needed
+ body: >-
+ Telepresence no longer requires SETENV privileges when starting the root daemon.
+ - type: bugfix
+ title: Network device names containing dash
+ body: >-
+ Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+ - type: bugfix
+ title: Linux uses cluster.local as domain instead of search
+ body: >-
+ The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+ systemd-resolved.
Instead, it is added as a domain so that names ending with it are routed
+ to the DNS server.
+ - version: 2.4.11
+ date: "2022-02-10"
+ notes:
+ - type: change
+ title: Add additional logging to troubleshoot intermittent issues with intercepts
+ body: >-
+ We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+ with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+ date: "2022-01-13"
+ notes:
+ - type: feature
+ title: Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts can now be configured using
+ the intercept.appProtocolStrategy in the config.yml file.
+ docs: reference/config/#intercept
+ image: telepresence-2.4.10-intercept-config.png
+ - type: feature
+ title: Helm value for the Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts in agents injected by the
+ mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: New --http-plaintext option
+ body: >-
+ The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+ communicating with the workstation process.
+ docs: reference/intercepts/#tls
+ - type: feature
+ title: Configure the default intercept port
+ body: >-
+ The port used by default in the telepresence intercept command (8080) can now be changed by setting
+ the intercept.defaultPort in the config.yml file.
+ docs: reference/config/#intercept
+ - type: change
+ title: Telepresence CI now uses GitHub Actions
+ body: >-
+ Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+ now easier for contributors to run tests on PRs since maintainers can add an
+ "ok to test" label to PRs (including from forks) to run integration tests.
+ docs: https://github.com/telepresenceio/telepresence/actions
+ image: telepresence-2.4.10-actions.png
+ - type: bugfix
+ title: Check conditions before asking questions
+ body: >-
+ The user will not be asked to log in or add ingress information when creating an intercept until a check has been
+ made that the intercept is possible.
+ docs: reference/intercepts/
+ - type: bugfix
+ title: Fix invalid log statement
+ body: >-
+ Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+ - type: bugfix
+ title: Log errors from sshfs/sftp
+ body: >-
+ Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+ is now properly logged as errors.
+ - type: bugfix
+ title: Don't use Windows path separators in workload pod template
+ body: >-
+ The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+ traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+ date: "2021-12-09"
+ notes:
+ - type: bugfix
+ title: Helm upgrade nil pointer error
+ body: >-
+ A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+ telepresenceAPI value.
+ docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+ date: "2021-12-03"
+ notes:
+ - type: feature
+ title: VPN diagnostics tool
+ body: >-
+ There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+ See the VPN docs for more information on how to use it.
+ docs: reference/vpn
+ image: telepresence-2.4.8-vpn.png
+
+ - type: feature
+ title: RESTful API service
+ body: >-
+ A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+ help determine whether messages with a given set of headers should be consumed from a message queue to which the
+ intercept headers are added.
+ docs: reference/restapi
+ image: telepresence-2.4.8-health-check.png
+
+ - type: change
+ title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+ body: >-
+ You could previously configure this value, but there was no reason to change it, so the value
+ was removed.
+
+ - type: bugfix
+ title: Tunneled network connections behave more like ordinary TCP connections.
+ body: >-
+ When using Telepresence with an external cloud provider for extensions, those tunneled
+ connections now behave more like TCP connections, especially when it comes to timeouts.
+ We've also added increased testing around these types of connections.
+ - version: 2.4.7
+ date: "2021-11-24"
+ notes:
+ - type: feature
+ title: Injector service-name annotation
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+ This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+ docs: reference/cluster-config#service-name-annotation
+ - type: feature
+ title: Skip the Ingress Dialogue
+ body: >-
+ You can now skip the ingress dialogue by providing the ingress parameters with the corresponding flags.
+ docs: reference/intercepts#skipping-the-ingress-dialogue
+ - type: feature
+ title: Never proxy subnets
+ body: >-
+ The kubeconfig extensions now support a never-proxy argument,
+ analogous to also-proxy, that defines a set of subnets that
+ will never be proxied via telepresence.
+ docs: reference/config#neverproxy
+ - type: change
+ title: Daemon versions check
+ body: >-
+ Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+ - type: change
+ title: No explicit DNS flushes
+ body: >-
+ Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+ docs: reference/routing#dns-caching
+ - type: bugfix
+ title: Legacy flags now work with global flags
+ body: >-
+ Legacy flags such as --swap-deployment can now be used together with global flags.
+ - type: bugfix
+ title: Outbound connection closing
+ body: >-
+ Outbound connections are now properly closed when the peer closes.
+ - type: bugfix
+ title: Prevent DNS recursion
+ body: >-
+ The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client).
+ docs: reference/routing#dns-recursion
+ - type: bugfix
+ title: Prevent network recursion
+ body: >-
+ The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+ cluster runs in a docker-container on the client).
+ docs: reference/routing#connect-recursion
+ - type: bugfix
+ title: Traffic Manager deadlock fix
+ body: >-
+ The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
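+ # Illustrative kubeconfig fragment for the never-proxy extension noted above.
+ # The never-proxy and also-proxy argument names come from the note; the
+ # surrounding extension layout, server URL, and subnet are assumptions:
+ #
+ #   clusters:
+ #     - name: example
+ #       cluster:
+ #         server: https://example.cluster:6443
+ #         extensions:
+ #           - name: telepresence.io
+ #             extension:
+ #               never-proxy:
+ #                 - 10.11.0.0/16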
+ - type: bugfix
+ title: webhookRegistry config propagation
+ body: >-
+ The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+ docs: reference/config#images
+ - type: bugfix
+ title: Login refreshes expired tokens
+ body: >-
+ When a user's token has expired, telepresence login
+ will prompt the user to log in again to get a new token. Previously,
+ the user had to telepresence quit and telepresence logout
+ to get a new token.
+ docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+ date: "2021-11-02"
+ notes:
+ - type: feature
+ title: Manually injecting Traffic Agent
+ body: >-
+ Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+ Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+ docs: reference/intercepts/manual-agent
+
+ - type: feature
+ title: Telepresence CLI released for Apple silicon
+ body: >-
+ Telepresence is now built and released for Apple silicon.
+ docs: install/?os=macos
+
+ - type: change
+ title: Telepresence help text now links to telepresence.io
+ body: >-
+ We now include a link to our documentation when you run telepresence --help. This will make it easier
+ for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+ image: telepresence-2.4.6-help-text.png
+
+ - type: bugfix
+ title: Fixed bug when API server is inside CIDR range of pods/services
+ body: >-
+ If the API server for your kubernetes cluster had an IP that fell within the
+ subnet generated from pods/services in the cluster, Telepresence would proxy traffic
+ to the API server, which would result in hanging or a failed connection. We now ensure
+ that the API server is explicitly not proxied.
+ - version: 2.4.5
+ date: "2021-10-15"
+ notes:
+ - type: feature
+ title: Get pod yaml with gather-logs command
+ body: >-
+ Adding the flag --get-pod-yaml to your request will get the
+ pod yaml manifest for all kubernetes components you are getting logs for
+ (traffic-manager and/or pods containing a
+ traffic-agent container). This flag is set to false
+ by default.
+ docs: reference/client
+ image: telepresence-2.4.5-pod-yaml.png
+
+ - type: feature
+ title: Anonymize pod name + namespace when using gather-logs command
+ body: >-
+ Adding the flag --anonymize to your command will
+ anonymize your pod names + namespaces in the output file. We replace the
+ sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+ relationships between the objects without exposing the real names of your
+ objects. This flag is set to false by default.
+ docs: reference/client
+ image: telepresence-2.4.5-logs-anonymize.png
+
+ - type: feature
+ title: Added context and defaults to ingress questions when creating a preview URL
+ body: >-
+ Previously, we referred to OSI model layers when asking these questions, but this
+ terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+ docs: howtos/preview-urls
+ image: telepresence-2.4.5-preview-url-questions.png
+
+ - type: feature
+ title: Support for intercepting headless services
+ body: >-
+ Intercepting headless services is now officially supported. You can request a
+ headless service on whatever port it exposes and get a response from the
+ intercept.
This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent can fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a github issue. For more
+ information on usage, run telepresence gather-logs --help.
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets.
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It now will error correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the environment cluster is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ it now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were incorrectly not written to log files. This has
+ been fixed so logs for Windows should be at parity with the ones on macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls would
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that won't occur.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ All components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) now use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the traffic-manager. Additionally, we
+ now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+        docs: install/helm
+
+      - type: change
+        title: Traffic Manager installed via helm
+        body: >-
+          The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+          This change is transparent to the user.
+          A new configuration setting, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+        docs: reference/config#timeouts
+
+      - type: change
+        title: traffic-manager gets cluster ID itself instead of via environment variable
+        body: >-
+          The traffic-manager used to get the cluster ID as an environment variable when running
+          telepresence connect or via adding the value in the helm chart. This was
+          clunky, so now the traffic-manager gets the value itself as long as it has permissions
+          to "get" and "list" namespaces (this has been updated in the helm chart).
+        docs: install/helm
+
+      - type: bugfix
+        title: Telepresence now mounts all directories from /var/run/secrets
+        body: >-
+          In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+          We now mount *all* directories in /var/run/secrets, which, for example, includes
+          directories like eks.amazonaws.com used for IRSA tokens.
+        docs: reference/volume
+
+      - type: bugfix
+        title: Max gRPC receive size correctly propagates to all gRPC servers
+        body: >-
+          This fixes a bug where the max gRPC receive size was only propagated to some of the
+          gRPC servers, causing failures when the message size was over the default.
+        docs: reference/config/#grpc
+
+      - type: bugfix
+        title: Updated our Homebrew packaging to run manually
+        body: >-
+          We made some updates to our script that packages Telepresence for Homebrew so that it
+          can be run manually. This will enable maintainers of Telepresence to run the script manually
+          should we ever need to roll back a release and have latest point to an older version.
+        docs: install/
+
+      - type: bugfix
+        title: Telepresence uses namespace from kubeconfig context on each call
+        body: >-
+          In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+          for the entirety of the time a user was connected to Telepresence. This led to confusing behavior
+          when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+          Telepresence will now do that and use the namespace designated by the context on each call.
+
+      - type: bugfix
+        title: Idle outbound TCP connections timeout increased to 7200 seconds
+        body: >-
+          Some users were noticing that their intercepts would start failing after 60 seconds.
+          This was because idle outbound TCP connections were kept alive for only 60 seconds, which we have
+          now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+      - type: bugfix
+        title: Telepresence will automatically remove a socket upon ungraceful termination
+        body: >-
+          When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+          that the process has terminated ungracefully" and imply that they should remove the socket. We've
+          now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+      - type: bugfix
+        title: Fixed user daemon deadlock
+        body: >-
+          Remedied a situation where the user daemon could hang when a user was logged in.
+
+      - type: bugfix
+        title: Fixed agentImage config setting
+        body: >-
+          The config setting images.agentImage is no longer required to contain the repository, and it
+          will use the value at images.repository.
+        docs: reference/config/#images
+
+  - version: 2.4.0
+    date: "2021-08-04"
+    notes:
+      - type: feature
+        title: Windows Client Developer Preview
+        body: >-
+          There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+          All the same features supported by the macOS and Linux clients are available on Windows.
+        image: telepresence-2.4.0-windows.png
+        docs: install
+
+      - type: feature
+        title: CLI raises helpful messages from Ambassador Cloud
+        body: >-
+          Telepresence can now receive messages from Ambassador Cloud and raise
+          them to the user when they perform certain commands. This enables us
+          to send you messages that may enhance your Telepresence experience when
+          using certain commands. Frequency of messages can be configured in your
+          config.yml.
+        image: telepresence-2.4.0-cloud-messages.png
+        docs: reference/config#cloud
+
+      - type: bugfix
+        title: Improved stability of systemd-resolved-based DNS
+        body: >-
+          When initializing the systemd-resolved-based DNS, the routing domain
+          is set to improve stability in non-standard configurations. This also enables the
+          overriding resolver to properly take over once the DNS service ends.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Fixed an edge case when intercepting a container with multiple ports
+        body: >-
+          When specifying a port of a container to intercept, if there was a container in the
+          pod without ports, it was automatically selected. This has been fixed so we'll only
+          choose the container with "no ports" if there's no container that explicitly matches
+          the port used in your intercept.
+        docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+      - type: bugfix
+        title: $(NAME) references in the agent's environment are now interpolated correctly
+        body: >-
+          If you had an environment variable in your workload that referenced another via $(NAME), intercepts
+          would not correctly interpolate it. This has been fixed and works automatically.
+
+      - type: bugfix
+        title: Telepresence no longer prints INFO message when there is no config.yml
+        body: >-
+          Fixed a regression that printed an INFO message to the terminal when there wasn't a
+          config.yml present. The config is optional, so this message has been
+          removed.
+        docs: reference/config
+
+      - type: bugfix
+        title: Telepresence no longer panics when using --http-match
+        body: >-
+          Fixed a bug where Telepresence would panic if the value passed to --http-match
+          didn't contain an equal sign. The correct syntax is shown in the --help
+          string and looks like --http-match=HTTP2_HEADER=REGEX
+        docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+      - type: bugfix
+        title: Improved subnet updates
+        body: >-
+          The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+          the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+          client and the traffic-manager. This has been fixed so we only send updates when the subnets
+          themselves actually change.
+        docs: reference/routing/#subnets
+
+  - version: 2.3.7
+    date: "2021-07-23"
+    notes:
+      - type: feature
+        title: Also-proxy in telepresence status
+        body: >-
+          An also-proxy entry in the Kubernetes cluster config will
+          show up in the output of the telepresence status command.
+        docs: reference/config
+
+      - type: feature
+        title: Non-interactive telepresence login
+        body: >-
+          telepresence login now has an
+          --apikey=KEY flag that allows for
+          non-interactive logins. This is useful for headless
+          environments where launching a web-browser is impossible,
+          such as cloud shells, Docker containers, or CI.
+        image: telepresence-2.3.7-newkey.png
+        docs: reference/client/login/
+
+      - type: bugfix
+        title: Mutating webhook injector correctly renames named ports for probes
+        body: >-
+          The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: telepresence current-cluster-id crash fixed
+        body: >-
+          Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+          to crash.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: Better UX around intercepts with no local process running
+        body: >-
+          Requests would hang indefinitely when initiating an intercept before you
+          had a local process running. This has been fixed and will result in an
+          Empty reply from server until you start a local process.
+        docs: reference/intercepts
+
+      - type: bugfix
+        title: API keys no longer show as "no description"
+        body: >-
+          New API keys generated internally for communication with
+          Ambassador Cloud no longer show up as "no description" in
+          the Ambassador Cloud web UI. Existing API keys generated by
+          older versions of Telepresence will still show up this way.
+        image: telepresence-2.3.7-keydesc.png
+
+      - type: bugfix
+        title: Fix corruption of user-info.json
+        body: >-
+          Fixed a race condition where rapidly logging in and logging
+          out could cause memory corruption or corruption of the
+          user-info.json cache file used when
+          authenticating with Ambassador Cloud.
+
+      - type: bugfix
+        title: Improved DNS resolver for systemd-resolved
+        body:
+          Telepresence's systemd-resolved-based DNS resolver is now more
+          stable, and in case it fails to initialize, the overriding resolver
+          will no longer cause general DNS lookup failures when telepresence falls back to
+          using it.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Faster telepresence list command
+        body:
+          The performance of telepresence list has been improved
+          significantly by reducing the number of calls the command makes to the cluster.
+        docs: reference/client
+
+  - version: 2.3.6
+    date: "2021-07-20"
+    notes:
+      - type: bugfix
+        title: Fix preview URLs
+        body: >-
+          Fixed a regression introduced in 2.3.5 that caused preview
+          URLs to not work.
+
+      - type: bugfix
+        title: Fix subnet discovery
+        body: >-
+          Fixed a regression introduced in 2.3.5 where the Traffic
+          Manager's RoleBinding did not correctly bind
+          the traffic-manager Role, preventing
+          subnet discovery from working correctly.
+        docs: reference/rbac/
+
+      - type: bugfix
+        title: Fix root-user configuration loading
+        body: >-
+          Fixed a regression introduced in 2.3.5 where the root daemon
+          did not correctly read the configuration file, ignoring the
+          user's configured log levels and timeouts.
+        docs: reference/config/
+
+      - type: bugfix
+        title: Fix a user daemon crash
+        body: >-
+          Fixed an issue that could cause the user daemon to crash
+          during shutdown, as during shutdown it unconditionally
+          attempted to close a channel even though the channel might
+          already be closed.
+
+  - version: 2.3.5
+    date: "2021-07-15"
+    notes:
+      - type: feature
+        title: traffic-manager in multiple namespaces
+        body: >-
+          We now support installing multiple traffic managers in the same cluster.
+          This will allow operators to install deployments of telepresence that are
+          limited to certain namespaces.
+        image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+        docs: install/helm
+      - type: feature
+        title: No more dependence on kubectl
+        body: >-
+          Telepresence no longer depends on having an external
+          kubectl binary, which might not be present for
+          OpenShift users (who have oc instead of
+          kubectl).
+      - type: feature
+        title: Agent image now configurable
+        body: >-
+          We now support configuring which agent image
+          and registry to use in the
+          config. This enables users whose laptops are in air-gapped environments to
+          create personal intercepts without requiring a login. It also makes it easier
+          for those who are developing on Telepresence to specify which agent image should
+          be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+          used.
+        image: ./telepresence-2.3.5-agent-config.png
+        docs: reference/config/#images
+      - type: feature
+        title: Max gRPC receive size now configurable
+        body: >-
+          The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+        image: ./telepresence-2.3.5-grpc-max-receive-size.png
+        docs: reference/config/#grpc
+      - type: feature
+        title: CLI can be used in air-gapped environments
+        body: >-
+          While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+          we've added an option users can add to their config.yml to ensure the CLI acts like it
+          is in an air-gapped environment. Air-gapped environments require a manually installed
+          license.
+        docs: reference/cluster-config/#air-gapped-cluster
+        image: ./telepresence-2.3.5-skipLogin.png
+  - version: 2.3.4
+    date: "2021-07-09"
+    notes:
+      - type: bugfix
+        title: Improved IP log statements
+        body: >-
+          Some log statements printed incorrect characters where they should have printed IP addresses.
+          This has been resolved to include more accurate and useful logging.
+        docs: reference/config/#log-levels
+        image: ./telepresence-2.3.4-ip-error.png
+      - type: bugfix
+        title: Improved messaging when multiple services match a workload
+        body: >-
+          If multiple services matched a workload when performing an intercept, Telepresence would crash.
+          It now gives the correct error message, instructing the user on how to specify which
+          service the intercept should use.
+        image: ./telepresence-2.3.4-improved-error.png
+        docs: reference/intercepts
+      - type: bugfix
+        title: Traffic-manager creates services in its own namespace to determine subnet
+        body: >-
+          Telepresence will now determine the service subnet by creating a dummy-service in its own
+          namespace, instead of the default namespace, which was causing RBAC permissions issues in
+          some clusters.
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Telepresence connect respects pre-existing clusterrole
+        body: >-
+          When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+          cluster, Telepresence will no longer try to update the clusterrole.
+        docs: reference/rbac
+      - type: bugfix
+        title: Helm Chart fixed for clientRbac.namespaced
+        body: >-
+          The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+        docs: install/helm
+  - version: 2.3.3
+    date: "2021-07-07"
+    notes:
+      - type: feature
+        title: Traffic Manager Helm Chart
+        body: >-
+          Telepresence now supports installing the Traffic Manager via Helm.
+          This will make it easy for operators to install and configure the
+          server-side components of Telepresence separately from the CLI (which
+          in turn allows for better separation of permissions).
+        image: ./telepresence-2.3.3-helm.png
+        docs: install/helm/
+      - type: feature
+        title: Traffic-manager in custom namespace
+        body: >-
+          As the traffic-manager can now be installed in any
+          namespace via Helm, Telepresence can now be configured to look for the
+          Traffic Manager in a namespace other than ambassador.
+          This can be configured on a per-cluster basis.
+        image: ./telepresence-2.3.3-namespace-config.png
+        docs: reference/config
+      - type: feature
+        title: Intercept --to-pod
+        body: >-
+          telepresence intercept now supports a
+          --to-pod flag that can be used to port-forward sidecars'
+          ports from an intercepted pod.
+        image: ./telepresence-2.3.3-to-pod.png
+        docs: reference/intercepts
+      - type: change
+        title: Change in migration from edgectl
+        body: >-
+          Telepresence no longer automatically shuts down the old
+          api_version=1 edgectl daemon. If migrating
+          from such an old version of edgectl, you must now manually
+          shut down the edgectl daemon before running Telepresence.
+          This was already the case when migrating from the newer
+          api_version=2 edgectl.
+      - type: bugfix
+        title: Fixed error during shutdown
+        body: >-
+          Previously, the root daemon terminated when the user daemon disconnected
+          from its gRPC streams, which could cause problems with things not being
+          cleaned up correctly. It now waits to be terminated by the CLI.
+      - type: bugfix
+        title: Intercepts will survive deletion of intercepted pod
+        body: >-
+          An intercept will survive deletion of the intercepted pod provided
+          that another pod is created (or already exists) that can take over.
+  - version: 2.3.2
+    date: "2021-06-18"
+    notes:
+      # Headliners
+      - type: feature
+        title: Service Port Annotation
+        body: >-
+          The mutator webhook for injecting traffic-agents now
+          recognizes a
+          telepresence.getambassador.io/inject-service-port
+          annotation to specify which port to intercept, bringing the
+          functionality of the --port flag to users who
+          use the mutator webhook in order to control Telepresence via
+          GitOps.
+        image: ./telepresence-2.3.2-svcport-annotation.png
+        docs: reference/cluster-config#service-port-annotation
+      - type: feature
+        title: Outbound Connections
+        body: >-
+          Outbound connections are now routed through the intercepted
+          Pods, which means that the connections originate from that
+          Pod from the cluster's perspective. This allows service
+          meshes to correctly identify the traffic.
+        docs: reference/routing/#outbound
+      - type: change
+        title: Inbound Connections
+        body: >-
+          Inbound connections from an intercepted agent are now
+          tunneled to the manager over the existing gRPC connection,
+          instead of establishing a new connection to the manager for
+          each inbound connection. This avoids interference from
+          certain service mesh configurations.
+        docs: reference/routing/#inbound
+
+      # RBAC changes
+      - type: change
+        title: Traffic Manager needs new RBAC permissions
+        body: >-
+          The Traffic Manager requires RBAC
+          permissions to list Nodes and Pods, and to create a dummy
+          Service in the manager's namespace.
+        docs: reference/routing/#subnets
+      - type: change
+        title: Reduced developer RBAC requirements
+        body: >-
+          The on-laptop client no longer requires RBAC permissions to list the Nodes
+          in the cluster or to create Services, as that functionality
+          has been moved to the Traffic Manager.
+
+      # Bugfixes
+      - type: bugfix
+        title: Able to detect subnets
+        body: >-
+          Telepresence will now detect the Pod CIDR ranges even if
+          they are not listed in the Nodes.
+        image: ./telepresence-2.3.2-subnets.png
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Dynamic IP ranges
+        body: >-
+          The list of cluster subnets that the virtual network
+          interface will route is now configured dynamically and will
+          follow changes in the cluster.
+      - type: bugfix
+        title: No duplicate subnets
+        body: >-
+          Subnets fully covered by other subnets are now pruned
+          internally and thus never superfluously added to the
+          laptop's routing table.
+        docs: reference/routing/#subnets
+      - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+        title: Change in default timeout
+        body: >-
+          The trafficManagerAPI timeout default has
+          changed from 5 seconds to 15 seconds, in order to facilitate
+          the extended time it takes for the traffic-manager to do its
+          initial discovery of cluster info as a result of the above
+          bugfixes.
+      - type: bugfix
+        title: Removal of DNS config files on macOS
+        body: >-
+          On macOS, files generated under
+          /etc/resolver/ as the result of using
+          include-suffixes in the cluster config are now
+          properly removed on quit.
+        docs: reference/routing/#macos-resolver
+
+      - type: bugfix
+        title: Large file transfers
+        body: >-
+          Telepresence no longer erroneously terminates connections
+          early when sending a large HTTP response from an intercepted
+          service.
+      - type: bugfix
+        title: Race condition in shutdown
+        body: >-
+          When shutting down the user-daemon or root-daemon on the
+          laptop, telepresence quit and related commands
+          no longer return early before everything is fully shut down.
+          Now, by the time the command has returned, all of the
+          side effects on the laptop have been cleaned up.
+  - version: 2.3.1
+    date: "2021-06-14"
+    notes:
+      - title: DNS Resolver Configuration
+        body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included."
+        image: ./telepresence-2.3.1-dns.png
+        docs: reference/config
+        type: feature
+      - title: AlsoProxy Configuration
+        body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+        image: ./telepresence-2.3.1-alsoProxy.png
+        docs: reference/config
+        type: feature
+      - title: Mutating Webhook for Injecting Traffic Agents
+        body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+        image: ./telepresence-2.3.1-inject.png
+        docs: reference/rbac
+        type: feature
+      - title: Traffic Manager Connect Timeout
+        body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+        image: ./telepresence-2.3.1-trafficmanagerconnect.png
+        docs: reference/config
+        type: change
+      - title: Fix for large file transfers
+        body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+        image: ./telepresence-2.3.1-large-file-transfer.png
+        docs: reference/tun-device
+        type: bugfix
+      - title: Brew Formula Changed
+        body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+        image: ./telepresence-2.3.1-brew.png
+        docs: install/
+        type: change
+  - version: 2.3.0
+    date: "2021-06-01"
+    notes:
+      - title: Brew install Telepresence
+        body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+        image: ./telepresence-2.3.0-homebrew.png
+        docs: install/
+        type: feature
+      - title: TCP and UDP routing via Virtual Network Interface
+        body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+        image: ./tunnel.jpg
+        docs: reference/tun-device
+        type: feature
+      - title: SSH is no longer used
+        body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+        image: ./no-ssh.png
+        docs: reference/tun-device/#no-ssh-required
+        type: change
+      - title: Running in a Docker container
+        body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+        image: ./run-tp-in-docker.png
+        docs: reference/inside-container
+        type: feature
+      - title: Configurable Log Levels
+        body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+        image: ./telepresence-2.3.0-loglevels.png
+        docs: reference/config/#log-levels
+        type: feature
+  - version: 2.2.2
+    date: "2021-05-17"
+    notes:
+      - title: Legacy Telepresence subcommands
+        body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+        image: ./telepresence-2.2.png
+        docs: install/migrate-from-legacy/
+        type: feature
diff --git a/docs/telepresence/2.12/troubleshooting/index.md b/docs/telepresence/2.12/troubleshooting/index.md
new file mode 100644
index 000000000..c882e1bf0
--- /dev/null
+++ b/docs/telepresence/2.12/troubleshooting/index.md
@@ -0,0 +1,233 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted vm](../howtos/cluster-in-vm) documentation for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
If an organization isn't listed during login,
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button,
+which sends an email to the owner. If an access request has been
+denied in the past, the user will not see the **Request** button;
+they will have to reach out to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+and then log back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer.
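+
+Once rebooted, a quick end-to-end check is to run an intercept that performs a mount and confirm that Telepresence reports a volume mount point in its output. This is only a sketch: the workload name `my-service` and the port are illustrative, and it assumes you are already connected to a cluster:
+
+```console
+$ telepresence intercept my-service --port 8080 --mount=true
+```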
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` setting under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `telepresence gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g. Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export.
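+
+Conversely, if you want the tracing server to listen on a specific port rather than the default, you can set `grpcPort` explicitly in your `values.yaml`. The port number below is only an illustration:
+
+```yaml
+tracing:
+  grpcPort: 15766
+```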
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for information on how to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+  - ...
+    ports:
+    - name: http
+      containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+  - port: 80
+    targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (e.g., containerPort: http becomes containerPort: tm-http).
+2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `clusterIP: None`), then using named ports won't help because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
+
+## Error connecting to GKE or EKS cluster
+
+GKE and EKS require a plugin that utilizes their respective IAM providers.
+You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins
+for Telepresence to connect to your cluster.
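+
+For instance, on GKE the authentication plugin can typically be installed through the gcloud CLI. This is shown as an illustration only; follow the linked install instructions for your platform and for EKS:
+
+```console
+$ gcloud components install gke-gcloud-auth-plugin
+```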
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a message in the logs `too many files open`, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
diff --git a/docs/telepresence/2.12/versions.yml b/docs/telepresence/2.12/versions.yml
new file mode 100644
index 000000000..7e62f76cb
--- /dev/null
+++ b/docs/telepresence/2.12/versions.yml
@@ -0,0 +1,5 @@
+version: "2.12.2"
+dlVersion: "latest"
+docsVersion: "2.12"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.13 b/docs/telepresence/2.13
deleted file mode 120000
index c5ec98e29..000000000
--- a/docs/telepresence/2.13
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.13
\ No newline at end of file
diff --git a/docs/telepresence/2.13/ci/github-actions.md b/docs/telepresence/2.13/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.13/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence on your CI server, either the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of the job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+
+ + +
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+       #---- here run your custom command to connect to your cluster
+       #- name: Connect to cluster
+       #  shell: bash
+       #  run: ./connect-to-cluster
+       #----
+       - name: Configuring Telepresence
+         uses: datawire/telepresence-actions/configure@v1.0-rc
+         with:
+           version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file, if you provide one. This configuration file should be placed at `/.github/telepresence-config/config.yml` and contain your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run the GitHub Actions properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+       - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+         with:
+           ref: ${{ github.event.pull_request.head.sha }}
+       #---- here run your custom command to run your service
+       #- name: Run your service to test
+       #  shell: bash
+       #  run: ./run_local_service
+       #----
+       # First you need to log in to Telepresence with your API key
+       - name: Create kubeconfig file
+         run: |
+           cat <<EOF > /opt/kubeconfig
+           ${{ env.KUBECONFIG_FILE }}
+           EOF
+       - name: Install Telepresence
+         uses: datawire/telepresence-actions/install@v1.0-rc
+         with:
+           version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+       - name: Telepresence connect
+         uses: datawire/telepresence-actions/connect@v1.0-rc
+       - name: Login
+         uses: datawire/telepresence-actions/login@v1.0-rc
+         with:
+           telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+       - name: Intercept the service
+         uses: datawire/telepresence-actions/intercept@v1.0-rc
+         with:
+           service_name: service-name
+           service_port: 8081:8080
+           namespace: namespacename-of-your-service
+           http_header: "x-telepresence-intercept-id=service-intercepted"
+           print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+       #---- here run your custom command
+       #- name: Run integrations test
+       #  shell: bash
+       #  run: ./run_integration_test
+       #----
+   ```
+
+The preceding example is a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change that is pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.13/community.md b/docs/telepresence/2.13/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.13/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.13/concepts/context-prop.md b/docs/telepresence/2.13/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.13/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation relies on services and other middleware not stripping away the headers; this is what allows the injected contexts to move between downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on routes the requests follow and performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so this is a somewhat
+unorthodox application, but the mechanisms involved are
+already widely deployed because of the prevalence of tracing. The
+headers facilitate the smart routing of requests either to live
+services in the cluster or services running locally on a developer’s
+machine. The intercepted traffic can be further limited by using path
+based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.13/concepts/devloop.md b/docs/telepresence/2.13/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.13/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador "
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically, this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — the loop grows to roughly 9 minutes, and the number of possible development iterations per day drops to ~40. At the extreme, that's a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.13/concepts/devworkflow.md b/docs/telepresence/2.13/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.13/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency.
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. diff --git a/docs/telepresence/2.13/concepts/faster.md b/docs/telepresence/2.13/concepts/faster.md new file mode 100644 index 000000000..3950dce38 --- /dev/null +++ b/docs/telepresence/2.13/concepts/faster.md @@ -0,0 +1,28 @@ +--- +title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador" +--- +# Making the remote local: Faster feedback, collaboration and debugging + +With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging. + +## How should I set up a Kubernetes development environment? + +[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication. + +While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on. + +A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration. + +## What is Telepresence? + +Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving developers of the need to worry about other service dependencies. + +## How can I get fast, efficient local development? + +The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
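As a rough sketch of what this looks like in practice (the workload name `my-service` and the local port `8080` are stand-ins for your own setup, not names from this guide), the entire loop collapses to two commands:

```console
$ telepresence connect
Connected to context default (https://<cluster-host>)

$ telepresence intercept my-service --port 8080
Using Deployment my-service
   Intercept name: my-service
   State         : ACTIVE
   Destination   : 127.0.0.1:8080
```

From this point on, requests made to `my-service` in the cluster are answered by the process listening on port 8080 on your laptop, while every other service keeps running remotely.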
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP/UDP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.13/concepts/goldenpaths.md b/docs/telepresence/2.13/concepts/goldenpaths.md new file mode 100644 index 000000000..afd2a95e2 --- /dev/null +++ b/docs/telepresence/2.13/concepts/goldenpaths.md @@ -0,0 +1,9 @@ +# Golden Paths + +A golden path is a best practice or a standardized process you should apply when using Telepresence, often used to optimize productivity or quality control. It can be used as a benchmark or a reference point for measuring success and progress towards a particular goal or outcome. + +We have provided Golden Paths for multiple use cases, listed below. + +1. [Intercept Specifications](../goldenpaths/specs) +2. [Using Telepresence with Docker](../goldenpaths/docker) +3. [Installing Telepresence in Team Mode](../goldenpaths/installation) \ No newline at end of file diff --git a/docs/telepresence/2.13/concepts/goldenpaths/docker.md b/docs/telepresence/2.13/concepts/goldenpaths/docker.md new file mode 100644 index 000000000..e58994c13 --- /dev/null +++ b/docs/telepresence/2.13/concepts/goldenpaths/docker.md @@ -0,0 +1,70 @@ +# Telepresence with Docker Golden Path + +## Why? + +It can be tedious to adopt Telepresence across your organization, since in its most convenient form it requires admin access and needs to work with any exotic +networking setup that your company may have. + +If Docker is already approved in your organization, this Golden Path should be considered. + +## How? + +When using Telepresence in Docker mode, users can eliminate the need for admin access on their machines, address several networking challenges, and forgo the need for third-party applications to enable volume mounts. + +You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container, removing the need for root access and making Telepresence easier to adopt across your organization. + +Let's illustrate with a quick demo, assuming a default Kubernetes context named `default`, and a simple HTTP service: + +```cli +$ telepresence connect --docker +Connected to context default (https://default.cluster.bakerstreet.io) + +$ docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +7a0e01cab325 datawire/telepresence:2.12.1 "telepresence connec…" 18 seconds ago Up 16 seconds 127.0.0.1:58802->58802/tcp tp-default +``` + +This method limits the scope of the potential networking issues since everything stays inside Docker.
The Telepresence daemon can be found under the name `tp-<context>` when listing your containers. + +Start an intercept: + +```cli +$ telepresence intercept echo-easy --port 8080:80 -n default +Using Deployment echo-easy + Intercept name : echo-easy-default + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Service Port Identifier: proxied + Volume Mount Point : /var/folders/x_/4x_4pfvx2j3_94f36x551g140000gp/T/telfs-505935483 + Intercepting : HTTP requests with headers + 'x-telepresence-intercept-id: e20f0764-7fd8-45c1-b911-b2adeee1af45:echo-easy-default' + Preview URL : https://gracious-ishizaka-5365.preview.edgestack.me + Layer 5 Hostname : echo-easy.default.svc.cluster.local +``` + +Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context>`, and open the preview URL to see the traffic routed to your machine. + +```cli +$ docker run \ + --network=container:tp-default \ + -e PORT=8080 jmalloc/echo-server +Echo server listening on port 8080. +127.0.0.1:41500 | GET / +127.0.0.1:41512 | GET /favicon.ico +127.0.0.1:41500 | GET / +127.0.0.1:41512 | GET /favicon.ico +``` + +If you use Docker mode in Telepresence, we strongly recommend using [Intercept Specifications](specs) to seamlessly configure your Intercept Handler as a Docker container. + +Be sure to also open the debugging port on your container so that you can attach your IDE's local debugger. +By leveraging Intercept Specifications and Docker mode together, you can optimize your Telepresence experience and streamline your debugging workflow. + +## Key learnings + +* Using the Docker mode of Telepresence **does not require root access**, and makes it **easier** to adopt across your organization. +* It **limits the potential networking issues** you can encounter. +* It leverages **Docker** for your interceptor. +* You can use it with the [Intercept Specifications](specs). diff --git a/docs/telepresence/2.13/concepts/goldenpaths/installation.md b/docs/telepresence/2.13/concepts/goldenpaths/installation.md new file mode 100644 index 000000000..cb804f965 --- /dev/null +++ b/docs/telepresence/2.13/concepts/goldenpaths/installation.md @@ -0,0 +1,39 @@ +# Installing the Telepresence Traffic Manager Golden Path + +## Why? + +Telepresence requires a Traffic Manager to be installed in your cluster to control how traffic is redirected while intercepting. The Traffic Manager can be installed in two different modes, [Single User Mode](../../modes#single-user-mode) and [Team Mode](../../modes#team-mode). + +Single User Mode is great for an individual user who has autonomy within their cluster and won't impede other developers if they intercept traffic. However, this is often not the case: most developers work in a shared environment and would affect other team members by hijacking their traffic. + +We recommend installing your Traffic Manager in Team Mode. This defaults all created intercepts to a [Personal Intercept](../../../reference/intercepts#personal-intercept). Each intercept then gets a specific HTTP header, and only traffic carrying that header is rerouted, which works best in a team environment. + +## How? + +Installing the Traffic Manager in Team Mode is quite easy.
+ +If you install the Traffic Manager using the Telepresence command, you can simply pass the `--team-mode` flag like so: + +```cli +telepresence helm install --team-mode +``` + +If you use the Helm chart directly, you can just set the `mode` variable. +```cli +helm install traffic-manager datawire/telepresence --set mode=team +``` + +Or if you are upgrading your Traffic Manager, you can run: + +```cli +telepresence helm upgrade --team-mode +``` + +Or, with the Helm chart directly: + +```cli +helm upgrade traffic-manager datawire/telepresence --set mode=team +``` + +## Key Learnings + +* Team mode is essential when working in a shared cluster to ensure you aren't interrupting other developers' workflows +* You can always change the mode of your Traffic Manager while installing or upgrading \ No newline at end of file diff --git a/docs/telepresence/2.13/concepts/goldenpaths/specs.md b/docs/telepresence/2.13/concepts/goldenpaths/specs.md new file mode 100644 index 000000000..0d8e5dc30 --- /dev/null +++ b/docs/telepresence/2.13/concepts/goldenpaths/specs.md @@ -0,0 +1,80 @@ +# Intercept Specification Golden Path + +## Why? + +Telepresence can be difficult to adopt organization-wide. Each developer has their own local setup, which adds many variables to running Telepresence and duplicates work amongst developers. + +For these reasons, among many others, we recommend using [Intercept Specifications](../../../reference/intercepts/specs). + +## How? + +When using an Intercept Specification you write a YAML file, similar to a CI workflow or a Docker Compose file. An Intercept Specification enables standardization amongst your developers. + +With a spec you will be able to define the Kubernetes context to work in, the workload you want to intercept, the local intercept handler your traffic will be flowing to, and any pre/post requisites that are required to run your applications. + +Let's look at an example: + +I have a service `quote` running in the `default` namespace that I want to intercept to test changes I've made before opening a Pull Request. + +I can use the Intercept Specification below to tell Telepresence to intercept the quote service with a [Personal Intercept](../../../reference/intercepts#personal-intercept), in the default namespace of my cluster `test-cluster`. I also want to start the Intercept Handler, as a Docker container, with the provided image. + +```yaml +--- +connection: + context: test-cluster +workloads: + - name: quote + namespace: default + intercepts: + - headers: + - name: test-{{ .Telepresence.Username }} + value: "{{ .Telepresence.Username }}" + localPort: 8080 + mountPoint: "false" + port: 80 + handler: quote + service: quote + previewURL: + enable: true +handlers: + - name: quote + environment: + - name: PORT + value: "8080" + docker: + image: docker.io/datawire/quote:0.5.0 +``` + +You can then run this Intercept Specification with: + +```cli +telepresence intercept run quote-spec.yaml + Intercept name : quote-default + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests with headers + 'test-user =~ user' + Preview URL : https://charming-newton-3109.preview.edgestack.me + Layer 5 Hostname : quote.default.svc.cluster.local +Intercept spec "quote-spec" started successfully, use ctrl-c to cancel.
+2023/04/12 16:05:00 CONSUL_IP environment variable not found, continuing without Consul registration +2023/04/12 16:05:00 listening on :8080 +``` + +You can see that the intercept started and, if you check the local Docker containers, that the Telepresence daemon is running in a container and the Intercept Handler was successfully started. + +```cli +docker ps + +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +bdd99d244fbb datawire/quote:0.5.0 "/bin/qotm" 2 minutes ago Up 2 minutes tp-quote +5966d7099adf datawire/telepresence:2.12.1 "telepresence connec…" 2 minutes ago Up 2 minutes 127.0.0.1:58443->58443/tcp tp-test-cluster +``` + +## Key Learnings + +* Using an Intercept Specification enables you to create a standardized, easy-to-share approach for intercepts across your organization. +* You can easily leverage Docker to remove other potential hiccups associated with networking. +* There are many more great things you can do with an Intercept Specification; check those out [here](../../../reference/intercepts/specs) \ No newline at end of file diff --git a/docs/telepresence/2.13/concepts/goldenpaths/vpn.md b/docs/telepresence/2.13/concepts/goldenpaths/vpn.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/telepresence/2.13/concepts/intercepts.md b/docs/telepresence/2.13/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.13/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+ + + {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types"> + + + + + + {children} + +
+ ); +}; + +# Types of intercepts + + + + +# No intercept + + + + +This is the normal operation of your cluster without Telepresence. + + + + + +# Global intercept + + + + +**Global intercepts** replace the Kubernetes "Orders" service with the +Orders service running on your laptop. The users see no change, but +with all the traffic coming to your laptop, you can observe and debug +with all your dev tools. + + + +### Creating and using global intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=all + ``` + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service: + + All requests will be sent to the version of your service that is + running in the local development environment. + + + + +# Personal intercept + +**Personal intercepts** allow you to be selective and intercept only +some of the traffic to a service while not interfering with the rest +of the traffic. This allows you to share a cluster with others on your +team without interfering with their work. + +Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas. +To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions). + + + + +In the illustration above, **orange** +requests are being made by Developer 2 on their laptop and the +**green** requests are made by a teammate, +Developer 1, on a different laptop. + +Each developer can intercept the Orders service for their requests only, +while sharing the rest of the development environment. + + + +### Creating and using personal intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b + ``` + + We're using + `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the + header for the sake of the example, but you can use any + `key=value` pair you want, or `--http-header=auto` to have it + choose something automatically. + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service by passing the + HTTP header: + + ```http + Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b + ``` + + + + Need a browser extension to modify or remove HTTP request headers? + + Chrome + {' '} + Firefox + + + + 3. Using the intercept: Send requests to your service without the + HTTP header: + + Requests without the header will be sent to the version of your + service that is running in the cluster. This enables you to share + the cluster with a team! + +### Intercepting a specific endpoint + +It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an +intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx` +flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept +and, contrary to the `--http-header` flag, it cannot be repeated.
+ +The following flags are available: + +| Flag | Meaning | +|-------------------------------|------------------------------------------------------------------| +| `--http-path-equal <path>` | Only intercept the endpoint for this exact path | +| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix | +| `--http-path-regex <regex>` | Only intercept endpoints that match the given regular expression | + +#### Examples: + +1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api": + + ```shell + telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob + ``` + +2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version": + + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto + ``` + or, since `--http-header=auto` is implicit when using `--http` options, just: + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version + ``` + +3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*": + + ```shell + telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*' + ``` + + + diff --git a/docs/telepresence/2.13/concepts/modes.md b/docs/telepresence/2.13/concepts/modes.md new file mode 100644 index 000000000..3402f07e4 --- /dev/null +++ b/docs/telepresence/2.13/concepts/modes.md @@ -0,0 +1,36 @@ +--- +title: "Modes" +--- + +# Modes + +A Telepresence installation happens in two locations: initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`. +The main component that gets installed into the cluster is known as the Traffic Manager. +The Traffic Manager can be put either into single user mode (the default) or into team mode. +Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way. + +## Single user mode + +In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default. +Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service. +While single user mode is the default, switching back from team mode is done by running: +``` +telepresence helm install --single-user-mode +``` + +## Team mode + +In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in. +Personal intercepts selectively affect HTTP traffic coming into the intercepted workload. +Being in team mode adds an additional layer of confidence for developers working on the same service, knowing their teammates won't interrupt their intercepts by mistake. +Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are being taken advantage of by everybody on your team. +To switch from single user mode to team mode, run: +``` +telepresence helm install --team-mode +``` + +## Default intercept types based on modes +The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and users must be logged in to intercept. +When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status. +![mode defaults](../images/mode-defaults.png) \ No newline at end of file diff --git a/docs/telepresence/2.13/doc-links.yml b/docs/telepresence/2.13/doc-links.yml new file mode 100644 index 000000000..9c62796ad --- /dev/null +++ b/docs/telepresence/2.13/doc-links.yml @@ -0,0 +1,119 @@ +- title: Quick start + link: quick-start +- title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager + link: install/manager/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Cloud Provider Prerequisites + link: install/cloud/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ +- title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: "Making the remote local: Faster feedback, collaboration and debugging" + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: Types of intercepts + link: concepts/intercepts + - title: Modes + link: concepts/modes + - title: Golden Paths + link: concepts/goldenpaths + items: + - title: Intercept Specifications + link: concepts/goldenpaths/specs + - title: Docker Mode + link: concepts/goldenpaths/docker + - title: Team Mode + link: concepts/goldenpaths/installation +- title: How do I... + items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Host a cluster in a local VM + link: howtos/cluster-in-vm + - title: Send requests to an intercepted service + link: howtos/request + - title: Package and share my intercepts + link: howtos/package +- title: Telepresence for Docker + items: + - title: What is Telepresence for Docker + link: extension/intro + - title: Install into Docker Desktop + link: extension/install + - title: Intercept into a Docker Container + link: extension/intercept +- title: Telepresence for CI + items: + - title: GitHub Actions + link: ci/github-actions +- title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Configure intercept using CLI + link: reference/intercepts/cli + - title: Configure intercept using specifications + link: reference/intercepts/specs + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - 
title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd +- title: FAQs + link: faqs +- title: Troubleshooting + link: troubleshooting +- title: Community + link: community +- title: Release Notes + link: release-notes +- title: Licenses + link: licenses \ No newline at end of file diff --git a/docs/telepresence/2.13/extension/install.md b/docs/telepresence/2.13/extension/install.md new file mode 100644 index 000000000..1f4c70c09 --- /dev/null +++ b/docs/telepresence/2.13/extension/install.md @@ -0,0 +1,21 @@ +--- +title: "Telepresence for Docker Extension" +description: "Learn how to install and use Ambassador Labs' Telepresence for Docker." +indexable: true +--- + +# Docker extension + +[Docker](https://docker.com), the popular container runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes. + +## Install the Telepresence Docker extension + +Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker: + +1. Open Docker Desktop. + +2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar. + +3. In the Extensions Marketplace, search for `Ambassador Telepresence`. + +4. Click **Install**. diff --git a/docs/telepresence/2.13/extension/intercept.md b/docs/telepresence/2.13/extension/intercept.md new file mode 100644 index 000000000..223c08ac7 --- /dev/null +++ b/docs/telepresence/2.13/extension/intercept.md @@ -0,0 +1,77 @@ +--- +title: "Create an intercept with the Telepresence Docker extension" +description: "With Telepresence Docker extension, you leverage the full potential of the Telepresence CLI in Docker Desktop." +indexable: true +--- + +# Create an intercept + +With the Telepresence for Docker extension, you can create intercepts from one of your Kubernetes clusters, directly in the extension, or you can upload an [intercept specification](../../reference/intercepts/specs#specification) to run more complex intercepts. These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop. + +## Prerequisites + +Before you begin, you need: +- [Docker Desktop](https://www.docker.com/products/docker-desktop). +- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install). +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool. + +## Connect to Ambassador Cloud through the Telepresence Docker extension + + 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**. + + 2. You'll be redirected to Ambassador Cloud for login, where you can authenticate with a **Docker**, Google, GitHub or GitLab account.

+ +

+ +## Create an Intercept from a Kubernetes service + + 1. Select the Kubernetes context you would like to connect to. +

+ +

+ + 2. Once Telepresence is connected to your cluster, you will see a list of services you can connect to. If you don't see the service you want to intercept, you may need to change namespaces in the dropdown menu. +

+ +

+ + 3. Click the **Intercept** button on the service you want to intercept. You will see a popup to help you configure your intercept and your intercept handlers. +

+ +

+ + 4. Telepresence will start an intercept on the service and connect it to your local container on the designated port. You will then be redirected to a management page where you can view your active intercepts. +

+ +

+ + +## Create an Intercept from an Intercept Specification + + 1. Click the dropdown on the **Connect** button to activate the option to upload an intercept specification. +

+ +

+ + 2. Click **Upload Telepresence Spec** to run your intercept specification. +

+ +

+ + 3. Once your specification has been uploaded, the extension will process it and redirect you to the running intercepts page once the intercept has started. + + 4. The intercept information now shows up in the Docker Telepresence extension. You can now [test your code](#test-your-code). +

+ +

+ + + For more information on Intercept Specifications, see the docs here. + + +## Test your code + +Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and restart your intercept. + +Click `view` next to your preview URL to open a browser tab and see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work. \ No newline at end of file diff --git a/docs/telepresence/2.13/extension/intro.md b/docs/telepresence/2.13/extension/intro.md new file mode 100644 index 000000000..4db54d404 --- /dev/null +++ b/docs/telepresence/2.13/extension/intro.md @@ -0,0 +1,27 @@ +--- +title: "Telepresence for Docker introduction" +description: "Learn about the Telepresence extension for Docker." +indexable: true +--- + +# Telepresence for Docker + +Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop. + +## What is the Telepresence extension for Docker? + +The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to containers. + +## What does the Telepresence Docker extension do? + +Telepresence for Docker is designed to simplify your coding experience and test your code faster. Traditionally, you need to build a container within Docker with your code changes, push it, wait for the upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This is a slow and cumbersome process. + +With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate it entirely within the Docker runtime, this means you can make changes without root permission on your machine. + +## How does Telepresence for Docker work? + +Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers. + +## What do I need to begin? + +All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). diff --git a/docs/telepresence/2.13/faqs.md b/docs/telepresence/2.13/faqs.md new file mode 100644 index 000000000..092f11d6a --- /dev/null +++ b/docs/telepresence/2.13/faqs.md @@ -0,0 +1,126 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +**Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult, if not impossible, to run all of this on a local development machine, which makes fast development and debugging very challenging.
The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes. + +**What operating systems does Telepresence work on?** + +Telepresence currently works natively on macOS (Intel and Apple Silicon), Linux, and Windows. + +**What protocols can be intercepted by Telepresence?** + +Both TCP and UDP are supported for global intercepts. + +Personal intercepts require HTTP. All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it. + +**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name. + +**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g.
when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database. + +**What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +During first use, Telepresence will discover your ingress configuration, make a best guess, and ask you to confirm or update it. + +**Why are my intercepts still reporting as active when they've been disconnected?** + + In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time. + +**Why is my intercept associated with an "Unreported" cluster?** + + Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources. + +**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn't need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence. + +**Why does running Telepresence require sudo access for the local daemon unless it runs in a Docker container?** + +The local daemon needs sudo to create a VIF (Virtual Network Interface) for outbound routing and DNS. Root access is needed to do that unless the daemon runs in a Docker container. + +**What components get installed in the cluster when running Telepresence?** + +A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +When running in `team` mode, a single `ambassador-agent` service is deployed in the `ambassador` namespace. It communicates with the cloud to keep your list of services up to date. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected. + +**How can I remove all the Telepresence components installed within my cluster?** + +You can run the command `telepresence helm uninstall` to remove everything from the cluster, including the `traffic-manager` and the `ambassador-agent` services, and all the `traffic-agent` containers injected into each pod being intercepted. + +Also run `telepresence quit -s` to stop the local daemon running. + +**What language is Telepresence written in?** + +All components of Telepresence, both the local application and the cluster components, are written in Go.
+ +**How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established by using +the `kubectl port-forward` machinery (though without actually spawning +a separate program) to establish a TLS encrypted connection to the Telepresence +Traffic Manager in the cluster, and running Telepresence's custom VPN +protocol over that connection. + + + +**What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those. + +**Is Telepresence open source?** + +A large part of it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence). + +**How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.13/howtos/cluster-in-vm.md b/docs/telepresence/2.13/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.13/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine." +--- + +# Network considerations for locally hosted clusters + +## The problem +Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes. + +### Example: +A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again. + +Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections. + +## Solution + +### Create a bridge network +A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+ +To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details on how to configure the bridge depend on what type of virtualization solution you're using. + +### Vagrant + Virtualbox + k3s example +Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running. + +#### Vagrantfile +```ruby +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# bridge is the name of the host's default network device +$bridge = 'wlp5s0' + +# default_route should be the IP of the host's default route. +$default_route = '192.168.1.1' + +# nameserver must be the IP of an external DNS, such as 8.8.8.8 +$nameserver = '8.8.8.8' + +# server_name should also be added to the host's /etc/hosts file and point to the server_ip +# for easy access when pushing docker images +server_name = 'multi' + +# static IPs for the server and agents. Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides.
+ network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: "KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.13/howtos/intercepts.md b/docs/telepresence/2.13/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.13/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts for a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally. + +For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands.
OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service <service name> --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept <service-name> --port [<local-port>:]<service-port-identifier> --env-file <path-to-env-file>`. + * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod. + The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`. + ```console + $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env + Using Deployment example-service + intercepted + Intercept name: example-service + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Intercepting : all TCP connections + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + The following are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+ * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Query the environment in which you intercepted a service and verify that your local instance is being invoked. + All the traffic previously routed to your Kubernetes service is now routed to your local environment. + +You can now: +- Make changes on the fly and see them reflected when interacting with + your Kubernetes environment. +- Query services only exposed in your cluster's network. +- Set breakpoints in your IDE to investigate bugs. + + + + **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept. + + diff --git a/docs/telepresence/2.13/howtos/outbound.md b/docs/telepresence/2.13/howtos/outbound.md new file mode 100644 index 000000000..48877df8c --- /dev/null +++ b/docs/telepresence/2.13/howtos/outbound.md @@ -0,0 +1,89 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Proxy outbound traffic to my cluster + +While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster. + + This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running. + +## Proxying outbound traffic + +Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name. + +When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect` and enter your password to run the daemon. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.3.7 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''" + [sudo] password: + Connecting to traffic manager... + Connected to context default (https://<cluster public IP>) + ``` + +2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic. + + ``` + $ telepresence status + Root Daemon: Running + Version : v2.3.7 (api 3) + Primary DNS : "" + Fallback DNS: "" + User Daemon: Running + Version : v2.3.7 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https://<cluster public IP> + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 0 total + ``` + +3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster. + + ``` + $ curl web-app.emojivoto:80 + + + + + Emoji Vote + ... + ``` + +If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma separated list of namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, because services that aren't exposed through ingress controllers are only reachable from inside the cluster's network. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin; the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details.
diff --git a/docs/telepresence/2.13/howtos/package.md b/docs/telepresence/2.13/howtos/package.md
new file mode 100644
index 000000000..520662eef
--- /dev/null
+++ b/docs/telepresence/2.13/howtos/package.md
@@ -0,0 +1,178 @@
+---
+title: "How to package and share my intercept setup with my teammates"
+description: "Use telepresence intercept specs to enable your teammates faster"
+---
+# Introduction
+
+While telepresence takes care of the interception part of your setup, you usually still need to script
+some boilerplate code to run the local part (the handler) of your code.
+
+Classic solutions rely on a Makefile or bash scripts, but these become cumbersome to maintain.
+
+Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): they allow you
+to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic,
+and the actual intercept. Telepresence can then run the specification.
+
+# Getting started
+
+You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification.
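+If you don't have a cluster at hand, any disposable local cluster is enough to follow along. Below is a minimal sketch, assuming you have [kind](https://kind.sigs.k8s.io/) installed; kind is not a requirement of this guide, just one convenient option:
+
+```shell
+# Create a throwaway local cluster to experiment with Intercept Specifications.
+kind create cluster --name intercept-spec-demo
+
+# kind switches the current kubectl context automatically; verify it:
+kubectl cluster-info --context kind-intercept-spec-demo
+```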
+
+Once you have a Kubernetes cluster, you can apply this configuration to start an `echo-easy` deployment that
+we can then use for our Intercept Specification:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "echo-easy"
+spec:
+  type: ClusterIP
+  selector:
+    service: echo-easy
+  ports:
+    - name: proxied
+      port: 80
+      targetPort: http
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "echo-easy"
+  labels:
+    service: echo-easy
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: echo-easy
+  template:
+    metadata:
+      labels:
+        service: echo-easy
+    spec:
+      containers:
+        - name: echo-easy
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+              name: http
+          resources:
+            limits:
+              cpu: 50m
+              memory: 128Mi
+```
+
+You can create the local yaml file by using
+
+```console
+$ cat > echo-server.yaml <<EOF
+# (paste the Service and Deployment manifest shown above)
+EOF
+$ cat > my-intercept.yaml <<EOF
+# (your Intercept Specification goes here)
+EOF
+```
+
+3. Start the intercept:
+   `telepresence intercept <service name> --port <port> --env-file <path to env file> --mechanism http` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+   * You can remove the **--mechanism http** flag if you have your traffic-manager set to *team-mode*.
+
+4. Answer the question prompts.
+   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+   $ telepresence intercept example-service --mechanism http --ingress-host ambassador.ambassador --ingress-port 80 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
+       Preview URL            : https://<random domain name>.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+   Requests to your preview URL are now intercepted without impacting other traffic through your Ingress.
+
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+
+7. Make a request on the URL you would usually query for that environment. This request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org that you are using; this authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization, log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept name>` also removes all access to the preview URL.
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Click on the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove <intercept name>`
diff --git a/docs/telepresence/2.13/howtos/request.md b/docs/telepresence/2.13/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/2.13/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+ 2. Navigate to the desired service's Intercepts page.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information help you start querying the intercepted service and managing header propagation.
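+
+For example, a personal intercept is matched on an HTTP header (as shown in the preview URL example output), so a hand-written query needs to carry that header. Below is a minimal sketch, assuming a service reachable as `example-service.default` while `telepresence connect` is active; the header value is hypothetical and should be copied from what the **Headers** tab displays for your intercept:
+
+```console
+# Query the intercepted service directly; the header is what routes the
+# request to your local instance instead of the instance in the cluster.
+$ curl -i http://example-service.default/ \
+    -H 'x-telepresence-intercept-id: <value from the Headers tab>'
+```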
diff --git a/docs/telepresence/2.13/images/container-inner-dev-loop.png b/docs/telepresence/2.13/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.13/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.13/images/docker-extension-intercept.png b/docs/telepresence/2.13/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence/2.13/images/docker-extension-intercept.png differ diff --git a/docs/telepresence/2.13/images/docker-header-containers.png b/docs/telepresence/2.13/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.13/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_button_drop_down.png b/docs/telepresence/2.13/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..775323e56 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_connect_to_cluster.png b/docs/telepresence/2.13/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..eb95e5180 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_login.png b/docs/telepresence/2.13/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_login.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_running_intercepts_page.png b/docs/telepresence/2.13/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..7870e2691 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_start_intercept_page.png b/docs/telepresence/2.13/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..6788994e3 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_start_intercept_popup.png b/docs/telepresence/2.13/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..12839b0e5 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence/2.13/images/docker_extension_upload_spec_button.png b/docs/telepresence/2.13/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files /dev/null and b/docs/telepresence/2.13/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence/2.13/images/github-login.png b/docs/telepresence/2.13/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.13/images/github-login.png differ diff --git a/docs/telepresence/2.13/images/logo.png b/docs/telepresence/2.13/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.13/images/logo.png differ diff --git a/docs/telepresence/2.13/images/mode-defaults.png b/docs/telepresence/2.13/images/mode-defaults.png new 
file mode 100644
index 000000000..1dcca4116
Binary files /dev/null and b/docs/telepresence/2.13/images/mode-defaults.png differ
diff --git a/docs/telepresence/2.13/images/split-tunnel.png b/docs/telepresence/2.13/images/split-tunnel.png
new file mode 100644
index 000000000..5bf30378e
Binary files /dev/null and b/docs/telepresence/2.13/images/split-tunnel.png differ
diff --git a/docs/telepresence/2.13/images/tracing.png b/docs/telepresence/2.13/images/tracing.png
new file mode 100644
index 000000000..c374807e5
Binary files /dev/null and b/docs/telepresence/2.13/images/tracing.png differ
diff --git a/docs/telepresence/2.13/images/trad-inner-dev-loop.png b/docs/telepresence/2.13/images/trad-inner-dev-loop.png
new file mode 100644
index 000000000..618b674f8
Binary files /dev/null and b/docs/telepresence/2.13/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.13/images/tunnelblick.png b/docs/telepresence/2.13/images/tunnelblick.png
new file mode 100644
index 000000000..8944d445a
Binary files /dev/null and b/docs/telepresence/2.13/images/tunnelblick.png differ
diff --git a/docs/telepresence/2.13/images/vpn-dns.png b/docs/telepresence/2.13/images/vpn-dns.png
new file mode 100644
index 000000000..eed535c45
Binary files /dev/null and b/docs/telepresence/2.13/images/vpn-dns.png differ
diff --git a/docs/telepresence/2.13/install/cloud.md b/docs/telepresence/2.13/install/cloud.md
new file mode 100644
index 000000000..4f09a94ae
--- /dev/null
+++ b/docs/telepresence/2.13/install/cloud.md
@@ -0,0 +1,63 @@
+# Provider Prerequisites for Traffic Manager
+
+## GKE
+
+### Firewall Rules for private clusters
+
+A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's
+webhook injector from being invoked by the Kubernetes API server.
+For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods.
+For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:
+
+```bash
+$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
+  masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28
+
+$ gcloud compute firewall-rules list \
+    --filter 'name~^gke-tele-webhook-gke' \
+    --format 'table(
+        name,
+        network,
+        direction,
+        sourceRanges.list():label=SRC_RANGES,
+        allowed[].map().firewall_rule().list():label=ALLOW,
+        targetTags.list():label=TARGET_TAGS
+    )'
+
+NAME                                  NETWORK           DIRECTION  SRC_RANGES     ALLOW                         TARGET_TAGS
+gke-tele-webhook-gke-33fa1791-all     tele-webhook-net  INGRESS    10.40.0.0/14   esp,ah,sctp,tcp,udp,icmp      gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-master  tele-webhook-net  INGRESS    172.16.0.0/28  tcp:10250,tcp:443             gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-vms     tele-webhook-net  INGRESS    10.128.0.0/9   icmp,tcp:1-65535,udp:1-65535  gke-tele-webhook-gke-33fa1791-node
+# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node
+
+$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
+    --action ALLOW \
+    --direction INGRESS \
+    --source-ranges 172.16.0.0/28 \
+    --rules tcp:8443 \
+    --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
+Creating firewall...Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
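+# Take note of the new rule: it allows the masters' CIDR block (172.16.0.0/28)
+# to reach TCP port 8443 on the nodes tagged gke-tele-webhook-gke-33fa1791-node,
+# which is what the Traffic Manager's webhook injector needs.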
+Creating firewall...done.
+NAME                          NETWORK           DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
+gke-tele-webhook-gke-webhook  tele-webhook-net  INGRESS    1000      tcp:8443        False
+```
+
+### GKE Authentication Plugin
+
+Starting with Kubernetes version 1.26, GKE will require the use of the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
+You will need to install this plugin to use Telepresence with Docker while using GKE.
+
+If you are using the [Telepresence Docker Extension](../extension/intro), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path. If you did not install with Homebrew, you may see `command: gke-gcloud-auth-plugin` in the file; this needs to be replaced with the full path to the binary.
+You can check this by opening your kubeconfig file and looking for the `command` under the `users` entry for your GKE cluster. If you installed with Homebrew, it looks like this:
+`command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud`.
+
+## EKS
+
+### EKS Authentication Plugin
+
+If you are using a version of the AWS CLI earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html).
+You will need to install this plugin to use Telepresence with Docker while using EKS.
+
+If you are using the [Telepresence Docker Extension](../extension/intro), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path instead of a relative path.
+You can check this by opening your kubeconfig file and looking for the `command` under the `users` entry for your EKS cluster. If you installed with Homebrew, it looks like this:
+`command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator`.
\ No newline at end of file
diff --git a/docs/telepresence/2.13/install/helm.md b/docs/telepresence/2.13/install/helm.md
new file mode 100644
index 000000000..2709ee8f3
--- /dev/null
+++ b/docs/telepresence/2.13/install/helm.md
@@ -0,0 +1,181 @@
+# Install the Traffic Manager with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+## Before you begin
+
+Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+ +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. 
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. + subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence/2.13/install/index.md b/docs/telepresence/2.13/install/index.md new file mode 100644 index 000000000..e843dbb9b --- /dev/null +++ b/docs/telepresence/2.13/install/index.md @@ -0,0 +1,157 @@ +import Platform from '@src/components/Platform'; + +# Install + +Install Telepresence by running the commands below for your OS. 
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the versions you want.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
+
+
diff --git a/docs/telepresence/2.13/install/manager.md b/docs/telepresence/2.13/install/manager.md
new file mode 100644
index 000000000..4efdc3c69
--- /dev/null
+++ b/docs/telepresence/2.13/install/manager.md
@@ -0,0 +1,85 @@
+# Install/Uninstall the Traffic Manager
+
+Telepresence uses a Traffic Manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The Telepresence CLI can install the Traffic Manager for you. The basic install deploys the same version as the client used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager.
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a values.yaml file with your config values.
+
+2. Run the install command with the values flag set to the path to your values file.
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+## Upgrading/Downgrading the Traffic Manager.
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag.
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+## Uninstall
+
+The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-Ingress or Edge-Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager`, you can add the flag `--set ambassador-agent.enabled=false` to not include the ambassador-agent.
Emissary and Edge-Stack both already include this agent within their deployments.
+
+If your namespace runs with tight security parameters, you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=<API_KEY>`.
+The [API Key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override the API key given via Helm.
+
+### Creating a secret manually
+The Ambassador agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself. This API key will always be used.
+
+ ```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <name ending in agent-cloud-token>
+  namespace: <namespace>
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <your API key>
+EOF
+ ```
\ No newline at end of file
diff --git a/docs/telepresence/2.13/install/migrate-from-legacy.md b/docs/telepresence/2.13/install/migrate-from-legacy.md
new file mode 100644
index 000000000..94307dfa1
--- /dev/null
+++ b/docs/telepresence/2.13/install/migrate-from-legacy.md
@@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context <your k8s context>
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command | Telepresence Command |
+|--------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload | intercept $workload |
+| --expose localPort[:remotePort] | intercept --port localPort[:remotePort] |
+| --swap-deployment $workload --run-shell | intercept $workload -- bash |
+| --swap-deployment $workload --run $cmd | intercept $workload -- $cmd |
+| --swap-deployment $workload --docker-run $cmd | intercept $workload --docker-run -- $cmd |
+| --run-shell | connect -- bash |
+| --run $cmd | connect -- $cmd |
+| --env-file,--env-json | --env-file, --env-json (haven't changed) |
+| --context,--namespace | --context, --namespace (haven't changed) |
+| --mount,--docker-mount | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that that flag is "unsupported".
There's no harm in leaving the agent running alongside your service, but when you +want to remove them from the cluster, the following Telepresence command will help: +``` +$ telepresence uninstall --help +Uninstall telepresence agents + +Usage: + telepresence uninstall [flags] { --agent |--all-agents } + +Flags: + -d, --agent uninstall intercept agent on specific deployments + -a, --all-agents uninstall intercept agent on all deployments + -h, --help help for uninstall + -n, --namespace string If present, the namespace scope for this CLI request +``` + +Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at +our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence. + +The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence/2.13/install/upgrade.md b/docs/telepresence/2.13/install/upgrade.md new file mode 100644 index 000000000..def572362 --- /dev/null +++ b/docs/telepresence/2.13/install/upgrade.md @@ -0,0 +1,83 @@ +--- +description: "How to upgrade your installation of Telepresence and install previous versions." +--- + +# Upgrade Process +The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version. + +Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur` +if your current version is less than 2.8.0). + + + + +```shell +# Intel Macs + +# Upgrade via brew: +brew upgrade datawire/blackbird/telepresence + +# OR upgrade manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +The [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) can upgrade Telepresence, or if you installed it with Powershell: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. 
It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade +the Traffic Manager in your cluster. diff --git a/docs/telepresence/2.13/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.13/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+      {/* Placeholder: the JSX markup for this component was lost in extraction.
+          The landing page renders a hero section ("Telepresence": "Set up your
+          ideal development environment for Kubernetes in seconds. Accelerate
+          your inner development loop with hot reload using your existing IDE,
+          and workflow."), a "Set Up Telepresence with Ambassador Cloud" card
+          ("Seamlessly integrate Telepresence into your existing Kubernetes
+          environment by following our 3-step setup guide.") with a
+          "Get Started" link pointing at getStartedUrl and a "Do it Yourself"
+          link to install Telepresence and manually connect to your Kubernetes
+          workloads, and a "What Can Telepresence Do for You?" section listing
+          what Telepresence gives Kubernetes application developers: instant
+          feedback loops, remote development environments, access to your
+          favorite local tools, and easy collaborative development with
+          teammates, followed by a "LEARN MORE" link. */}
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.13/quick-start/demo-node.md b/docs/telepresence/2.13/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+ Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs make this kind of collaboration easy.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+Now you're able to share your fix in your local environment with your team!
+
+ To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
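+If you want to double-check the intercept from the command line, `telepresence list` is a quick way to do so. This is a sketch, assuming you run it inside the Docker container that created the intercept and that the intercept from step 4 is still active:
+
+```shell
+# The `web` workload should be reported as intercepted.
+telepresence list
+```
+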
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.13/quick-start/demo-react.md b/docs/telepresence/2.13/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.13/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+
+In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+ While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+
+## 1. Download the demo cluster archive
+
+1.
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+ This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME              STATUS  ROLES                 AGE    VERSION
+   tpdemo-prod-1234  Ready   control-plane,master  5d10h  v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
+   `telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+ macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+
+ You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+
+## 2. Test Telepresence
+
+[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+   `telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+   `curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+   ```
+
+ The 401 response is expected. What's important is that you were able to contact the API.
+
+ Congratulations! You've just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you're able to use any tool that you have locally to connect to any service in the cluster.
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we'll show you how Telepresence can give you a fast development loop, even in this situation.
+
+1. Clone the emojivoto app:
+   `git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+   `kubectl apply -k emojivoto/kustomize/deployment`
+
+1. Change the kubectl namespace:
+   `kubectl config set-context --current --namespace=emojivoto`
+
+1.
List the Services:
`kubectl get svc`

    ```
    $ kubectl get svc

    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
    voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
    web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
    web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
    ```

1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name, in the form `service.namespace`.


    Congratulations, you can now access services running in your cluster by name from your laptop!


## 4. Test app

1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.

1. There is one emoji that causes an error when you vote for it. Vote for 🍩: the leaderboard does not update, and an error appears in the browser dev console:
`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`


    Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
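Since Telepresence is still connected to the cluster, you can also reproduce the failing request from your terminal rather than the browser. A minimal sketch, reusing the same `web-svc.emojivoto` endpoint shown in the dev console error above:

```
# Hit the vote endpoint directly; the 500 response confirms the
# failure is in a backend service, not in the frontend.
$ curl -i "http://web-svc.emojivoto:8080/api/vote?choice=:doughnut:"

HTTP/1.1 500 Internal Server Error
...
```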
+

The error is in a backend service, so **we can add an error page to notify the user** while the bug is being fixed.

## 5. Run a service on your laptop

Now start the `web-app` service on your laptop. We'll then make a code change and intercept the service, so that we can see the results of that change immediately.

1. **In a new terminal window**, change into the repo directory and build the application:

    `cd /emojivoto`
    `make web-app-local`

    ```
    $ make web-app-local

    ...
    webpack 5.34.0 compiled successfully in 4326 ms
    ✨  Done in 5.38s.
    ```

2. Change into the service's code directory and start the server:

    `cd emojivoto-web-app`
    `yarn webpack serve`

    ```
    $ yarn webpack serve

    ...
    ℹ 「wds」: Project is running at http://localhost:8080/
    ...
    ℹ 「wdm」: Compiled successfully.
    ```

3. Access the application at [http://localhost:8080](http://localhost:8080) and confirm that voting for the 🍩 generates the same error as the application deployed in the cluster.


    Victory, your local React server is running a-ok!


## 6. Make a code change
We’ve now set up a local development environment for the app. Next we'll make and locally test a code change that improves how the app handles votes for 🍩.

1. In the terminal running webpack, stop the server with `Ctrl+c`.

1. In your preferred editor, open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).

1. Run webpack to fully recompile the code, then start the server again:

    `yarn webpack`
    `yarn webpack serve`

1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice that you now see a clear error message instead of a silent failure, improving the user experience.

## 7. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.


    This command must be run in the terminal window where you ran the script, because the script set environment variables to access the demo cluster. Those variables only apply to that terminal session.


1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
`telepresence intercept web-app --namespace emojivoto --port 8080`

    ```
    $ telepresence intercept web-app --namespace emojivoto --port 8080

    Using deployment web-app
    intercepted
    Intercept name: web-app-emojivoto
    State         : ACTIVE
    ...
    ```

2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.


    The web-app Deployment is being intercepted and rerouted to the server on your laptop!



    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
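If you want to see which intercepts are active, or put things back the way they were once you're done experimenting, the CLI can do both. A short sketch, assuming the intercept name `web-app-emojivoto` reported above:

```
# Show active intercepts in the emojivoto namespace
$ telepresence list --namespace emojivoto

# Remove the intercept; traffic flows to the cluster version again
$ telepresence leave web-app-emojivoto
```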
+ +## What's Next? + + diff --git a/docs/telepresence/2.13/quick-start/go.md b/docs/telepresence/2.13/quick-start/go.md new file mode 100644 index 000000000..c926d7b05 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+

## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services.
Test out the application:

1. Go to the Emojivoto webapp and vote for some emojis.

    If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.


2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting for 🍩 doesn't work.

## 3. Run the Docker container

The bug is in the `voting-svc` service, so that's the service you'll run locally. To save time, we've prepared a Docker container with this service running and everything you need to fix the bug.

1. Run the Docker container locally by running this command in your local terminal:


2. The application is failing due to a small bug in this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:

    ```
    $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

    Resolved method descriptor:
    rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

    Request metadata to send:
    (empty)

    Response headers received:
    (empty)

    Response trailers received:
    content-type: application/grpc
    Sent 0 requests and received 0 responses
    ERROR:
      Code: Unknown
      Message: ERROR
    ```

3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` import by deleting line 5.

    ```go
    3  import (
    4      "context"
    5      "fmt" // DELETE THIS LINE
    6
    7      pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
    ```

    and also replace line `21`:

    ```go
    20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
    21      return nil, fmt.Errorf("ERROR")
    22  }
    ```
    with
    ```go
    20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
    21      return pS.vote(":doughnut:")
    22  }
    ```
    Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac or `Menu -> File -> Save`) and verify that the error is fixed:

    ```
    $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

    Resolved method descriptor:
    rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

    Request metadata to send:
    (empty)

    Response headers received:
    content-type: application/grpc

    Response contents:
    {
    }

    Response trailers received:
    (empty)
    Sent 0 requests and received 1 response
    ```

## 4. Telepresence intercept

1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic to the service and route it through your local version.
Run the following command inside the container:

    ```
    $ telepresence intercept voting --port 8081:8080

    Using Deployment voting
    intercepted
    Intercept name         : voting
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8081
    Service Port Identifier: 8080
    Volume Mount Point     : /tmp/telfs-XXXXXXXXX
    Intercepting           : all TCP connections
    ```
    Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.

You have created an intercept to tell Telepresence where to send traffic. 
All traffic destined for the `voting-svc` service in the cluster is now routed to the local, Dockerized version of the service, where the bug has been fixed.


    Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!


## 5. Telepresence intercept with a preview URL

Preview URLs allow you to safely share your development environment. With a preview URL you have full control over which traffic is handled by your local service, so you can test your changes more precisely.

1. First leave the current intercept:

    ```
    $ telepresence leave voting
    ```

2. Then log in to Telepresence:

    

3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.

    

4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

5. Vote for the 🍩 emoji using the preview URL obtained in the previous step, and you will see that the bug is fixed, since this traffic is being routed to the fixed version running locally.

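When you're finished with the preview URL, you can clean up the same way as before. A quick sketch, assuming the intercept is still named `voting`:

```
# Remove the intercept so traffic returns to the cluster version
$ telepresence leave voting

# Optionally disconnect from the demo cluster entirely
$ telepresence quit
```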
+ +## What's Next? + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.13/quick-start/index.md b/docs/telepresence/2.13/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.13/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.13/quick-start/qs-cards.js b/docs/telepresence/2.13/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+
      
        
          
            
              Collaborating
            
          
          
            Use preview URLs to collaborate with your colleagues and others
            outside of your organization.
          
        
      
      
        
          
            
              Outbound Sessions
            
          
          
            While connected to the cluster, your laptop can interact with
            services as if it were another pod in the cluster.
          
        
      
      
        
          
            
              FAQs
            
          
          
            Learn more about use cases and the technical implementation of
            Telepresence.
          
        
      
    
+ ); +} diff --git a/docs/telepresence/2.13/quick-start/qs-go.md b/docs/telepresence/2.13/quick-start/qs-go.md new file mode 100644 index 000000000..db88178c3 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-go.md @@ -0,0 +1,398 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... 
Connected to context default (https://<cluster public IP>)
    ```


    macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
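If you'd like to confirm the daemons are up before moving on, `telepresence status` gives a quick summary. A sketch of what to expect; the exact output varies by version:

```
$ telepresence status

Root Daemon: Running
  ...
User Daemon: Running
  ...
```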
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/BUGLAN/fresh) to support auto reloading of the Go server, which we'll use later. Confirm it is installed by running: + `go get github.com/pilu/fresh` + Then start the Go server: + `$GOPATH/bin/fresh` + + ``` + $ go get github.com/pilu/fresh + + $ $GOPATH/bin/fresh + + ... + 10:23:41 app | Welcome to the DataProcessingGoService! + ``` + + + Install Go from here and set your GOPATH if needed. + + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Go server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
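You can also verify the change from the terminal before checking the browser. Since Fresh auto-reloads the Go server, curling the local service again should now return the new color:

```
$ curl localhost:3000/color

"orange"
```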
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.13/quick-start/qs-java.md b/docs/telepresence/2.13/quick-start/qs-java.md new file mode 100644 index 000000000..7b7ce4cd6 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-java.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... 
Connected to context default (https://<cluster public IP>)
    ```


    macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
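Since you restarted the Java server, it's worth confirming that the intercept from the previous section is still active before moving on. A sketch using the CLI (output abridged):

```
$ telepresence list

dataprocessingservice: intercepted
    Intercept name: dataprocessingservice
    State         : ACTIVE
    ...
```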
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.13/quick-start/qs-node.md b/docs/telepresence/2.13/quick-start/qs-node.md new file mode 100644 index 000000000..f24d2feba --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-node.md @@ -0,0 +1,386 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... 
Connected to context default (https://<cluster public IP>)
    ```


    macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
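To see the change the way the rest of the cluster sees it, you can also curl the service by its cluster DNS name; with the intercept active, the request is served by the Node process on your laptop. A sketch, assuming the service's cluster port is also 3000:

```
$ curl dataprocessingservice.default:3000/color

"orange"
```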
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.13/quick-start/qs-python-fastapi.md b/docs/telepresence/2.13/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..897da9632 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-python-fastapi.md @@ -0,0 +1,383 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... 
+ Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+`pip install fastapi uvicorn requests && python app.py`
+(If `pip` and `python` on your system point at Python 2, use `pip3` and `python3` instead; FastAPI requires Python 3.)

   ```
   $ pip install fastapi uvicorn requests && python app.py

   Collecting fastapi
   ...
   Application startup complete.

   ```

 Install Python from here if needed. 

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```


 Victory, your local service is running a-ok! 


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   Workload kind : Deployment
   Destination   : 127.0.0.1:3000
   Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


 The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! 


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto-reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


 We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. 
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
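+
+For reference, here is a minimal sketch of what the `/color` endpoint in `app.py` might look like. The names and layout are assumptions for illustration only; the actual file in the repo may differ:
+
+```python
+# Hypothetical sketch of the DataProcessingService /color endpoint.
+from fastapi import FastAPI
+import uvicorn
+
+app = FastAPI()
+
+DEFAULT_COLOR = "blue"  # changing this to "orange" triggers the auto-reload
+
+@app.get("/color")
+def get_color():
+    # FastAPI serializes the return value to JSON, so curl sees "blue"
+    return DEFAULT_COLOR
+
+if __name__ == "__main__":
+    # reload=True enables uvicorn's auto-reloader on file changes
+    uvicorn.run("app:app", host="0.0.0.0", port=3000, reload=True)
+```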
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.13/quick-start/qs-python.md b/docs/telepresence/2.13/quick-start/qs-python.md new file mode 100644 index 000000000..90e83a717 --- /dev/null +++ b/docs/telepresence/2.13/quick-start/qs-python.md @@ -0,0 +1,394 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +Download the [installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) or use these Powershell commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... 
+ Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server:
+`pip install flask requests && python app.py`
+(If `pip` and `python` on your system point at Python 2, use `pip3` and `python3` instead.)

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...

   ```

 Install Python from here if needed. 

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```


 Victory, your local Python server is running a-ok! 


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   Workload kind : Deployment
   Destination   : 127.0.0.1:3000
   Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


 The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! 


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto-reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


 We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. 
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
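+
+For reference, here is a minimal sketch of what the `/color` route in `app.py` might look like. The names and layout are assumptions for illustration only; the actual file in the repo may differ:
+
+```python
+# Hypothetical sketch of the DataProcessingService /color route.
+from flask import Flask, jsonify
+
+app = Flask(__name__)
+
+DEFAULT_COLOR = "blue"  # changing this to "orange" triggers the auto-reload
+
+@app.route("/color")
+def get_color():
+    # jsonify produces a JSON response, so curl sees "blue"
+    return jsonify(DEFAULT_COLOR)
+
+if __name__ == "__main__":
+    # debug=True enables Flask's auto-reloader on file changes
+    app.run(host="0.0.0.0", port=3000, debug=True)
+```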
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.13/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.13/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.13/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.13/redirects.yml b/docs/telepresence/2.13/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.13/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.13/reference/architecture.md b/docs/telepresence/2.13/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.13/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
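+
+A quick way to see several of these components from the workstation is `telepresence status`, which reports on the daemons described below. The output here is only a rough sketch (abridged, and it varies by version):
+
+```console
+$ telepresence status
+Root Daemon: Running
+  ...
+User Daemon: Running
+  Status : Connected
+  ...
+```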
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the
+cluster's network, handling intercepted traffic on the workstation's behalf.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run telepresence login, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing open source User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm Chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment, and if it is missing it will make an attempt to install it using its embedded Helm Chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+ +Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have +accessed them and deleting them. + +## Pod-Daemon + +The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that +it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely +within the cluster with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed. + +The Pod-Daemon will take arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept +should be run on and to provide similar configuration that would be provided when using Telepresence intercepts from the command line. + +After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent) +on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon +container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster. +The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually. + +The Pod-Daemon was created for use as a component of Deployment Previews in order to automatically create intercepts with development images built +by CI so that changes from pull requests can be quickly visualized in a live cluster before changes are landed by accessing the Preview URL +link which would be posted to an associated GitHub pull request when using Deployment Previews. + +See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews +or for a reference on how Pod-Daemon can be manually deployed to the cluster. + + +# Changes from Service Preview + +Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an +annotation. This is no longer required as the Traffic Agent is automatically injected when an intercept is started. + +Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept +as this functionality has been moved to the Telepresence CLI. + +For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not +required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions in the cluster to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.13/reference/client.md b/docs/telepresence/2.13/reference/client.md new file mode 100644 index 000000000..84137db98 --- /dev/null +++ b/docs/telepresence/2.13/reference/client.md @@ -0,0 +1,31 @@ +--- +description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Client reference + +The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `. + +## Commands + +A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones. 
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+|----------------------|-------------|
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>` (use `<port>/UDP` to force UDP). This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
+| `loglevel` | Temporarily changes the log level of the traffic-manager, traffic-agents, and the user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI and of the Traffic Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
diff --git a/docs/telepresence/2.13/reference/client/login.md b/docs/telepresence/2.13/reference/client/login.md
new file mode 100644
index 000000000..fc90ea385
--- /dev/null
+++ b/docs/telepresence/2.13/reference/client/login.md
@@ -0,0 +1,61 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud). Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the interactive `telepresence login` procedure
+as necessary, so it is rarely necessary to run `telepresence login`
+explicitly; you should only truly need to when you require a
+non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs an enhanced
+Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts
+from Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/ .
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
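+
+For example, in CI you might log in non-interactively like this (a sketch; `TELEPRESENCE_API_KEY` is a hypothetical environment variable holding the key generated above):
+
+```console
+$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
+Login successful.
+```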
\ No newline at end of file
diff --git a/docs/telepresence/2.13/reference/client/login/apikey-2.png b/docs/telepresence/2.13/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/2.13/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/2.13/reference/client/login/apikey-3.png b/docs/telepresence/2.13/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/2.13/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/2.13/reference/client/login/apikey-4.png b/docs/telepresence/2.13/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/2.13/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/2.13/reference/cluster-config.md b/docs/telepresence/2.13/reference/cluster-config.md
new file mode 100644
index 000000000..087bbf9af
--- /dev/null
+++ b/docs/telepresence/2.13/reference/cluster-config.md
@@ -0,0 +1,363 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).
+
+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.
+
+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.
+
+### Values
+To add configuration, create a yaml file with the configuration values and then use it by executing `telepresence helm install [--upgrade] --values <values-file>`.
+
+## Client Configuration
+
+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value        | Resulting action                                                                                                              |
+|--------------|-------------------------------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default.  |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port                             |
+| `http`       | Telepresence will use http                                                                                                     |
+| `http2`      | Telepresence will use http2                                                                                                    |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols
+are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Envoy Configuration
+
+The `agent.envoy` structure contains three values:
+
+| Setting      | Meaning                                                  |
+|--------------|----------------------------------------------------------|
+| `logLevel`   | Log level used by the Envoy proxy. Defaults to "warning" |
+| `serverPort` | Port used by the Envoy server. Default 18000.            |
+| `adminPort`  | Port used for Envoy administration. Default 19000.       |
+
+#### Image Configuration
+
+The `agent.image` structure contains the following values:
+
+| Setting    | Meaning                                                                      |
+|------------|-------------------------------------------------------------------------------|
+| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire".  |
+| `name`     | The name of the image. Retrieved from Ambassador Cloud if not set.           |
+| `tag`      | The tag of the image. Retrieved from Ambassador Cloud if not set.            |
+
+#### Log level
+
+The `agent.LogLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.
+
+#### Resources
+
+The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.
+
+## TLS
+
+In this example, other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`) you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
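+
+As an illustration, the Secrets named by these annotations could be created from existing certificate files with `kubectl` (a sketch; the secret and file names here follow the hypothetical examples above):
+
+```console
+$ kubectl create secret tls your-terminating-secret --cert=server.crt --key=server.key
+$ kubectl create secret tls your-originating-secret --cert=client.crt --key=client.key
+```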
+ +It is only possible to refer to a Secret that is in the same Namespace +as the Pod. The Secret will be mounted into the traffic agent's container. + +Telepresence understands `type: kubernetes.io/tls` Secrets and +`type: istio.io/key-and-cert` Secrets; as well as `type: Opaque` +Secrets that it detects to be formatted as one of those types. + +## Air-gapped cluster + + +If your cluster is on an isolated network such that it cannot +communicate with Ambassador Cloud, then some additional configuration +is required to acquire a license key in order to use personal +intercepts. + +### Create a license + +1. + +2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*. + +3. You will be prompted for your Cluster ID. Ensure your +kubeconfig context is using the cluster you want to create a license for then +run this command to generate the Cluster ID: + + ``` + $ telepresence current-cluster-id + + Cluster ID: + ``` + +4. Click *Generate API Key* to finish generating the license. + +5. On the licenses page, download the license file associated with your cluster. + +### Add license to cluster +There are two separate ways you can add the license to your cluster: manually creating and deploying +the license secret or having the helm chart manage the secret + +You only need to do one of the two options. + +#### Manual deploy of license secret + +1. Use this command to generate a Kubernetes Secret config using the license file: + + ``` + $ telepresence license -f + + apiVersion: v1 + data: + hostDomain: + license: + kind: Secret + metadata: + creationTimestamp: null + name: systema-license + namespace: ambassador + ``` + +2. Save the output as a YAML file and apply it to your +cluster with `kubectl`. + +3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting +the following into a file (for the example we'll assume it's called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + secret: + # This tells the helm chart not to create the secret since you've created it yourself + create: false + ``` + +4. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) +pulled and in a registry your cluster can pull from. + +6. Have users use the `images` [config key](../config/#images) keys so telepresence uses the aforementioned image for their agent. + +#### Helm chart manages the secret + +1. Get the jwt token from the downloaded license file + + ``` + $ cat ~/Downloads/ambassador.License_for_yourcluster + eyJhbGnotarealtoken.butanexample + ``` + +2. Create the following values file, substituting your real jwt token in for the one used in the example below. +(for this example we'll assume the following is placed in a file called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + # This is the value from the license file you download. this value is an example and will not work + value: eyJhbGnotarealtoken.butanexample + secret: + # This tells the helm chart to create the secret + create: true + ``` + +3. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +Users will now be able to use preview intercepts with the +`--preview-url=false` flag. 
Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Ignore Certain Volume Mounts
+
+An annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.
+
+```diff
+ spec:
+   template:
+     metadata:
+       annotations:
++        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service is pointing at a port number, in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+ + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.13/reference/config.md b/docs/telepresence/2.13/reference/config.md new file mode 100644 index 000000000..e69c77daa --- /dev/null +++ b/docs/telepresence/2.13/reference/config.md @@ -0,0 +1,349 @@ +# Laptop-side configuration + +There are a number of configuration values that can be tweaked to change how Telepresence behaves. +These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user. +One important exception is the location of the traffic manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect. + +## Global Configuration + +Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager. +To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set. + +### Values + +The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`. + +Here is an example configuration to show you the conventions of how Telepresence is configured: +**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist** + +```yaml +client: + timeouts: + agentInstall: 1m + intercept: 10s + logLevels: + userDaemon: debug + images: + registry: privateRepo # This overrides the default docker.io/datawire repo + agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting + cloud: + refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week. + grpc: + maxReceiveSize: 10Mi + telepresenceAPI: + port: 9980 + dns: + includeSuffixes: [.private] + excludeSuffixes: [.se, .com, .io, .net, .org, .ru] + lookupTimeout: 30s + routing: + alsoProxySubnets: + - 1.2.3.4/32 + neverProxySubnets: + - 1.2.3.4/32 +``` + +#### Timeouts + +Values for `client.timeouts` are all durations either as a number of seconds +or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings +can be fractional (`1.5h`) or combined (`2h45m`). 
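+
+For example, a `client.timeouts` fragment in your Helm values might look like this (a sketch using fields from the table below, showing both the numeric and the string forms):
+
+```yaml
+client:
+  timeouts:
+    agentInstall: 120   # plain number of seconds
+    intercept: 10s      # duration string
+    helm: 2h45m         # combined duration units
+```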
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `client.logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+
+For whichever log-level you select, you will get logs labeled with that level and of higher severity.
+(For example, if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`.)
+
+These are the valid fields for the `client.logLevels` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
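+
+As an illustration of how the plain and webhook-specific fields below differ, here is a hypothetical values fragment (the registry shown is a made-up placeholder):
+
+```yaml
+client:
+  images:
+    registry: registry.example.com/telepresence          # used when installing the Traffic Manager and default agent
+    webhookRegistry: registry.example.com/telepresence   # used by the Traffic Manager's webhook injector
+```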
+
+These are the valid fields for the `client.images` key:
+
+| Field               | Description | Type | Default |
+|---------------------|-------------|------|---------|
+| `registry`          | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `agentImage`        | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
+| `webhookRegistry`   | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |
+
+#### Cloud
+
+Values for `client.cloud` are listed below and their types vary, so please see the chart for the expected type for each config value.
+These fields control how the client interacts with the Cloud service.
+
+| Field             | Description | Type | Default |
+|-------------------|-------------|------|---------|
+| `skipLogin`       | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false |
+| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config. | [duration][go-duration] [string][yaml-str] | 168h |
+| `systemaHost`     | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
+| `systemaPort`     | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |
+
+Telepresence attempts to auto-detect if the cluster is capable of
+communication with Ambassador Cloud, but it may still prompt you to log
+in in cases where only the on-laptop client wishes to communicate with
+Ambassador Cloud. If you want those auto-login points to be disabled
+as well, or would like it to not attempt to communicate with
+Ambassador Cloud at all (even for the auto-detection), then be sure to
+set the `skipLogin` value to `true`.
+
+Reminder: To use personal intercepts, which normally require a login,
+you must have a license key in your cluster and specify which
+`agentImage` should be installed by also adding the following to your
+`config.yml`:
+
+```yaml
+images:
+  agentImage: <registry>/<image-name>
+```
+
+#### Grpc
+
+The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+
+The `client.telepresenceAPI` key controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Intercept
+
+The `intercept` key controls how Telepresence will intercept communications to the intercepted service.
+
+The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".
+
+The `appProtocolStrategy` is only relevant when using personal intercepts. It controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:
+
+| Value        | Resulting action                                                                                         |
+|--------------|-----------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2   |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port        |
+| `http`       | Telepresence will use http                                                                                |
+| `http2`      | Telepresence will use http2                                                                               |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+#### Daemons
+
+`client.daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+### DNS
+
+The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
+
+| Field             | Description | Type | Default |
+|-------------------|-------------|------|---------|
+| `localIP`         | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fall back in the case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookupTimeout`   | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+Here is an example values.yaml:
+```yaml
+client:
+  dns:
+    includeSuffixes: [.private]
+    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
+    localIP: 8.8.8.8
+    lookupTimeout: 30s
+```
+
+### Routing
+
+#### AlsoProxySubnets
+
+When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
+All connections to addresses within those subnets will be dispatched to the cluster.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+```yaml
+client:
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### NeverProxySubnets
+
+When using `neverProxySubnets`, you provide a list of subnets. These will never be routed via the TUN device,
+even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+Telepresence connects is the route they will keep.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+#### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
+Usually this means that the most specific route will win.
+
+So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:
+
+```yaml
+neverProxySubnets: [10.0.0.0/16]
+alsoProxySubnets: [10.0.5.0/24]
+```
+
+Then the specific `alsoProxySubnets` subnet `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:
+
+```yaml
+alsoProxySubnets: [10.0.0.0/16]
+neverProxySubnets: [10.0.5.0/24]
+```
+
+Then all of the `alsoProxySubnets` subnet `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` subnet `10.0.5.0/24`.
+
+## Local Overrides
+
+In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
+There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
+that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.
+
+### Config for all clusters
+
+Telepresence uses a `config.yml` file to store and change the configuration values that will be used for all clusters you use Telepresence with.
+The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
+The definitions of these values are identical to those in the `client` config above.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**Note: this config shouldn't be used verbatim, since the registry `privateRepo` doesn't exist.**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+cloud:
+  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+grpc:
+  maxReceiveSize: 10Mi
+telepresenceAPI:
+  port: 9980
+```
+
+## Workstation Per-Cluster Configuration
+
+Configuration that is specific to a cluster can also be overridden per workstation by modifying your `$KUBECONFIG` file.
+It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
+that all users who connect to the Traffic Manager will have the same routing and DNS resolution behavior.
+An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.
+
+### Values
+
+The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
+
+Example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+        dns:
+          include-suffixes: [.private]
+          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
+          local-ip: 8.8.8.8
+          lookup-timeout: 30s
+        never-proxy: [10.0.0.0/16]
+        also-proxy: [10.0.5.0/24]
+  name: example-cluster
+```
+
+#### Manager
+
+This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
+the Traffic Manager. When it is not the default, that setting needs to be configured in the workstation's kubeconfig for the cluster.
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the Traffic Manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+        - name: telepresence.io
+          extension:
+            manager:
+              namespace: staging
+    name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence/2.13/reference/dns.md b/docs/telepresence/2.13/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/2.13/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+With no intercepts currently running, we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster public IP>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="UTF-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Using the namespace-qualified DNS name, however, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+      Intercept name    : web
+      State             : ACTIVE
+      Workload kind     : Deployment
+      Destination       : 127.0.0.1:8080
+      Volume Mount Point: /tmp/telfs-166119801
+      Intercepting      : HTTP requests that match all headers:
+            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  <!DOCTYPE html>
+  <html>
+  <head>
+    <meta charset="UTF-8">
+    <title>Emoji Vote</title>
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.13/reference/docker-run.md b/docs/telepresence/2.13/reference/docker-run.md
new file mode 100644
index 000000000..27b2f316f
--- /dev/null
+++ b/docs/telepresence/2.13/reference/docker-run.md
@@ -0,0 +1,90 @@
+---
+description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+## Use the Intercept Specification
+
+The recommended way to use Telepresence with Docker is to create an [Intercept Specification](../intercepts/specs) that uses Docker images as intercept handlers.
+
+## Using command flags
+
+### The docker flag
+
+You can start the Telepresence daemon in a Docker container on your laptop using the command:
+
+```console
+$ telepresence connect --docker
+```
+
+The `--docker` flag is a global flag; when it is passed directly, as in `telepresence intercept --docker`, the implicit connect that takes place when no connection is active will use a container-based daemon.
+
+### The docker-run flag
+
+If you want your intercept to go to another Docker container, you can use the `--docker-run` flag. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+```console
+$ telepresence intercept <service-name> --port <port> --docker-run -- <docker-run-flags> <image>
+```
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+It's recommended that you always use `--docker-run` in combination with the global `--docker` flag, because that makes everything less intrusive (a minimal example follows the list below):
+- No admin user access is needed. Network modifications are confined to a Docker network.
+- There's no need for special filesystem mount software like MacFUSE or WinFSP. The volume mounts happen in the Docker engine.
+
+The following happens under the hood when both flags are in use:
+
+- The network for the intercept handler will be set to the same network used by the daemon. This guarantees that the
+  intercept handler can access the Telepresence VIF, and hence has access to the cluster.
+- Volume mounts will be automatic and made using the Telemount Docker volume plugin so that all volumes exposed by the intercepted
+  container are mounted on the intercept handler container.
+- The environment of the intercepted container becomes the environment of the intercept handler container.
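+
+A minimal sketch of the combination (the service and image names here are illustrative):
+
+```console
+$ telepresence connect --docker
+$ telepresence intercept my-service --port 8080 --docker-run -- my-image:dev
+```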
+
+### The docker-build flag
+
+The `--docker-build <docker-context>` flag and the repeatable `--docker-build-opt key=value` flag enable containers to be built on the fly by the intercept command.
+
+When using `--docker-build`, the image name used in the argument list must be verbatim `IMAGE`. The word acts as a placeholder and will be replaced by the ID of the image that is built.
+
+The `--docker-build` flag implies `--docker-run`.
+
+## Using the docker-run flag without docker
+
+It is possible to use `--docker-run` with a daemon running on your host, which is the default behavior of Telepresence.
+
+However, it isn't recommended, since you'll be in a hybrid mode: while your intercept runs in a container, the daemon will modify the host network, and if remote mounts are desired, they may require extra software.
+
+The ability to use this special combination is retained for backward compatibility reasons. It might be removed in a future release of Telepresence.
+
+The `--port` flag has slightly different semantics and can be used in situations when the local and container port must be different. This
+is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.
+
+## Examples
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+```console
+$ telepresence intercept --docker frontend-v1 --port 8000 --docker-run -- frontend-v2
+```
+
+Now, imagine that the `frontend-v2` image is built by a `Dockerfile` that resides in the directory `images/frontend-v2`. You can build and intercept directly.
+
+```console
+$ telepresence intercept --docker frontend-v1 --port 8000 --docker-build images/frontend-v2 --docker-build-opt tag=mytag -- IMAGE
+```
+
+## Automatic flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--env-file <file>` Loads the intercepted environment
+- `--name intercept-<name>-<namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-v <mount-spec>` Volume mount specification; see CLI help for the `--docker-mount` flags for more info
+
+When used with a container-based daemon:
+- `--rm` Mandatory, because the volume mounts cannot be removed until the container is removed.
+- `-v <volume>:<container-path>` Volume mount specifications propagated from the intercepted container
+
+When used with a daemon that isn't container-based:
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `-p <local-port>:<container-port>` The local port for the intercept and the container port
diff --git a/docs/telepresence/2.13/reference/environment.md b/docs/telepresence/2.13/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.13/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code running on your laptop for the service being intercepted.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+
+ID of the intercept (same as the "x-intercept-id" HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence/2.13/reference/inside-container.md b/docs/telepresence/2.13/reference/inside-container.md
new file mode 100644
index 000000000..48a38b5a3
--- /dev/null
+++ b/docs/telepresence/2.13/reference/inside-container.md
@@ -0,0 +1,19 @@
+# Running Telepresence inside a container
+
+All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a
+Docker container.
+
+Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and
+it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software
+like macFUSE or WinFSP to mount the remote file systems.
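+
+A quick, hedged sketch of what this looks like in practice (output trimmed, and the daemon container name shown is hypothetical):
+
+```console
+$ telepresence connect --docker
+Connected to context default (https://<cluster public IP>)
+$ docker ps --format '{{.Names}}'
+tp-default
+```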
+
+The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only
+way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.
+
+It's highly recommended that you use the new [Intercept Specification](../intercepts/specs) to set things up. That way, Telepresence can do
+all the plumbing needed to start the intercept handler with the correct environment and volume mounts.
+Otherwise, doing a fully container-based intercept manually with all the bells and whistles is a complicated process that involves:
+
+- Capturing the details of an intercept
+- Ensuring that the [Telemount](https://github.com/datawire/docker-volume-telemount#readme) Docker volume plugin is installed
+- Creating volumes for all remotely exposed directories
+- Starting the intercept handler container using the same network as the daemon
diff --git a/docs/telepresence/2.13/reference/intercepts/cli.md b/docs/telepresence/2.13/reference/intercepts/cli.md
new file mode 100644
index 000000000..87588b3fe
--- /dev/null
+++ b/docs/telepresence/2.13/reference/intercepts/cli.md
@@ -0,0 +1,335 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Configuring intercepts using the CLI
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option. When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload, and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../environment/) for more details.
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept <service-name> --port=<local-port>
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```shell
+telepresence intercept <service-name> --port=<local-port> --preview-url=false
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept <service-name> --port=<local-port> --preview-url=false
+Using Deployment <name>
+intercepted
+    Intercept name: <name>
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:<local-port>
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<name>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster public IP>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <laptop-user> @ <laptop-hostname>
+```
+
+Finally, run `telepresence leave <name of intercept>` to stop the intercept.
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.
+
+| Flag             | Description                                                       | Required |
+|------------------|-------------------------------------------------------------------|----------|
+| `--ingress-host` | The IP address for the ingress                                    | yes      |
+| `--ingress-port` | The port for the ingress                                          | yes      |
+| `--ingress-tls`  | Whether TLS should be used                                        | no       |
+| `--ingress-l5`   | Whether a different IP address should be used in request headers  | no       |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself. To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier>
+Using Deployment <name>
+intercepted
+    Intercept name         : <name>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above, and it will change which
+service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag.
So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout-<generated-hash> --port <local-port> --service echo-stable
+Using ReplicaSet echo-rollout-<generated-hash>
+intercepted
+    Intercept name    : echo-rollout-<generated-hash>
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Intercepting multiple ports
+
+It is possible to intercept more than one service and/or service port that are using the same workload. You do this
+by creating more than one intercept that identifies the same workload using the `--workload` flag.
+
+Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
+targeting the same `multi-echo` deployment.
+
+```console
+$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-http
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8080
+    Service Port Identifier: http
+    Volume Mount Point     : /tmp/telfs-893700837
+    Intercepting           : all TCP requests
+    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-grpc
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8443
+    Service Port Identifier: grpc
+    Volume Mount Point     : /tmp/telfs-1277723591
+    Intercepting           : all TCP requests
+    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
+Using Deployment <name>
+intercepted
+    Intercept name         : <name>
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:<local-port>
+    Service Port Identifier: <service-port-identifier>
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod=<port1> --to-pod=<port2>`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services)
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+<Alert severity="info">
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+</Alert>
+
+<Alert severity="info">
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
+</Alert>
+
+<Alert severity="info">
+Intercepting headless services without a selector is not supported.
+</Alert>
+
+## Sharing intercepts with teammates
+
+Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You
+can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts),
+picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. Once saved,
+the intercept command will be easily accessible to all your teammates. Note that this requires the free enhanced
+client to be installed and for you to be logged in (`telepresence login`).
+
+To instantiate an intercept based on a saved intercept, simply run
+`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a
+saved intercept in Ambassador Cloud and will use it if found; otherwise, an error will be returned.
+
+Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).
+
+## Specifying the intercept traffic target
+
+By default, it's assumed that your local app is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP
+at the port given by `--port`. If you wish to change this behavior and send traffic to a different IP address, you can use the `--address` parameter
+to `telepresence intercept`. Say your machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`.
You would run this as:
+
+```console
+$ telepresence intercept echo-easy --address 172.16.0.19 --port 8080
+Using Deployment echo-easy
+   Intercept name         : echo-easy
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 172.16.0.19:8080
+   Service Port Identifier: proxied
+   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: 8e0dd8ea-b55a-43bd-ad04-018b9de9cfab:echo-easy'
+   Preview URL            : https://laughing-curran-5375.preview.edgestack.me
+   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
+```
diff --git a/docs/telepresence/2.13/reference/intercepts/index.md b/docs/telepresence/2.13/reference/intercepts/index.md
new file mode 100644
index 000000000..5b317aeec
--- /dev/null
+++ b/docs/telepresence/2.13/reference/intercepts/index.md
@@ -0,0 +1,61 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, the Telepresence Traffic Manager ensures
+that a Traffic Agent has been injected into the intercepted workload.
+The injection is triggered by a Kubernetes Mutating Webhook and will
+only happen once. The Traffic Agent is responsible for redirecting
+intercepted traffic to the developer's workstation.
+
+An intercept is either global or personal.
+
+### Global intercept
+
+This intercept will intercept all `tcp` and/or `udp` traffic to the
+intercepted service and send all of that traffic down to the developer's
+workstation. This means that a global intercept will affect all users of
+the intercepted service.
+
+### Personal intercept
+
+This intercept will intercept specific HTTP requests, allowing other HTTP
+requests through to the regular service. The selection is based on HTTP
+headers or paths, and allows for intercepts which only intercept traffic
+tagged as belonging to a given developer.
+
+There are two ways of configuring an intercept:
+- from the [CLI](./cli) directly
+- from an [Intercept Specification](./specs)
+
+## Intercept behavior when using single-user versus team mode
+
+Switching the Traffic Manager from `single-user` mode to `team` mode changes
+the Telepresence defaults in two ways.
+
+First, in team mode, Telepresence will require that the user is logged in to
+Ambassador Cloud, or is using an api-key. Team mode also causes Telepresence
+to default to a personal intercept using `--http-header=auto --http-path-prefix=/`.
+Personal intercepts are important for working in a shared cluster with teammates,
+and are important for the preview URL functionality below. See `telepresence intercept --help`
+for information on using the `--http-header` and `--http-path-xxx` flags to
+customize which requests are intercepted.
+
+Secondly, team mode causes Telepresence to default to `--preview-url=true`. This
+tells Telepresence to take advantage of Ambassador Cloud to create a preview URL
+for this intercept, creating a shareable URL that automatically sets the
+appropriate headers to have requests coming from the preview URL be
+intercepted.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+<Alert severity="info">
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+</Alert>
diff --git a/docs/telepresence/2.13/reference/intercepts/manual-agent.md b/docs/telepresence/2.13/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.13/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
+as very strict company security policies preventing it.
+
+<Alert severity="warning">
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable.
+Try the Mutating Webhook before proceeding.
+</Alert>
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file has
+the same name as the service, and no extension:
+
+```console
+$ telepresence genyaml config --workload my-service -o /tmp/my-service
+$ cat /tmp/my-service
+agentImage: docker.io/datawire/tel2:2.7.0
+agentName: my-service
+containers:
+- Mounts: null
+  envPrefix: A_
+  intercepts:
+  - agentPort: 9900
+    containerPort: 8080
+    protocol: TCP
+    serviceName: my-service
+    servicePort: 80
+    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
+  mountPoint: /tel_app_mounts/echo-container
+  name: echo-container
+logLevel: info
+managerHost: traffic-manager.ambassador
+managerPort: 8081
+manual: true
+namespace: default
+workloadKind: Deployment
+workloadName: my-service
+```
+
+Next, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
+$ cat /tmp/my-service-agent.yaml
+args:
+- agent
+env:
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: status.podIP
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+- mountPath: /tel_app_exports
+  name: export-volume
+```
+
+Next, generate the YAML for the init-container:
+
+```console
+$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
+$ cat /tmp/my-service-init.yaml
+args:
+- agent-init
+image: docker.io/datawire/tel2:2.7.0-beta.12
+name: tel-agent-init
+resources: {}
+securityContext:
+  capabilities:
+    add:
+    - NET_ADMIN
+volumeMounts:
+- mountPath: /etc/traffic-agent
+  name: traffic-config
+```
+
+Next, generate the YAML for the volumes:
+
+```console
+$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
+$ cat /tmp/my-service-volume.yaml
+- downwardAPI:
+    items:
+    - fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.annotations
+      path: annotations
+  name: traffic-annotations
+- configMap:
+    items:
+    - key: my-service
+      path: config.yaml
+    name: telepresence-agents
+  name: traffic-config
+- emptyDir: {}
+  name: export-volume
+```
+
+<Alert severity="info">
+Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
+</Alert>
+
+### 2. Creating (or updating) the configmap
+
+The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
+modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:
+
+```console
+$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
+```
+
+If it already exists, new entries can be added under the `data` key using `kubectl edit configmap telepresence-agents`.
+
+### 3. Injecting the YAML into the Deployment
+
+You need to edit the `Deployment` YAML so that it includes the generated container, init-container, and volumes. These are placed as elements
+of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation
+`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following:
+
+```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: "my-service"
+   labels:
+     service: my-service
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       service: my-service
+   template:
+     metadata:
+       labels:
+         service: my-service
++      annotations:
++        telepresence.getambassador.io/manually-injected: "true"
+     spec:
+       containers:
+         - name: echo-container
+           image: jmalloc/echo-server
+           ports:
+             - containerPort: 8080
+           resources: {}
++        - args:
++            - agent
++          env:
++            - name: _TEL_AGENT_POD_IP
++              valueFrom:
++                fieldRef:
++                  apiVersion: v1
++                  fieldPath: status.podIP
++          image: docker.io/datawire/tel2:2.7.0-beta.12
++          name: traffic-agent
++          ports:
++            - containerPort: 9900
++              protocol: TCP
++          readinessProbe:
++            exec:
++              command:
++                - /bin/stat
++                - /tmp/agent/ready
++          resources: { }
++          volumeMounts:
++            - mountPath: /tel_pod_info
++              name: traffic-annotations
++            - mountPath: /etc/traffic-agent
++              name: traffic-config
++            - mountPath: /tel_app_exports
++              name: export-volume
++      initContainers:
++        - args:
++            - agent-init
++          image: docker.io/datawire/tel2:2.7.0-beta.12
++          name: tel-agent-init
++          resources: { }
++          securityContext:
++            capabilities:
++              add:
++                - NET_ADMIN
++          volumeMounts:
++            - mountPath: /etc/traffic-agent
++              name: traffic-config
++      volumes:
++        - downwardAPI:
++            items:
++              - fieldRef:
++                  apiVersion: v1
++                  fieldPath: metadata.annotations
++                path: annotations
++          name: traffic-annotations
++        - configMap:
++            items:
++              - key: my-service
++                path: config.yaml
++            name: telepresence-agents
++          name: traffic-config
++        - emptyDir: { }
++          name: export-volume
+```
diff --git a/docs/telepresence/2.13/reference/intercepts/specs.md b/docs/telepresence/2.13/reference/intercepts/specs.md
new file mode 100644
index 000000000..cc6d7abd8
--- /dev/null
+++ b/docs/telepresence/2.13/reference/intercepts/specs.md
@@ -0,0 +1,338 @@
+# Configuring intercepts using specifications
+
+This page references the different options available in the Telepresence intercept specification.
+
+With Telepresence, you can provide a file that defines how an intercept should work.
+
+## Specification
+
+Your intercept specification is where you can create a standard, easy-to-use configuration to run pre- and post-tasks, start an intercept, and start your local application to handle the intercepted traffic.
+
+There are many ways to configure your specification to suit your needs; the table below shows the possible options within your specification,
+and you can see the spec's schema, with all available options and formats, [here](#ide-integration).
+
+| Options                         | Description                                                                                              |
+|---------------------------------|----------------------------------------------------------------------------------------------------------|
+| [name](#name)                   | Name of the specification.                                                                               |
+| [connection](#connection)       | Connection properties to use when Telepresence connects to the cluster.                                 |
+| [handlers](#handlers)           | Local processes to handle traffic and/or setup.                                                         |
+| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete. |
+| [workloads](#workloads)         | Remote workloads that are intercepted, keyed by workload name.                                          |
+
+### Name
+
+The name is optional. If you don't specify a name, the filename of the specification file is used.
+
+```yaml
+name: echo-server-spec
+```
+
+### Connection
+
+The connection option is used to define how Telepresence connects to your cluster.
+
+```yaml
+connection:
+  context: "shared-cluster"
+  mappedNamespaces:
+    - "my_app"
+```
+
+You can pass the most common parameters from the `telepresence connect` command (`telepresence connect --help`) using a camelCase format.
+
+Some of the most commonly used options include:
+
+| Options          | Type        | Format                  | Description                                              |
+|------------------|-------------|-------------------------|----------------------------------------------------------|
+| context          | string      | N/A                     | The Kubernetes context to use                            |
+| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with  |
+
+### Handlers
+
+A handler is code running locally.
+
+It can receive traffic for an intercepted service, or it can set up prerequisites to run before/after the intercept itself.
+
+When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third-party service, ...) running on your machine.
+A handler can be a Docker container, or an application running natively.
+
+The sample below creates an intercept handler, giving it the name `echo-server` and using a Docker container. The container will
+automatically have access to the ports, environment, and mounted directories of the intercepted container.
+
+**Note:** The `ports` field is important for an intercept handler running in Docker: it indicates which ports should be exposed to the host. If you want to access the handler locally (to attach a debugger to your container, for example), this field must be provided.
+
+```yaml
+handlers:
+  - name: echo-server
+    environment:
+      - name: PORT
+        value: "8080"
+    docker:
+      image: jmalloc/echo-server:latest
+      ports:
+        - 8080
+```
+
+If you don't want to use Docker containers, you can still configure your handlers to start via a regular script.
+The snippet below shows how to create a handler called `echo-server` that sets an environment variable of `PORT=8080`
+and starts the application.
+
+```yaml
+handlers:
+  - name: echo-server
+    environment:
+      - name: PORT
+        value: "8080"
+    script:
+      run: bin/echo-server
+```
+
+Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example,
+simulate an intercepted service going down:
+
+```yaml
+handlers:
+  - name: no-op
+```
+
+The table below defines the parameters that can be used within the handlers section.
+
+| Options              | Type     | Format                 | Description                                                                     |
+|----------------------|----------|------------------------|----------------------------------------------------------------------------------|
+| name                 | string   | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler that the intercepts use to reference it       |
+| environment          | map list | N/A                    | Defines environment variables within your handler                              |
+| environment[*].name  | string   | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable                                           |
+| environment[*].value | string   | N/A                    | The value for the environment variable                                         |
+| [script](#script)    | map      | N/A                    | Tells the handler to run as a script; mutually exclusive with docker           |
+| [docker](#docker)    | map      | N/A                    | Tells the handler to run as a Docker container; mutually exclusive with script |
+
+#### Script
+
+The handler's script element defines the parameters:
+
+| Options | Type   | Format        | Description                                                                                                                  |
+|---------|--------|---------------|---------------------------------------------------------------------------------------------------------------------------------|
+| run     | string | N/A           | The script to run. Can be multi-line                                                                                        |
+| shell   | string | bash\|zsh\|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable |
+
+#### Docker
+
+The handler's docker element defines the parameters. The `build` and `image` parameters are mutually exclusive:
+
+| Options         | Type        | Format | Description                                                                                                                               |
+|-----------------|-------------|--------|-----------------------------------------------------------------------------------------------------------------------------------------|
+| [build](#build) | map         | N/A    | Defines how to build the image from source using the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command |
+| image           | string      | image  | Defines which image to be used                                                                                                            |
+| ports           | int list    | N/A    | The ports which should be exposed to the host                                                                                            |
+| options         | string list | N/A    | Options for docker run [options](https://docs.docker.com/engine/reference/commandline/run/#options)                                      |
+| command         | string      | N/A    | Optional command to run                                                                                                                   |
+| args            | string list | N/A    | Optional command arguments                                                                                                                |
+
+#### Build
+
+The docker build element defines the parameters:
+
+| Options | Type        | Format | Description                                                                                  |
+|---------|-------------|--------|-------------------------------------------------------------------------------------------------|
+| context | string      | N/A    | Defines either a path to a directory containing a Dockerfile, or a URL to a Git repository |
+| args    | string list | N/A    | Additional arguments for the docker build command.                                          |
+
+For additional information on these parameters, please check the Docker [documentation](https://docs.docker.com/engine/reference/commandline/run).
+
+### Prerequisites
+
+When creating an intercept specification, there is an option to include prerequisites.
+
+Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.
+
+Prerequisites is an array, so it can handle many options prior to starting your intercept and running your intercept handlers.
+The elements of the `prerequisites` array correspond to [`handlers`](#handlers).
+
+The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts,
+the second will be run after cleaning up the intercepts.
+
+If a prerequisite's create handler succeeds, the corresponding delete handler is guaranteed to run even if the other steps in the spec fail.
+
+```yaml
+prerequisites:
+  - create: build-binary
+    delete: rm-binary
+```
+
+
+The table below defines the parameters available within the prerequisites section.
+
+| Options | Description                                        |
+|---------|--------------------------------------------------- |
+| create  | The name of a handler to run before the intercept  |
+| delete  | The name of a handler to run after the intercept   |
+
+
+### Workloads
+
+Workloads define the services in your cluster that will be intercepted.
+
+The example below creates an intercept on a service called `echo-server` on port 8080.
+It creates a personal intercept with the header `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.
+
+```yaml
+workloads:
+  # You can define one or more workload(s)
+  - name: echo-server
+    intercepts:
+      # You can define one or more intercept(s)
+      - headers:
+          - name: x-intercept-id
+            value: foo
+        port: 8080
+        handler: echo-server
+```
+
+This table defines the parameters available within a workload.
+
+| Options    | Type                          | Format                  | Description                                          | Default |
+|------------|-------------------------------|-------------------------|------------------------------------------------------|---------|
+| name       | string                        | [a-z][a-z0-9-]*         | Name of the workload to intercept                    | N/A     |
+| namespace  | string                        | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept               | N/A     |
+| intercepts | [intercept](#intercepts) list | N/A                     | The list of intercepts associated with the workload  | N/A     |
+
+#### Intercepts
+
+This table defines the parameters available for each intercept.
+
+| Options    | Type                   | Format               | Description                                                              | Default        |
+|------------|------------------------|----------------------|--------------------------------------------------------------------------|----------------|
+| enabled    | boolean                | N/A                  | If set to false, disables this intercept.                                | true           |
+| headers    | [header](#header) list | N/A                  | Headers that will filter the intercept.                                  | Auto generated |
+| service    | string                 | [a-z][a-z0-9-]{1,62} | Name of the service to intercept                                         | N/A            |
+| localPort  | integer\|string        | 0-65535              | The port for the service which is intercepted                            | N/A            |
+| port       | integer                | 0-65535              | The port the service in the cluster is running on                        | N/A            |
+| pathPrefix | string                 | N/A                  | Path prefix filter for the intercept. Defaults to "/"                    | /              |
+| previewURL | boolean                | N/A                  | Determines whether a preview URL should be created                       | true           |
+| banner     | boolean                | N/A                  | Used with the preview URL option; displays a banner on the preview page  | true           |
+
+##### Header
+
+You can define headers to filter which requests should end up on your machine when intercepting. 
+
+| Options | Type   | Format | Description         | Default |
+|---------|--------|--------|---------------------|---------|
+| name    | string | N/A    | Name of the header  | N/A     |
+| value   | string | N/A    | Value of the header | N/A     |
+
+Telepresence specs also support dynamic headers with **variables**:
+
+```yaml
+intercepts:
+  - headers:
+      - name: test-{{ .Telepresence.Username }}
+        value: "{{ .Telepresence.Username }}"
+```
+
+| Options               | Type   | Description                            |
+|-----------------------|--------|----------------------------------------|
+| Telepresence.Username | string | The name of the user running the spec  |
+
+
+### Running your specification
+After you've written your intercept specification you will want to run it.
+
+To start your intercept, use this command:
+
+```bash
+telepresence intercept run <path/to/spec-file>
+```
+This will validate and run your spec. In case you just want to validate it, you can do so by using this command:
+
+```bash
+telepresence intercept validate <path/to/spec-file>
+```
+
+### Using and sharing your specification as a CRD
+
+If you want to share specifications across your team or your organization, you can save them as CRDs inside your cluster.
+
+
+  The Intercept Specification CRD requires Kubernetes 1.22 or higher. If you are using an older cluster you will
+  need to install using helm directly, and use the --disable-openapi-validation flag.
+
+
+1. Install the CRD object in your cluster (one-time installation):
+
+   ```bash
+   telepresence helm install --crds
+   ```
+
+1. Then you need to deploy the specification in your cluster as a CRD:
+
+   ```yaml
+   apiVersion: getambassador.io/v1alpha2
+   kind: InterceptSpecification
+   metadata:
+     name: my-crd-spec
+     namespace: my-crd-namespace
+   spec:
+     {intercept specification}
+   ```
+
+   So the `echo-server` example looks like this:
+
+   ```bash
+   kubectl apply -f - <<EOF
+   apiVersion: getambassador.io/v1alpha2
+   kind: InterceptSpecification
+   metadata:
+     name: my-crd-spec
+     namespace: my-crd-namespace
+   spec:
+     {echo-server intercept specification}
+   EOF
+   ```
+
+To authenticate with the cluster as a restricted user, use a kubeconfig whose user entry supplies a Service Account token. The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<random-suffix>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After saving such a kubeconfig as `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administrating Telepresence
+
+Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. 
The following permissions are needed for the installation and use of Telepresence:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telepresence-admin
+  namespace: default
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-admin-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "pods/log"]
+    verbs: ["get", "list", "create", "delete", "watch"]
+  - apiGroups: [""]
+    resources: ["services"]
+    verbs: ["get", "list", "update", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+  - apiGroups: ["apps"]
+    resources: ["deployments", "replicasets", "statefulsets"]
+    verbs: ["get", "list", "update", "create", "delete", "watch"]
+  - apiGroups: ["rbac.authorization.k8s.io"]
+    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["create"]
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "delete"]
+    resourceNames: ["telepresence-agents"]
+  - apiGroups: [""]
+    resources: ["namespaces"]
+    verbs: ["get", "list", "watch", "create"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "create", "list", "delete"]
+  - apiGroups: [""]
+    resources: ["serviceaccounts"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: ["admissionregistration.k8s.io"]
+    resources: ["mutatingwebhookconfigurations"]
+    verbs: ["get", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["list", "get", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-clusterrolebinding
+subjects:
+  - name: telepresence-admin
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-admin-role
+  kind: ClusterRole
+```
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).
+
+By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. 
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user                          # Update value for appropriate user name
+  namespace: ambassador                  # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+# For gather-logs command
+- apiGroups: [""]
+  resources: ["pods/log"]
+  verbs: ["get"]
+- apiGroups: [""]
+  resources: ["pods"]
+  verbs: ["list"]
+# Needed in order to maintain a list of workloads
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources: ["namespaces", "services"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+### Traffic Manager connect permission
+
+In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
+in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
+
+```yaml
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: traffic-manager-connect
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["pods/portforward"]
+    verbs: ["create"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: traffic-manager-connect
+subjects:
+  - name: telepresence-test-developer
+    kind: ServiceAccount
+    namespace: default
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: traffic-manager-connect
+  kind: Role
+```
+
+## Namespace only telepresence user access
+
+RBAC for multi-tenant scenarios, where multiple dev teams share a single cluster and users are constrained to specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user                          # Update value for appropriate user name
+  namespace: tp-namespace                # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace                # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace                # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user                          # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above. 
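+
+To sanity-check one of these configurations before handing it to a user, `kubectl auth can-i` can impersonate the ServiceAccount. A minimal sketch, assuming the `tp-user`, `tp-namespace`, and `ambassador` names from the examples above (substitute the subject that your RoleBindings actually grant):
+
+```bash
+# Can the user list services in their namespace? (Needed to list intercept targets.)
+kubectl auth can-i list services \
+  --as=system:serviceaccount:tp-namespace:tp-user \
+  --namespace tp-namespace
+
+# Can the user create the port-forward to the traffic-manager in its namespace?
+kubectl auth can-i create pods --subresource=portforward \
+  --as=system:serviceaccount:ambassador:tp-user \
+  --namespace ambassador
+```
+
+Each command prints `yes` or `no`, which makes it easy to script a quick audit of the rules above.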
diff --git a/docs/telepresence/2.13/reference/restapi.md b/docs/telepresence/2.13/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence/2.13/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active
+2. The "/healthz" endpoint must respond with OK
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+* Trying ::1:9980... 
+
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted. Matching headers means that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers. 
+
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence/2.13/reference/routing.md b/docs/telepresence/2.13/reference/routing.md
new file mode 100644
index 000000000..cc88490a0
--- /dev/null
+++ b/docs/telepresence/2.13/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or with a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
+[cluster DNS configuration](../cluster-config/#dns).
+
+#### Cluster side DNS lookups
+The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Each such file corresponds to a domain and contains the port number of the Telepresence resolver. Telepresence creates one file for each of the currently mapped namespaces and for each entry in the `includeSuffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). 
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS-records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't; in that case, Telepresence will fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. 
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons why the connection origin matters. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules; if the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS-query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and responded to with an NXNAME record. This detection is best-effort and may not be completely accurate in all situations: there's a chance that the DNS-resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions would occur if the VIF simply dispatched such attempts on to the cluster.
+
+The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. 
In versions prior to 2.3.2, this was accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2, this changed: the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection, and the traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:
+

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.13/reference/tun-device.md b/docs/telepresence/2.13/reference/tun-device.md
new file mode 100644
index 000000000..af7e3828c
--- /dev/null
+++ b/docs/telepresence/2.13/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+## TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP traffic.
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle` but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a Pod is intercepted and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.13/reference/volume.md b/docs/telepresence/2.13/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/2.13/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service-name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. 
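+
+As a quick check that the mount works, you can list the remotely mounted files from inside that subshell. A minimal sketch, assuming the intercepted Pod has the default ServiceAccount token mounted (which Kubernetes normally provides):
+
+```bash
+# The Pod's file system appears under the mount point chosen above (/tmp here),
+# so the standard Kubernetes secret mount is visible at:
+ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
+
+# For example, read the namespace the Pod runs in:
+cat /tmp/var/run/secrets/kubernetes.io/serviceaccount/namespace
+```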
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or via the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service-name> --port <port> --mount=true -- /bin/bash
+Using Deployment <name>
+intercepted
+    Intercept name    : <name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the paths used by the code you run locally, whether from the subshell or from the intercept command, will need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable.
diff --git a/docs/telepresence/2.13/reference/vpn.md b/docs/telepresence/2.13/reference/vpn.md
new file mode 100644
index 000000000..91213babc
--- /dev/null
+++ b/docs/telepresence/2.13/reference/vpn.md
@@ -0,0 +1,155 @@
+
+
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf
+```
+
+#### Interpreting test results
+
+##### Case 1: VPN masked by cluster
+
+In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN
+routes:
+
+```
+❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
+  * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
+  * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
+```
+
+This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
+telepresence is connected.
+
+The ideal resolution in this case is to move the pods to a different subnet. This is possible,
+for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
+In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
+to reach hosts inside the VPC's `10.0.0.0/19`.
+
+However, it is not always possible to move the pods to a different subnet.
+In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain
+hosts from being masked.
+This might be particularly important for DNS resolution. In an AWS ClientVPN setup it is often
+customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):
+
+![Modify Client VPN Endpoint](../images/vpn-dns.png)
+
+If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
+cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values <values file>`, add a `client.routing`
+entry like so:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 10.0.0.2/32
+```
+
+##### Case 2: Cluster masked by VPN
+
+In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR
+that the VPN routes:
+
+```
+❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
+  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
+  * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
+```
+
+Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
+connected.
+
+As with the first case, the ideal resolution is to move the pods away, but this may not always
+be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR
+(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity.
+One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites)
+section for more on split-tunneling).
+
+Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
+to use never-proxy rules to whitelist hosts in the VPN:
+
+```
+❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
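+
+As in Case 1, those hosts can be added to the never-proxy list through the values passed to `telepresence helm install`. A minimal sketch, assuming a hypothetical `10.0.5.0/24` range of VPN-routed hosts that you still need to reach, and a values file named `vpn-values.yaml`:
+
+```bash
+# Write a values file that whitelists the VPN-routed hosts via never-proxy,
+# then upgrade the traffic-manager with it.
+cat > vpn-values.yaml <<EOF
+client:
+  routing:
+    neverProxySubnets:
+      - 10.0.5.0/24
+EOF
+telepresence helm install --upgrade --values vpn-values.yaml
+```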
diff --git a/docs/telepresence/2.13/release-notes/no-ssh.png b/docs/telepresence/2.13/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.13/release-notes/run-tp-in-docker.png b/docs/telepresence/2.13/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.2.png b/docs/telepresence/2.13/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.13/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.13/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.13/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.13/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.13/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.13/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.13/release-notes/tunnel.jpg b/docs/telepresence/2.13/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.13/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.13/releaseNotes.yml b/docs/telepresence/2.13/releaseNotes.yml new file mode 100644 index 000000000..1bccc3615 --- /dev/null +++ b/docs/telepresence/2.13/releaseNotes.yml @@ -0,0 +1,2339 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
+
+changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md
+
+items:
+  - version: 2.13.3
+    date: "2023-05-25"
+    notes:
+      - type: feature
+        title: Add imagePullSecrets to hooks
+        body: >-
+          Add .Values.hooks.curl.imagePullSecrets and the other hook imagePullSecrets settings to Helm values.
+        docs: https://github.com/telepresenceio/telepresence/pull/3079
+
+      - type: change
+        title: Change reinvocation policy to Never for the mutating webhook
+        body: >-
+          The default setting of the reinvocationPolicy for the mutating webhook dealing with agent injections changed from IfNeeded to Never.
+
+      - type: bugfix
+        title: Fix mounting fail of IAM roles for service accounts web identity token
+        body: >-
+          The eks.amazonaws.com/serviceaccount volume injected by EKS is now exported and remotely mounted during an intercept.
+        docs: https://github.com/telepresenceio/telepresence/issues/3166
+
+      - type: bugfix
+        title: Correct namespace selector for cluster versions with non-numeric characters
+        body: >-
+          The mutating webhook now correctly applies the namespace selector even if the cluster version contains non-numeric characters. For example, it can now handle versions such as Major:"1", Minor:"22+".
+        docs: https://github.com/telepresenceio/telepresence/pull/3184
+
+      - type: bugfix
+        title: Enable IPv6 on the telepresence docker network
+        body: >-
+          The "telepresence" Docker network will now propagate DNS AAAA queries to the Telepresence DNS resolver when it runs in a Docker container.
+        docs: https://github.com/telepresenceio/telepresence/issues/3179
+
+      - type: bugfix
+        title: Fix the crash when intercepting with --local-only and --docker-run
+        body: >-
+          Running telepresence intercept --local-only --docker-run no longer results in a panic.
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Fix incorrect error message with local-only mounts
+        body: >-
+          Running telepresence intercept --local-only --mount false no longer results in an incorrect error message saying "a local-only intercept cannot have mounts".
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Specify port in hook URLs
+        body: >-
+          The helm chart now correctly handles a custom agentInjector.webhook.port, which was previously not set in hook URLs.
+        docs: https://github.com/telepresenceio/telepresence/pull/3161
+
+      - type: bugfix
+        title: Fix wrong default value for disableGlobal and agentArrival
+        body: >-
+          Params .intercept.disableGlobal and .timeouts.agentArrival are now correctly honored.
+
+  - version: 2.13.2
+    date: "2023-05-12"
+    notes:
+      - type: bugfix
+        title: Authenticator Service Update
+        body: >-
+          Replaced / characters with a - when the authenticator service creates the kubeconfig in the Telepresence cache.
+        docs: https://github.com/telepresenceio/telepresence/pull/3167
+
+      - type: bugfix
+        title: Enhanced DNS Search Path Configuration for Windows (Auto, PowerShell, and Registry Options)
+        body: >-
+          Configurable strategy (auto, powershell, or registry) to set the global DNS search path on Windows. Default is auto, which means try powershell first and, if it fails, fall back to registry.
+        docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+      - type: feature
+        title: Configurable Traffic Manager Timeout in values.yaml
+        body: >-
+          The timeout for the traffic manager to wait for the traffic agent to arrive can now be configured in the values.yaml file using timeouts.agentArrival. 
The default timeout is still 30 seconds.
+ docs: https://github.com/telepresenceio/telepresence/pull/3148
+
+ - type: bugfix
+ title: Enhanced Local Cluster Discovery for macOS and Windows
+ body: >-
+ The automatic discovery of a local container-based cluster (minikube or kind), used when the Telepresence daemon runs in a container, now works on macOS and Windows, and with different profiles, ports, and cluster names.
+ docs: https://github.com/telepresenceio/telepresence/pull/3165
+
+ - type: bugfix
+ title: FTP Stability Improvements
+ body: >-
+ Multiple simultaneous intercepts can now transfer large files bidirectionally and in parallel.
+ docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+ - type: bugfix
+ title: Intercepted Persistent Volume Pods No Longer Cause Timeouts
+ body: >-
+ Pods using persistent volumes no longer cause timeouts when intercepted.
+ docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+ - type: bugfix
+ title: Successful 'Telepresence Connect' Regardless of DNS Configuration
+ body: >-
+ Ensure that `telepresence connect` succeeds even when DNS isn't configured correctly.
+ docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+ - type: bugfix
+ title: Traffic-Manager's 'Close of Closed Channel' Panic Issue
+ body: >-
+ The traffic-manager would sometimes panic with a "close of closed channel" message and exit. This has been fixed.
+ docs: https://github.com/telepresenceio/telepresence/pull/3160
+
+ - type: bugfix
+ title: Traffic-Manager's Type Cast Panic Issue
+ body: >-
+ The traffic-manager would sometimes panic and exit after some time due to a type cast error. This has been fixed.
+ docs: https://github.com/telepresenceio/telepresence/pull/3153
+
+ - type: bugfix
+ title: Login Friction
+ body: >-
+ Improved login behavior by clearing the saved intermediary API Keys when a user logs in, forcing Telepresence to generate new ones.
+
+ - version: 2.13.1
+ date: "2023-04-20"
+ notes:
+ - type: change
+ title: Update ambassador-telepresence-agent to version 1.13.13
+ body: >-
+ The Ambassador Telepresence Agent is updated to version 1.13.13, fixing a malfunction caused by an earlier update that compressed the executable file.
+
+ - version: 2.13.0
+ date: "2023-04-18"
+ notes:
+ - type: feature
+ title: Better kind / minikube network integration with docker
+ body: >-
+ The Docker network used by a Kind or Minikube (using the "docker" driver) installation is automatically detected and connected to a Docker container running the Telepresence daemon.
+ docs: https://github.com/telepresenceio/telepresence/pull/3104
+
+ - type: feature
+ title: New mapped namespace output
+ body: >-
+ Mapped namespaces are included in the output of the telepresence status command.
+
+ - type: feature
+ title: Setting of the target IP of the intercept
+ docs: reference/intercepts/cli
+ body: >-
+ There's a new --address flag to the intercept command allowing users to set the target IP of the intercept.
+
+ - type: feature
+ title: Multi-tenant support
+ body: >-
+ The client will no longer need cluster-wide permissions when connected to a namespace-scoped Traffic Manager.
+
+ - type: bugfix
+ title: Cluster domain resolution bugfix
+ body: >-
+ The Traffic Manager now uses a more reliable way to determine the cluster domain.
+ docs: https://github.com/telepresenceio/telepresence/issues/3114
+
+ - type: bugfix
+ title: Windows DNS
+ body: >-
+ DNS on Windows is now more reliable and performant.
+ docs: https://github.com/telepresenceio/telepresence/issues/2939
+
+ - type: bugfix
+ title: Agent injection with huge amount of deployments
+ body: >-
+ The agent is now correctly injected even when a large number of deployments start at the same time.
+ docs: https://github.com/telepresenceio/telepresence/issues/3025
+
+ - type: bugfix
+ title: Self-contained kubeconfig with Docker
+ body: >-
+ The kubeconfig is made self-contained before the Telepresence daemon runs in a Docker container.
+ docs: https://github.com/telepresenceio/telepresence/issues/3099
+
+ - type: bugfix
+ title: Version command error
+ body: >-
+ The version command no longer throws an error when there is no kubeconfig file defined.
+ docs: https://github.com/telepresenceio/telepresence/issues/3095
+
+ - type: change
+ title: Intercept Spec CRD v1alpha1 deprecated
+ body: >-
+ Please use version v1alpha2 of the Intercept Specification CRD.
+
+ - version: 2.12.2
+ date: "2023-04-04"
+ notes:
+ - type: security
+ title: Update Golang build version to 1.20.3
+ body: >-
+ Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538.
+ - version: 2.12.1
+ date: "2023-03-22"
+ notes:
+ - type: feature
+ title: Additions to gather-logs
+ body: >-
+ Telepresence now includes the kubeauth logs when running
+ the gather-logs command.
+ - type: bugfix
+ title: Airgapped Clusters can once again create personal intercepts
+ body: >-
+ Telepresence on airgapped clusters regained the ability to use the
+ skipLogin config option to bypass login and create personal intercepts.
+ - type: bugfix
+ title: Environment Variables are now propagated to kubeauth
+ body: >-
+ Telepresence now propagates environment variables properly
+ to the kubeauth-foreground to be used with cluster authentication.
+ - version: 2.12.0
+ date: "2023-03-20"
+ notes:
+ - type: feature
+ title: Intercept spec can build images from source
+ body: >-
+ Handlers in the Intercept Specification can now specify a build property instead of an image so that
+ the image is built when the spec runs.
+ docs: reference/intercepts/specs#build
+ - type: feature
+ title: Improve volume mount experience for Windows and Mac users
+ body: >-
+ On macOS and Windows platforms, the installation of sshfs or platform-specific FUSE implementations such as macFUSE or WinFSP is
+ no longer needed when running an Intercept Specification that uses Docker images.
+ docs: reference/intercepts/specs
+ - type: feature
+ title: Check for service connectivity independently from pod connectivity
+ body: >-
+ Telepresence now enables you to check for a service and pod's connectivity independently, so that it can proxy one without proxying the other.
+ docs: https://github.com/telepresenceio/telepresence/issues/2911
+ - type: bugfix
+ title: Fix cluster authentication when running the telepresence daemon in a docker container.
+ body: >-
+ Authentication to EKS and GKE clusters has been fixed (Kubernetes >= v1.26).
+ docs: https://github.com/telepresenceio/telepresence/pull/3055
+ - type: bugfix
+ title: The Intercept spec image pattern now allows nested and sha256 images.
+ body: >-
+ Telepresence Intercept Specifications now handle passing nested images or the sha256 of an image.
+ docs: https://github.com/telepresenceio/telepresence/issues/3064
+ - type: bugfix
+ body: >-
+ Telepresence will no longer panic when a CNAME does not contain .svc in it.
+ title: Fix panic when CNAME of kubernetes.default doesn't contain .svc
+ docs: https://github.com/telepresenceio/telepresence/issues/3015
+ - version: 2.11.1
+ date: "2023-02-27"
+ notes:
+ - type: bugfix
+ title: Multiple architectures
+ docs: https://github.com/telepresenceio/telepresence/issues/3043
+ body: >-
+ The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now
+ works for both amd64 and arm64.
+ - type: bugfix
+ title: Ambassador agent Helm chart duplicates
+ docs: https://github.com/telepresenceio/telepresence/issues/3046
+ body: >-
+ Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD.
+ - version: 2.11.0
+ date: "2023-02-22"
+ notes:
+ - type: feature
+ title: Intercept specification
+ body: >-
+ It is now possible to leverage the intercept specification to spin up your environment without extra tools.
+ - type: feature
+ title: Support for arm64 (Apple Silicon)
+ body: >-
+ The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as
+ multi-architecture images and can run natively on both linux/amd64 and linux/arm64.
+ - type: bugfix
+ title: Connectivity check can break routing in VPN setups
+ docs: https://github.com/telepresenceio/telepresence/issues/3006
+ body: >-
+ The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently,
+ it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity.
+ - type: bugfix
+ title: VPN routes not detected by telepresence test-vpn on macOS
+ docs: https://github.com/telepresenceio/telepresence/pull/3038
+ body: >-
+ The telepresence test-vpn did not include routes of type link when checking for subnet
+ conflicts.
+ - version: 2.10.5
+ date: "2023-02-06"
+ notes:
+ - type: change
+ title: mTLS secrets mount
+ body: >-
+ mTLS Secrets will now be mounted into the traffic agent, instead of being read by it from the API.
+ This is only applicable to users of team mode and the proprietary agent.
+ docs: reference/cluster-config#tls
+ - type: bugfix
+ title: Daemon reconnection fix
+ body: >-
+ Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost.
+ - version: 2.10.4
+ date: "2023-01-20"
+ notes:
+ - type: bugfix
+ title: Backward compatibility restored
+ body: >-
+ Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older.
+ - type: bugfix
+ title: Saved intercepts now work with preview URLs.
+ body: >-
+ Preview URLs are now included/excluded correctly when using saved intercepts.
+ - version: 2.10.3
+ date: "2023-01-17"
+ notes:
+ - type: bugfix
+ title: Saved intercepts
+ body: >-
+ Fixed an issue that caused saved intercepts to not be fully interpreted by Telepresence.
+ - type: bugfix
+ title: Traffic manager restart during upgrade to team mode
+ body: >-
+ Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode.
+ docs: https://github.com/telepresenceio/telepresence/pull/2979
+ - version: 2.10.2
+ date: "2023-01-16"
+ notes:
+ - type: bugfix
+ title: Version consistency in Helm commands
+ body: >-
+ Ensure that CLI and user-daemon binaries are the same version when running
telepresence helm install
+ or telepresence helm upgrade.
+ docs: https://github.com/telepresenceio/telepresence/pull/2975
+ - type: bugfix
+ title: Fix `--use-saved-intercept` flag
+ body: >-
+ Fixed an issue that prevented the --use-saved-intercept flag from working.
+ - version: 2.10.1
+ date: "2023-01-11"
+ notes:
+ - type: bugfix
+ title: Release Process
+ body: >-
+ Fixed a regex in our release process that prevented 2.10.0 promotion.
+ - version: 2.10.0
+ date: "2023-01-11"
+ notes:
+ - type: feature
+ title: Team Mode and Single User Mode
+ body: >-
+ The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts.
+ - type: feature
+ title: Added `--set` Flag Support to `telepresence helm install` and `upgrade`
+ body: >-
+ The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags.
+ - type: feature
+ title: Added Image Pull Secrets to Helm Chart
+ body: >-
+ Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`.
+ - type: change
+ title: Rename Configmap
+ body: >-
+ The configmap `traffic-manager-clients` has been renamed to `traffic-manager`.
+ - type: change
+ title: Webhook Namespace Field
+ body: >-
+ If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`.
+ docs: https://github.com/telepresenceio/telepresence/issues/2913
+ - type: change
+ title: Rename Webhook
+ body: >-
+ The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster.
+ - type: change
+ title: OSS Binaries
+ body: >-
+ The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client.
+ docs: https://github.com/telepresenceio/telepresence/pull/2943
+ - type: bugfix
+ title: Fix Panic Using `--docker-run`
+ body: >-
+ Telepresence no longer panics when `--docker-run` is combined with `--name ` instead of `--name=`.
+ docs: https://github.com/telepresenceio/telepresence/issues/2953
+ - type: bugfix
+ title: Stop assuming cluster domain
+ body: >-
+ Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc".
+ docs: https://github.com/telepresenceio/telepresence/pull/2959
+ - type: bugfix
+ title: Uninstall hook timeout
+ body: >-
+ A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager.
+ docs: https://github.com/telepresenceio/telepresence/pull/2937
+ - type: bugfix
+ title: Uninstall hook check
+ body: >-
+ The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
+ docs: https://github.com/telepresenceio/telepresence/issues/2954
+ - version: 2.9.5
+ date: "2022-12-08"
+ notes:
+ - type: security
+ title: Update to golang v1.19.4
+ body: >-
+ Apply security updates by updating to golang v1.19.4.
+ docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU
+ - type: bugfix
+ title: GCE authentication
+ body: >-
+ Fixed a regression introduced in 2.9.3 that prevented the use of GCE authentication unless a config element was present in the GCE configuration in the kubeconfig.
+ - version: 2.9.4
+ date: "2022-12-02"
+ notes:
+ - type: feature
+ title: Subnet detection strategy
+ body: >-
+ The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs.
+ - type: bugfix
+ title: Fix `--set` flag for `telepresence helm install`
+ body: >-
+ The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag.
+ - type: bugfix
+ title: Fix `agent.image` setting propagation
+ body: >-
+ Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file.
+ - type: bugfix
+ title: Delay file sharing until needed
+ body: >-
+ Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected.
+ - type: bugfix
+ title: Cleanup on `telepresence quit`
+ body: >-
+ The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called.
+ - type: bugfix
+ title: Watch `config.yml` without panic
+ body: >-
+ The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active.
+ - type: bugfix
+ title: Thread safety
+ body: >-
+ Fix race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession.
+ - version: 2.9.3
+ date: "2022-11-23"
+ notes:
+ - type: feature
+ title: Helm options for `livenessProbe` and `readinessProbe`
+ body: >-
+ The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond.
+ - type: change
+ title: Improved network communication
+ body: >-
+ The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon.
+ - type: bugfix
+ title: Root daemon debug logging
+ body: >-
+ Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon.
+ - type: bugfix
+ title: Multivalue flag value propagation
+ body: >-
+ Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly.
+ - type: bugfix
+ title: Root daemon stability
+ body: >-
+ The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. This has been fixed.
+ - type: bugfix
+ title: Base DNS resolver
+ body: >-
+ Don't use the systemd-resolved base DNS resolver unless the cluster is proxied.
+ - version: 2.9.2
+ date: "2022-11-16"
+ notes:
+ - type: bugfix
+ title: Fix panic
+ body: >-
+ Fix panic when connecting to an older traffic-manager.
+ - type: bugfix
+ title: Fix header flag
+ body: >-
+ Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
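The 2.9.3 note above adds livenessProbe and readinessProbe support for the traffic-manager deployment. A minimal values.yaml sketch, assuming the chart exposes them as top-level values and that the traffic-manager's API port is reachable under the name api (both the key placement and the port name are assumptions; the probe schema itself is standard Kubernetes):

```yaml
# values.yaml sketch (key placement and port name are assumptions)
livenessProbe:
  tcpSocket:
    port: api            # hypothetical named port on the traffic-manager
  initialDelaySeconds: 10
readinessProbe:
  tcpSocket:
    port: api
  periodSeconds: 10
```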
+ - version: 2.9.1
+ date: "2022-11-16"
+ notes:
+ - type: bugfix
+ title: Connect failures due to missing auth provider.
+ body: >-
+ The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed.
+ - version: 2.9.0
+ date: "2022-11-15"
+ notes:
+ - type: feature
+ title: New command to view client configuration.
+ body: >-
+ A new telepresence config view command was added to make it easy to view the current
+ client configuration.
+ docs: new-in-2.9#view-the-client-configuration
+ - type: feature
+ title: Configure Clients using the Helm chart.
+ body: >-
+ The traffic-manager can now configure all clients that connect through the client: map in
+ the values.yaml file.
+ docs: reference/cluster-config#client-configuration
+ - type: feature
+ title: The Traffic manager version is more visible.
+ body: >-
+ The command telepresence version will now include the version of the traffic manager when
+ the client is connected to a cluster.
+ - type: feature
+ title: Command output in YAML format.
+ body: >-
+ The global --output flag now accepts both yaml and json.
+ docs: new-in-2.9#yaml-output
+ - type: change
+ title: Deprecated status command flag
+ body: >-
+ The telepresence status --json flag is deprecated. Use telepresence status --output=json instead.
+ - type: bugfix
+ title: Unqualified service name resolution in docker.
+ body: >-
+ Unqualified service names now resolve correctly from the Docker container when using telepresence intercept --docker-run.
+ docs: https://github.com/telepresenceio/telepresence/issues/2870
+ - type: bugfix
+ title: Output no longer mixes plaintext and json.
+ body: >-
+ Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon"
+ or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted
+ output when using --output=json.
+ docs: https://github.com/telepresenceio/telepresence/issues/2854
+ - type: bugfix
+ title: No more panic when invalid port names are detected.
+ body: >-
+ A `telepresence intercept` of services with an invalid port name no longer causes a panic.
+ docs: https://github.com/telepresenceio/telepresence/issues/2880
+ - type: bugfix
+ title: Proper errors for bad output formats.
+ body: >-
+ An attempt to use an invalid value for the global --output flag now renders a proper error message.
+ - type: bugfix
+ title: Remove lingering DNS config on macOS.
+ body: >-
+ Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are
+ now removed when a new root daemon starts.
+ - version: 2.8.5
+ date: "2022-11-02"
+ notes:
+ - type: security
+ title: CVE-2022-41716
+ body: >-
+ Updated Golang to 1.19.3 to address CVE-2022-41716.
+ - version: 2.8.4
+ date: "2022-11-02"
+ notes:
+ - type: bugfix
+ title: Release Process
+ body: >-
+ This release resulted in changes to our release process.
+ - version: 2.8.3
+ date: "2022-10-27"
+ notes:
+ - type: feature
+ title: Ability to disable global intercepts.
+ body: >-
+ Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal.
+ docs: https://github.com/telepresenceio/telepresence/issues/2140
+ - type: feature
+ title: Configurable mutating webhook port
+ body: >-
+ The port used for the mutating webhook can be configured using the Helm chart setting
+ agentInjector.webhook.port.
+ docs: install/helm
+ - type: change
+ title: Mutating webhook port defaults to 443
+ body: >-
+ The default port for the mutating webhook is now 443. It used to be 8443.
+ - type: change
+ title: Agent image configuration mandatory in air-gapped environments.
+ body: >-
+ The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is
+ unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart.
+ - type: bugfix
+ title: Can now connect to non-helm installs
+ body: >-
+ telepresence connect now works as long as the traffic manager is installed, even if
+ it wasn't installed via <code>helm install</code>.
+ docs: https://github.com/telepresenceio/telepresence/issues/2824
+ - type: bugfix
+ title: check-vpn crash fixed
+ body: >-
+ telepresence check-vpn no longer crashes when the daemons don't start properly.
+ - version: 2.8.2
+ date: "2022-10-15"
+ notes:
+ - type: bugfix
+ title: Reinstate 2.8.0
+ body: >-
+ There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated.
+ - version: 2.8.1
+ date: "2022-10-14"
+ notes:
+ - type: bugfix
+ title: Rollback 2.8.0
+ body: >-
+ Rollback 2.8.0 while we investigate an issue with Ambassador Cloud.
+ - version: 2.8.0
+ date: "2022-10-14"
+ notes:
+ - type: feature
+ title: Improved DNS resolver
+ body: >-
+ The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME,
+ MX, NS, PTR, SRV, and TXT.
+ docs: reference/dns
+ - type: feature
+ title: New `client` structure in Helm chart
+ body: >-
+ A new client struct was added to the Helm chart. It contains a connectionTTL that controls
+ how long the traffic manager will retain a client connection without seeing any sign of life from the client.
+ docs: reference/cluster-config#Client-Configuration
+ - type: feature
+ title: Include and exclude suffixes configurable using the Helm chart.
+ body: >-
+ A dns element was added to the client struct in the Helm chart. It contains includeSuffixes and
+ excludeSuffixes values that control which names the DNS resolver in the client will delegate to
+ the cluster.
+ docs: reference/cluster-config#DNS
+ - type: feature
+ title: Configurable traffic-manager API port
+ body: >-
+ The API port used by the traffic-manager is now configurable using the Helm chart value apiPort.
+ The default port is 8081.
+ docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence
+ - type: feature
+ title: Envoy server and admin port configuration.
+ body: >-
+ A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and
+ admin ports of the Envoy proxy running in the enhanced traffic-agent can be configured.
+ docs: reference/cluster-config#Envoy-Configuration
+ - type: change
+ title: Helm chart `dnsConfig` moved to `client.routing`.
+ body: >-
+ The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets
+ and neverProxySubnets can now be found under routing in the client struct.
+ docs: reference/cluster-config#Routing
+ - type: change
+ title: Helm chart `agentInjector.agentImage` moved to `agent.image`.
+ body: >-
+ The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but
+ retained for backward compatibility.
+ docs: reference/cluster-config#Image-Configuration
+ - type: change
+ title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`.
+ body: >-
+ The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old
+ value is deprecated but retained for backward compatibility.
+ docs: reference/cluster-config#Application-Protocol-Selection
+ - type: change
+ title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed.
+ body: >-
+ The Helm chart values dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because
+ they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to
+ dedicate an IP for its local resolver.
+ - type: change
+ title: Quit daemons with `telepresence quit -s`
+ body: >-
+ The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with a single option `-s`, which will
+ quit both the root daemon and the user daemon.
+ - type: bugfix
+ title: Environment variable interpolation in pods now works.
+ body: >-
+ Environment variable interpolation now works for all definitions that are copied from pod containers
+ into the injected traffic-agent container.
+ - type: bugfix
+ title: Early detection of namespace conflict
+ body: >-
+ An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation
+ is detected early and prohibited instead of resulting in failing DNS lookups later on.
+ - type: bugfix
+ title: Annoying log message removed
+ body: >-
+ Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason
+ is normal context cancellation.
+ - type: bugfix
+ title: Single name DNS resolution in Docker on Linux host
+ body: >-
+ Single-label names now resolve correctly when using Telepresence in Docker on a Linux host.
+ - type: bugfix
+ title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`.
+ body: >-
+ The Helm chart value appProtocolStrategy is now correctly named (it used to be appPortStrategy).
+ - version: 2.7.6
+ date: "2022-09-16"
+ notes:
+ - type: feature
+ title: Helm chart resource entries for injected agents
+ body: >-
+ The resources for the traffic-agent container and the optional init container can be
+ specified in the Helm chart using the resources and initResources fields
+ of agentInjector.agentImage.
+ - type: feature
+ title: Cluster event propagation when injection fails
+ body: >-
+ When the traffic-manager fails to inject a traffic-agent, the cause for the failure is
+ detected by reading the cluster events, and propagated to the user.
+ - type: feature
+ title: FTP-client instead of sshfs for remote mounts
+ body: >-
+ Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
+ an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x
+ and enabled by setting intercept.useFtp to true in the config.yml.
+ - type: change
+ title: Upgrade of winfsp
+ body: >-
+ Telepresence on Windows upgraded winfsp from version 1.10 to 1.11.
+ - type: bugfix
+ title: Removal of invalid warning messages
+ body: >-
+ Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
+ and /proc/self/auxv.
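Taken together, the 2.8.0 notes above sketch out the new client section of the chart's values.yaml. The key names below come straight from those notes (connectionTTL, dns.includeSuffixes, dns.excludeSuffixes, routing.alsoProxySubnets, routing.neverProxySubnets); the concrete values are illustrative only:

```yaml
# values.yaml sketch assembled from the 2.8.0 notes; values are illustrative
client:
  connectionTTL: 24h            # how long the traffic manager retains a silent client connection
  dns:
    includeSuffixes: [.shared]  # names the client's DNS resolver delegates to the cluster
    excludeSuffixes: [.com, .org]
  routing:
    alsoProxySubnets:
      - 10.1.0.0/16
    neverProxySubnets:
      - 192.168.0.0/24
```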
+ - version: 2.7.5
+ date: "2022-09-14"
+ notes:
+ - type: change
+ title: Rollback of release 2.7.4
+ body: >-
+ This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3.
+ - version: 2.7.4
+ date: "2022-09-14"
+ notes:
+ - type: change
+ title: Broken release
+ body: >-
+ This release was broken on some platforms. Use 2.7.6 instead.
+ - version: 2.7.3
+ date: "2022-09-07"
+ notes:
+ - type: bugfix
+ title: PTY for CLI commands
+ body: >-
+ CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+ docker run -it to allocate a TTY and will also give other commands, like bash read, the
+ same behavior as when executed directly in a terminal.
+ docs: https://github.com/telepresenceio/telepresence/issues/2724
+ - type: bugfix
+ title: Traffic Manager useless warning silenced
+ body: >-
+ The traffic-manager will no longer log numerous warnings saying Issuing a
+ systema request without ApiKey or InstallID may result in an error.
+ - type: bugfix
+ title: Traffic Manager useless error silenced
+ body: >-
+ The traffic-manager will no longer log an error saying Unable to derive subnets
+ from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+ subnets from the pod IPs.
+ - version: 2.7.2
+ date: "2022-08-25"
+ notes:
+ - type: feature
+ title: Autocompletion scripts
+ body: >-
+ Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
+ - type: feature
+ title: Connectivity check timeout
+ body: >-
+ The timeout for the initial connectivity check that Telepresence performs
+ in order to determine if the cluster's subnets are proxied or not can now be configured
+ in the config.yml file using timeouts.connectivityCheck. The default timeout was
+ changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+ docs: reference/config#timeouts
+ - type: change
+ title: gather-traces feedback
+ body: >-
+ The command telepresence gather-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: upload-traces feedback
+ body: >-
+ The command telepresence upload-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: gather-traces tracing
+ body: >-
+ The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: CLI log level
+ body: >-
+ The cli.log is now logged at the same level as the connector.log.
+ docs: reference/config#log-levels
+ - type: bugfix
+ title: Telepresence --help fixed
+ body: >-
+ telepresence --help now works once more even if there's no user daemon running.
+ docs: https://github.com/telepresenceio/telepresence/issues/2735
+ - type: bugfix
+ title: Stream cancellation when no process intercepts
+ body: >-
+ Streams created between the traffic-agent and the workstation are now properly closed
+ when no interceptor process has been started on the workstation. This fixes a potential problem where
+ a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+ and an unresponsive intercept.
+ - type: bugfix
+ title: List command excludes the traffic-manager
+ body: >-
+ The telepresence list command no longer includes the traffic-manager deployment.
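The 2.7.x notes above name two client-side config.yml settings: timeouts.connectivityCheck (2.7.2) and the experimental intercept.useFtp (2.7.6). A minimal config.yml sketch using both, with the 500ms default mentioned above:

```yaml
# config.yml sketch; keys taken from the 2.7.2 and 2.7.6 notes above
timeouts:
  connectivityCheck: 500ms   # initial proxy connectivity check (new default)
intercept:
  useFtp: true               # experimental in 2.7.x: embedded FTP client instead of sshfs
```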
+ - version: 2.7.1
+ date: "2022-08-10"
+ notes:
+ - type: change
+ title: Reinstate telepresence uninstall
+ body: >-
+ Reinstated telepresence uninstall, with the --everything flag deprecated.
+ - type: change
+ title: Reduce telepresence helm uninstall
+ body: >-
+ telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+ - type: bugfix
+ title: Auto-connect for telepresence intercept
+ body: >-
+ telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+ - version: 2.7.0
+ date: "2022-08-07"
+ notes:
+ - type: feature
+ title: Saved Intercepts
+ body: >-
+ Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME
+ docs: reference/intercepts#sharing-intercepts-with-teammates
+ - type: feature
+ title: Distributed Tracing
+ body: >-
+ The Telepresence components now collect OpenTelemetry traces.
+ Up to 10MB of trace data are available at any given time for collection from
+ components. telepresence gather-traces is a new command that will collect
+ all that data and place it into a gzip file, and telepresence upload-traces is
+ a new command that will push the gzipped data into an OTLP collector.
+ docs: troubleshooting#distributed-tracing
+ - type: feature
+ title: Helm install
+ body: >-
+ A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+ docs: install/manager
+ - type: feature
+ title: Ignore Volume Mounts
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string (see the sketch below).
+ - type: feature
+ title: telepresence pod-daemon
+ body: >-
+ The Docker image now contains a new program in addition to
+ the existing traffic-manager and traffic-agent: the pod-daemon. The
+ pod-daemon is a trimmed-down version of the user-daemon that is
+ designed to run as a sidecar in a Pod, enabling CI systems to create
+ preview deploys.
+ - type: feature
+ title: Prometheus support for traffic manager
+ body: >-
+ Added Prometheus support to the traffic manager.
+ - type: change
+ title: No install on telepresence connect
+ body: >-
+ The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+ docs: install/manager
+ - type: change
+ title: Helm Uninstall
+ body: >-
+ The command telepresence uninstall has been moved to telepresence helm uninstall.
+ docs: install/manager
+ - type: bugfix
+ title: readOnlyRootFileSystem mounts work
+ body: >-
+ Added an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`.
+ docs: https://github.com/telepresenceio/telepresence/pull/2666
+ - version: 2.6.8
+ date: "2022-06-23"
+ notes:
+ - type: feature
+ title: Specify Your DNS
+ body: >-
+ The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
+ - type: feature
+ title: Specify a Fallback DNS
+ body: >-
+ Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
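As referenced in the 2.7.0 "Ignore Volume Mounts" note above, the annotation takes a comma-separated list of volume-mount names. A sketch of how it might look on a workload's pod template; the mount names are hypothetical:

```yaml
# Pod template sketch; the volume-mount names are hypothetical
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-ignore-volume-mounts: "legacy-certs,scratch"
```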
+ - type: feature
+ title: Intercept UDP Ports
+ body: >-
+ It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost.
+ - type: change
+ title: Additional Helm Values
+ body: >-
+ The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+ - type: bugfix
+ title: Agent Injection Bugfix
+ body: >-
+ Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+ - version: 2.6.7
+ date: "2022-06-22"
+ notes:
+ - type: bugfix
+ title: Persistent Sessions
+ body: >-
+ The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
+ - type: bugfix
+ title: DNS Requests
+ body: >-
+ Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+ - type: bugfix
+ title: Graceful Shutdown
+ body: >-
+ The traffic-agent will properly shut down if one of its goroutines errors.
+ - version: 2.6.6
+ date: "2022-06-09"
+ notes:
+ - type: bugfix
+ title: Env Var `TELEPRESENCE_API_PORT`
+ body: >-
+ The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
+ - type: bugfix
+ title: Double Printing `--output json`
+ body: >-
+ The --output json global flag no longer outputs multiple objects.
+ - version: 2.6.5
+ date: "2022-06-03"
+ notes:
+ - type: feature
+ title: Helm Option -- `reinvocationPolicy`
+ body: >-
+ The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Proxy Certificate
+ body: >-
+ The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Agent Injection
+ body: >-
+ A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
+ docs: install/helm
+ - type: change
+ title: Windows Tunnel Version Upgrade
+ body: >-
+ Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+ - type: change
+ title: Helm Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+ - type: change
+ title: Kubernetes API Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+ - type: feature
+ title: Flag `--watch` Added to `list` Command
+ body: >-
+ Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
+ - type: change
+ title: Deprecated `images.webhookAgentImage`
+ body: >-
+ The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+ - type: bugfix
+ title: Default `reinvocationPolicy` Set to Never
+ body: >-
+ The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
+ - type: bugfix
+ title: UDP
+ body: >-
+ UDP-based communication with services in the cluster now works as expected.
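Following the 2.6.5 deprecation note above, client configuration should use images.agentImage rather than images.webhookAgentImage. A config.yml sketch; the registry and tag shown are illustrative only:

```yaml
# config.yml sketch; registry and tag are illustrative
images:
  registry: docker.io/datawire   # illustrative registry
  agentImage: tel2:2.6.5         # replaces the deprecated images.webhookAgentImage
```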
+ - type: bugfix
+ title: Telepresence `--help`
+ body: >-
+ The command help will only show Kubernetes flags on the commands that support them.
+ - type: change
+ title: Error Count
+ body: >-
+ Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+ - version: 2.6.4
+ date: "2022-05-23"
+ notes:
+ - type: bugfix
+ title: Upgrade RBAC Permissions
+ body: >-
+ The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+ - version: 2.6.3
+ date: "2022-05-20"
+ notes:
+ - type: bugfix
+ title: Relative Mount Paths
+ body: >-
+ The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+ - type: bugfix
+ title: Traffic Agent Config
+ body: >-
+ The traffic-agent's configuration now updates automatically when services are added, updated, or deleted.
+ - type: bugfix
+ title: Container Injection for Numeric Ports
+ body: >-
+ Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+ - type: bugfix
+ title: Matching Services
+ body: >-
+ Workloads that have several matching services pointing to the same target port are now handled correctly.
+ - type: bugfix
+ title: Unexpected Panic
+ body: >-
+ A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+ - type: bugfix
+ title: Mount Volume Cleanup
+ body: >-
+ A container start would sometimes fail because an old directory remained in a mounted temp volume. This has been fixed.
+ - version: 2.6.2
+ date: "2022-05-17"
+ notes:
+ - type: bugfix
+ title: Argo Injection
+ body: >-
+ Workloads controlled by resources such as Argo Rollouts are now injected correctly.
+ - type: bugfix
+ title: Agent Port Mapping
+ body: >-
+ Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+ - type: bugfix
+ title: GRPC Max Message Size
+ body: >-
+ The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+ - version: 2.6.1
+ date: "2022-05-16"
+ notes:
+ - type: bugfix
+ title: KUBECONFIG environment variable
+ body: >-
+ Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+ - type: bugfix
+ title: Don't Panic
+ body: >-
+ Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+ - version: 2.6.0
+ date: "2022-05-13"
+ notes:
+ - type: feature
+ title: Intercept multiple containers in a pod, and multiple ports per container
+ body: >-
+ Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+ docs: new-in-2.6#intercept-multiple-containers-and-ports
+ - type: feature
+ title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+ body: >-
+ The client will no longer modify deployments, replicasets, or statefulsets in order to
+ inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+ the client now needs fewer permissions in the cluster.
+ docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+ - type: change
+ title: Automatic upgrade of Traffic Agents
+ body: >-
+ When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+ The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+ docs: new-in-2.6#no-more-workload-modifications
+ - type: change
+ title: No default image in the Helm chart
+ body: >-
+ The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+ traffic-manager will ask Ambassador Cloud for the preferred image.
+ docs: new-in-2.6#smarter-agent
+ - type: change
+ title: Upgrade to Helm version 3.8.1
+ body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+ - type: bugfix
+ title: Remote mounts will now function correctly with custom securityContext
+ body: >-
+ The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+ - type: bugfix
+ title: Improved presentation of flags in CLI help
+ body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+ - type: bugfix
+ title: Better termination of process parented by intercept
+ body: >-
+ Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+ When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+ Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+ - version: 2.5.8
+ date: "2022-04-27"
+ notes:
+ - type: bugfix
+ title: Folder creation on `telepresence login`
+ body: >-
+ Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+ - version: 2.5.7
+ date: "2022-04-25"
+ notes:
+ - type: change
+ title: RBAC requirements
+ body: >-
+ A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+ - type: bugfix
+ title: Windows DNS
+ body: >-
+ The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+ - type: bugfix
+ title: Session TTL and Reconnect
+ body: >-
+ A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+ - version: 2.5.6
+ date: "2022-04-18"
+ notes:
+ - type: change
+ title: Fewer Watchers
+ body: >-
+ The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect.
+ - type: bugfix
+ title: More Efficient `gather-logs`
+ body: >-
+ The gather-logs command will no longer send any logs through gRPC.
+ - version: 2.5.5
+ date: "2022-04-08"
+ notes:
+ - type: change
+ title: Traffic Manager Permissions
+ body: >-
+ The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions.
+ - type: bugfix
+ title: Linux DNS Cache
+ body: >-
+ The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+ - type: bugfix
+ title: Automatic Connect Sync
+ body: >-
+ The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+ - type: bugfix
+ title: Disconnect Reconnect Stability
+ body: >-
+ The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+ - type: bugfix
+ title: Limit Watched Namespaces
+ body: >-
+ The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - type: bugfix
+ title: Limit Namespaces used in `gather-logs`
+ body: >-
+ The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - version: 2.5.4
+ date: "2022-03-29"
+ notes:
+ - type: bugfix
+ title: Linux DNS Concurrency
+ body: >-
+ The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+ - type: bugfix
+ title: Non-Functional Flag
+ body: >-
+ The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+ - type: bugfix
+ title: Automatically Remove Failed Intercepts
+ body: >-
+ Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix
+ title: Agent UID
+ body: >-
+ The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+ - type: bugfix
+ title: Gather-Logs Output Filepath
+ body: >-
+ Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+ - type: change
+ title: Remove Unnecessary Error Advice
+ body: >-
+ Advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+ - type: bugfix
+ title: Garbage Collection
+ body: >-
+ Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+ - type: bugfix
+ title: Limit Gathered Logs
+ body: >-
+ The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+ - type: change
+ title: In-Cluster Checks
+ body: >-
+ The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+ - type: change
+ title: Expanded Status Command
+ body: >-
+ The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+ - type: change
+ title: List Command Shows All Intercepts
+ body: >-
+ The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+ - version: 2.5.3
+ date: "2022-02-25"
+ notes:
+ - type: bugfix
+ title: TCP Connectivity
+ body: >-
+ Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+ - type: feature
+ title: Linux Binaries
+ body: >-
+ Client-side binaries for the arm64 architecture are now available for Linux.
+ - version: 2.5.2
+ date: "2022-02-23"
+ notes:
+ - type: bugfix
+ title: DNS server bugfix
+ body: >-
+ Fixed a bug where Telepresence would use the last server in resolv.conf.
+ - version: 2.5.1
+ date: "2022-02-19"
+ notes:
+ - type: bugfix
+ title: Fix GKE auth issue
+ body: >-
+ Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+ - version: 2.5.0
+ date: "2022-02-18"
+ notes:
+ - type: feature
+ title: Intercept specific endpoints
+ body: >-
+ The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+ addition to the --http-match flag to filter personal intercepts by the request URL path.
+ docs: concepts/intercepts#intercepting-a-specific-endpoint
+ - type: feature
+ title: Intercept metadata
+ body: >-
+ The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST
+ API endpoint /intercept-info.
+ docs: reference/restapi#intercept-info
+ - type: change
+ title: Client RBAC watch
+ body: >-
+ The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+ ClusterRole.
+ docs: reference/rbac
+ - type: change
+ title: Dropped backward compatibility with versions <=2.4.4
+ body: >-
+ Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+ functionality was removed.
+ - type: change
+ title: No global networking flags
+ body: >-
+ The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+ command. The subcommands that support networking flags are connect, current-cluster-id,
+ and genyaml.
+ - type: bugfix
+ title: Output of status command
+ body: >-
+ The also-proxy and never-proxy subnets are now displayed correctly when using the
+ telepresence status command.
+ - type: bugfix
+ title: SETENV sudo privilege no longer needed
+ body: >-
+ Telepresence no longer requires SETENV privileges when starting the root daemon.
+ - type: bugfix
+ title: Network device names containing dash
+ body: >-
+ Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+ - type: bugfix
+ title: Linux uses cluster.local as domain instead of search
+ body: >-
+ The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+ systemd-resolved.
Instead, it is added as a domain so that names ending with it are routed
+ to the DNS server.
+ - version: 2.4.11
+ date: "2022-02-10"
+ notes:
+ - type: change
+ title: Add additional logging to troubleshoot intermittent issues with intercepts
+ body: >-
+ We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+ with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+ date: "2022-01-13"
+ notes:
+ - type: feature
+ title: Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts can now be configured using
+ the intercept.appProtocolStrategy in the config.yml file.
+ docs: reference/config/#intercept
+ image: telepresence-2.4.10-intercept-config.png
+ - type: feature
+ title: Helm value for the Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts in agents injected by the
+ mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: New --http-plaintext option
+ body: >-
+ The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+ communicating with the workstation process.
+ docs: reference/intercepts/#tls
+ - type: feature
+ title: Configure the default intercept port
+ body: >-
+ The port used by default in the telepresence intercept command (8080) can now be changed by setting
+ the intercept.defaultPort in the config.yml file.
+ docs: reference/config/#intercept
+ - type: change
+ title: Telepresence CI now uses GitHub Actions
+ body: >-
+ Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+ now easier for contributors to run tests on PRs since maintainers can add an
+ "ok to test" label to PRs (including from forks) to run integration tests.
+ docs: https://github.com/telepresenceio/telepresence/actions
+ image: telepresence-2.4.10-actions.png
+ - type: bugfix
+ title: Check conditions before asking questions
+ body: >-
+ Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+ made that the intercept is possible.
+ docs: reference/intercepts/
+ - type: bugfix
+ title: Fix invalid log statement
+ body: >-
+ Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+ - type: bugfix
+ title: Log errors from sshfs/sftp
+ body: >-
+ Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+ is now properly logged as errors.
+ - type: bugfix
+ title: Don't use Windows path separators in workload pod template
+ body: >-
+ The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+ traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+ date: "2021-12-09"
+ notes:
+ - type: bugfix
+ title: Helm upgrade nil pointer error
+ body: >-
+ A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+ telepresenceAPI value.
+ docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+ date: "2021-12-03"
+ notes:
+ - type: feature
+ title: VPN diagnostics tool
+ body: >-
+ There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+ See the VPN docs for more information on how to use it.
+ docs: reference/vpn
+ image: telepresence-2.4.8-vpn.png
+
+ - type: feature
+ title: RESTful API service
+ body: >-
+ A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+ help determine whether messages with a given set of headers should be consumed from a message queue to which the
+ intercept headers are added.
+ docs: reference/restapi
+ image: telepresence-2.4.8-health-check.png
+
+ - type: change
+ title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+ body: >-
+ You could previously configure this value, but there was no reason to change it, so the value
+ was removed.
+
+ - type: bugfix
+ title: Tunneled network connections behave more like ordinary TCP connections.
+ body: >-
+ When using Telepresence with an external cloud provider for extensions, those tunneled
+ connections now behave more like TCP connections, especially when it comes to timeouts.
+ We've also added increased testing around these types of connections.
+ - version: 2.4.7
+ date: "2021-11-24"
+ notes:
+ - type: feature
+ title: Injector service-name annotation
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+ This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+ docs: reference/cluster-config#service-name-annotation
+ - type: feature
+ title: Skip the Ingress Dialogue
+ body: >-
+ You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags.
+ docs: reference/intercepts#skipping-the-ingress-dialogue
+ - type: feature
+ title: Never proxy subnets
+ body: >-
+ The kubeconfig extensions now support a never-proxy argument,
+ analogous to also-proxy, that defines a set of subnets that
+ will never be proxied via telepresence (see the sketch below).
+ docs: reference/config#neverproxy
+ - type: change
+ title: Daemon versions check
+ body: >-
+ Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+ - type: change
+ title: No explicit DNS flushes
+ body: >-
+ Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+ docs: reference/routing#dns-caching
+ - type: bugfix
+ title: Legacy flags now work with global flags
+ body: >-
+ Legacy flags such as --swap-deployment can now be used together with global flags.
+ - type: bugfix
+ title: Outbound connection closing
+ body: >-
+ Outbound connections are now properly closed when the peer closes.
+ - type: bugfix
+ title: Prevent DNS recursion
+ body: >-
+ The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client).
+ docs: reference/routing#dns-recursion
+ - type: bugfix
+ title: Prevent network recursion
+ body: >-
+ The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+ cluster runs in a Docker container on the client).
+ docs: reference/routing#connect-recursion
+ - type: bugfix
+ title: Traffic Manager deadlock fix
+ body: >-
+ The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
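As referenced in the 2.4.7 "Never proxy subnets" note above, never-proxy lives in the kubeconfig cluster extensions, analogous to also-proxy. A sketch of a cluster entry, assuming the extension is named telepresence.io; the server address and subnets are illustrative:

```yaml
# kubeconfig sketch; extension name assumed, addresses illustrative
apiVersion: v1
clusters:
  - name: example-cluster
    cluster:
      server: https://127.0.0.1:6443
      extensions:
        - name: telepresence.io
          extension:
            never-proxy:
              - 10.10.4.0/24   # subnets telepresence must never proxy
```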
+ - type: bugfix
+ title: webhookRegistry config propagation
+ body: >-
+ The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+ docs: reference/config#images
+ - type: bugfix
+ title: Login refreshes expired tokens
+ body: >-
+ When a user's token has expired, telepresence login
+ will prompt the user to log in again to get a new token. Previously,
+ the user had to telepresence quit and telepresence logout
+ to get a new token.
+ docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+ date: "2021-11-02"
+ notes:
+ - type: feature
+ title: Manually injecting Traffic Agent
+ body: >-
+ Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+ Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+ docs: reference/intercepts/manual-agent
+
+ - type: feature
+ title: Telepresence CLI released for Apple silicon
+ body: >-
+ Telepresence is now built and released for Apple silicon.
+ docs: install/?os=macos
+
+ - type: change
+ title: Telepresence help text now links to telepresence.io
+ body: >-
+ We now include a link to our documentation when you run telepresence --help. This will make it easier
+ for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+ image: telepresence-2.4.6-help-text.png
+
+ - type: bugfix
+ title: Fixed bug when API server is inside CIDR range of pods/services
+ body: >-
+ If the API server for your Kubernetes cluster had an IP that fell within the
+ subnet generated from pods/services in the cluster, traffic to the API server
+ would be proxied, resulting in hangs or failed connections. We now ensure
+ that the API server is explicitly not proxied.
+ - version: 2.4.5
+ date: "2021-10-15"
+ notes:
+ - type: feature
+ title: Get pod yaml with gather-logs command
+ body: >-
+ Adding the flag --get-pod-yaml to your request will get the
+ pod yaml manifest for all Kubernetes components you are getting logs for
+ (traffic-manager and/or pods containing a
+ traffic-agent container). This flag is set to false
+ by default.
+ docs: reference/client
+ image: telepresence-2.4.5-pod-yaml.png
+
+ - type: feature
+ title: Anonymize pod name + namespace when using gather-logs command
+ body: >-
+ Adding the flag --anonymize to your command will
+ anonymize your pod names + namespaces in the output file. We replace the
+ sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+ relationships between the objects without exposing the real names of your
+ objects. This flag is set to false by default.
+ docs: reference/client
+ image: telepresence-2.4.5-logs-anonymize.png
+
+ - type: feature
+ title: Added context and defaults to ingress questions when creating a preview URL
+ body: >-
+ Previously, we referred to OSI model layers when asking these questions, but this
+ terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+ docs: howtos/preview-urls
+ image: telepresence-2.4.5-preview-url-questions.png
+
+ - type: feature
+ title: Support for intercepting headless services
+ body: >-
+ Intercepting headless services is now officially supported. You can request a
+ headless service on whatever port it exposes and get a response from the
+ intercept.
This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent could fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a GitHub issue. For more
+ information on usage, run telepresence gather-logs --help.
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets.
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It now errors out correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the environment cluster is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ Telepresence now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were incorrectly not written to log files. This has
+ been fixed, so logs for Windows are now at parity with the ones on macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls would
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that won't occur.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ We've now set the default for all components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) to use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the traffic-manager. Additionally, we
+ now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+ docs: install/helm
+
+ - type: change
+ title: Traffic Manager installed via helm
+ body: >-
+ The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+ This change is transparent to the user.
+ A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+ docs: reference/config#timeouts
+
+ - type: change
+ title: traffic-manager gets cluster ID itself instead of via environment variable
+ body: >-
+ The traffic-manager used to get the cluster ID as an environment variable when running
+ telepresence connect or via adding the value in the helm chart. This was
+ clunky, so now the traffic-manager gets the value itself as long as it has permissions
+ to "get" and "list" namespaces (this has been updated in the helm chart).
+ docs: install/helm
+
+ - type: bugfix
+ title: Telepresence now mounts all directories from /var/run/secrets
+ body: >-
+ In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+ We now mount *all* directories in /var/run/secrets, which, for example, includes
+ directories like eks.amazonaws.com used for IRSA tokens.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Max gRPC receive size correctly propagates to all grpc servers
+ body: >-
+ This fixes a bug where the max gRPC receive size was only propagated to some of the
+ grpc servers, causing failures when the message size was over the default.
+ docs: reference/config/#grpc
+
+ - type: bugfix
+ title: Updated our Homebrew packaging to run manually
+ body: >-
+ We made some updates to our script that packages Telepresence for Homebrew so that it
+ can be run manually. This will enable maintainers of Telepresence to run the script manually
+ should we ever need to roll back a release and have latest point to an older version.
+ docs: install/
+
+ - type: bugfix
+ title: Telepresence uses namespace from kubeconfig context on each call
+ body: >-
+ In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+ for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+ when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+ Telepresence now does that and uses the namespace designated by the context on each call.
+
+ - type: bugfix
+ title: Idle outbound TCP connections timeout increased to 7200 seconds
+ body: >-
+ Some users were noticing that their intercepts would start failing after 60 seconds.
+ This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+ now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+ - type: bugfix
+ title: Telepresence will automatically remove a socket upon ungraceful termination
+ body: >-
+ When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+ that the process has terminated ungracefully" and imply that they should remove the socket. We've
+ now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+ - type: bugfix
+ title: Fixed user daemon deadlock
+ body: >-
+ Remedied a situation where the user daemon could hang when a user was logged in.
+
+ - type: bugfix
+ title: Fixed agentImage config setting
+ body: >-
+ The config setting images.agentImage is no longer required to contain the repository, and it
+ will use the value at images.repository.
+ docs: reference/config/#images
+
+ - version: 2.4.0
+ date: "2021-08-04"
+ notes:
+ - type: feature
+ title: Windows Client Developer Preview
+ body: >-
+ There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+ All the same features supported by the macOS and Linux clients are available on Windows.
+ image: telepresence-2.4.0-windows.png
+ docs: install
+
+ - type: feature
+ title: CLI raises helpful messages from Ambassador Cloud
+ body: >-
+ Telepresence can now receive messages from Ambassador Cloud and raise
+ them to the user when they perform certain commands. This enables us
+ to send you messages that may enhance your Telepresence experience when
+ using certain commands. Frequency of messages can be configured in your
+ config.yml.
+ image: telepresence-2.4.0-cloud-messages.png
+ docs: reference/config#cloud
+
+ - type: bugfix
+ title: Improved stability of systemd-resolved-based DNS
+ body: >-
+ When initializing the systemd-resolved-based DNS, the routing domain
+ is set to improve stability in non-standard configurations. This also enables the
+ overriding resolver to do a proper takeover once the DNS service ends.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Fixed an edge case when intercepting a container with multiple ports
+ body: >-
+ When specifying a port of a container to intercept, if there was a container in the
+ pod without ports, it was automatically selected. This has been fixed so we'll only
+ choose the container with "no ports" if there's no container that explicitly matches
+ the port used in your intercept.
+ docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+ - type: bugfix
+ title: $(NAME) references in the agent's environment are now interpolated correctly.
+ body: >-
+ If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+ would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+ - type: bugfix
+ title: Telepresence no longer prints INFO message when there is no config.yml
+ body: >-
+ Fixed a regression that printed an INFO message to the terminal when there wasn't a
+ config.yml present. The config is optional, so this message has been
+ removed.
+ docs: reference/config
+
+ - type: bugfix
+ title: Telepresence no longer panics when using --http-match
+ body: >-
+ Fixed a bug where Telepresence would panic if the value passed to --http-match
+ didn't contain an equal sign. The correct syntax is shown in the --help
+ string and looks like --http-match=HTTP2_HEADER=REGEX.
+ docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+ - type: bugfix
+ title: Improved subnet updates
+ body: >-
+ The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+ the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+ client and the traffic-manager. This has been fixed so we only send updates when the subnets
+ themselves actually change.
+ docs: reference/routing/#subnets
+
+ - version: 2.3.7
+ date: "2021-07-23"
+ notes:
+ - type: feature
+ title: Also-proxy in telepresence status
+ body: >-
+ An also-proxy entry in the Kubernetes cluster config will
+ show up in the output of the telepresence status command.
+ docs: reference/config
+
+ - type: feature
+ title: Non-interactive telepresence login
+ body: >-
+ telepresence login now has an
+ --apikey=KEY flag that allows for
+ non-interactive logins. This is useful for headless
+ environments where launching a web-browser is impossible,
+ such as cloud shells, Docker containers, or CI.
+ image: telepresence-2.3.7-newkey.png
+ docs: reference/client/login/
+
+ - type: bugfix
+ title: Mutating webhook injector correctly hides named ports for probes.
+ body: >-
+ The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: telepresence current-cluster-id crash fixed
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+ to crash.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Better UX around intercepts with no local process running
+ body: >-
+ Requests would hang indefinitely when initiating an intercept before you
+ had a local process running. This has been fixed and will result in an
+ Empty reply from server until you start a local process.
+ docs: reference/intercepts
+
+ - type: bugfix
+ title: API keys no longer show as "no description"
+ body: >-
+ New API keys generated internally for communication with
+ Ambassador Cloud no longer show up as "no description" in
+ the Ambassador Cloud web UI. Existing API keys generated by
+ older versions of Telepresence will still show up this way.
+ image: telepresence-2.3.7-keydesc.png
+
+ - type: bugfix
+ title: Fix corruption of user-info.json
+ body: >-
+ Fixed a race condition where rapidly logging in and logging out
+ could cause memory corruption or corruption of the
+ user-info.json cache file used when
+ authenticating with Ambassador Cloud.
+
+ - type: bugfix
+ title: Improved DNS resolver for systemd-resolved
+ body:
+ Telepresence's systemd-resolved-based DNS resolver is now more
+ stable, and if it fails to initialize, the overriding resolver
+ will no longer cause general DNS lookup failures when telepresence falls back to
+ using it.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Faster telepresence list command
+ body:
+ The performance of telepresence list has been improved
+ significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client
+
+ - version: 2.3.6
+ date: "2021-07-20"
+ notes:
+ - type: bugfix
+ title: Fix preview URLs
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused preview
+ URLs to not work.
+
+ - type: bugfix
+ title: Fix subnet discovery
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the Traffic
+ Manager's RoleBinding did not correctly reference
+ the traffic-manager Role, preventing
+ subnet discovery from working correctly.
+ docs: reference/rbac/
+
+ - type: bugfix
+ title: Fix root-user configuration loading
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the root daemon
+ did not correctly read the configuration file, ignoring the
+ user's configured log levels and timeouts.
+ docs: reference/config/
+
+ - type: bugfix
+ title: Fix a user daemon crash
+ body: >-
+ Fixed an issue that could cause the user daemon to crash
+ during shutdown, because it unconditionally
+ attempted to close a channel even though the channel might
+ already be closed.
+
+ - version: 2.3.5
+ date: "2021-07-15"
+ notes:
+ - type: feature
+ title: traffic-manager in multiple namespaces
+ body: >-
+ We now support installing multiple traffic managers in the same cluster.
+ This will allow operators to install deployments of telepresence that are
+ limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+ docs: install/helm
+ - type: feature
+ title: No more dependence on kubectl
+ body: >-
+ Telepresence no longer depends on having an external
+ kubectl binary, which might not be present for
+ OpenShift users (who have oc instead of
+ kubectl).
+ - type: feature
+ title: Agent image now configurable
+ body: >-
+ We now support configuring which agent image
+ registry to use in the
+ config. This enables users whose laptop is an air-gapped environment to
+ create personal intercepts without requiring a login. It also makes it easier
+ for those who are developing on Telepresence to specify which agent image should
+ be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+ used.
+ image: ./telepresence-2.3.5-agent-config.png
+ docs: reference/config/#images
+ - type: feature
+ title: Max gRPC receive size now configurable
+ body: >-
+ The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+ image: ./telepresence-2.3.5-grpc-max-receive-size.png
+ docs: reference/config/#grpc
+ - type: feature
+ title: CLI can be used in air-gapped environments
+ body: >-
+ While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+ we've added an option users can add to their config.yml to ensure the CLI acts like it
+ is in an air-gapped environment. Air-gapped environments require a manually installed
+ license.
+ docs: reference/cluster-config/#air-gapped-cluster
+ image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+ date: "2021-07-09"
+ notes:
+ - type: bugfix
+ title: Improved IP log statements
+ body: >-
+ Some log statements were printing incorrect characters where they should have printed IP addresses.
+ This has been resolved to include more accurate and useful logging.
+ docs: reference/config/#log-levels
+ image: ./telepresence-2.3.4-ip-error.png
+ - type: bugfix
+ title: Improved messaging when multiple services match a workload
+ body: >-
+ If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which
+ service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png
+ docs: reference/intercepts
+ - type: bugfix
+ title: Traffic-manager creates services in its own namespace to determine subnet
+ body: >-
+ Telepresence will now determine the service subnet by creating a dummy-service in its own
+ namespace, instead of the default namespace, which was causing RBAC permissions issues in
+ some clusters.
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Telepresence connect respects pre-existing clusterrole
+ body: >-
+ When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+ cluster, Telepresence will no longer try to update the clusterrole.
+ docs: reference/rbac
+ - type: bugfix
+ title: Helm Chart fixed for clientRbac.namespaced
+ body: >-
+ The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+ docs: install/helm
+ - version: 2.3.3
+ date: "2021-07-07"
+ notes:
+ - type: feature
+ title: Traffic Manager Helm Chart
+ body: >-
+ Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the
+ server-side components of Telepresence separately from the CLI (which
+ in turn allows for better separation of permissions).
+ image: ./telepresence-2.3.3-helm.png
+ docs: install/helm/
+ - type: feature
+ title: Traffic-manager in custom namespace
+ body: >-
+ As the traffic-manager can now be installed in any
+ namespace via Helm, Telepresence can now be configured to look for the
+ Traffic Manager in a namespace other than ambassador.
+ This can be configured on a per-cluster basis.
+ image: ./telepresence-2.3.3-namespace-config.png
+ docs: reference/config
+ - type: feature
+ title: Intercept --to-pod
+ body: >-
+ telepresence intercept now supports a
+ --to-pod flag that can be used to port-forward sidecars'
+ ports from an intercepted pod.
+ image: ./telepresence-2.3.3-to-pod.png
+ docs: reference/intercepts
+ - type: change
+ title: Change in migration from edgectl
+ body: >-
+ Telepresence no longer automatically shuts down the old
+ api_version=1 edgectl daemon. If migrating
+ from such an old version of edgectl, you must now manually
+ shut down the edgectl daemon before running Telepresence.
+ This was already the case when migrating from the newer
+ api_version=2 edgectl.
+ - type: bugfix
+ title: Fixed error during shutdown
+ body: >-
+ The root daemon no longer terminates when the user daemon disconnects
+ from its gRPC streams, and instead waits to be terminated by the CLI.
+ Previously, this could cause problems with things not being cleaned up correctly.
+ - type: bugfix
+ title: Intercepts will survive deletion of intercepted pod
+ body: >-
+ An intercept will survive deletion of the intercepted pod provided
+ that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+ date: "2021-06-18"
+ notes:
+ # Headliners
+ - type: feature
+ title: Service Port Annotation
+ body: >-
+ The mutator webhook for injecting traffic-agents now
+ recognizes a
+ telepresence.getambassador.io/inject-service-port
+ annotation to specify which port to intercept, bringing the
+ functionality of the --port flag to users who
+ use the mutator webhook in order to control Telepresence via
+ GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png
+ docs: reference/cluster-config#service-port-annotation
+ - type: feature
+ title: Outbound Connections
+ body: >-
+ Outbound connections are now routed through the intercepted
+ Pods, which means that the connections originate from that
+ Pod from the cluster's perspective. This allows service
+ meshes to correctly identify the traffic.
+ docs: reference/routing/#outbound
+ - type: change
+ title: Inbound Connections
+ body: >-
+ Inbound connections from an intercepted agent are now
+ tunneled to the manager over the existing gRPC connection,
+ instead of establishing a new connection to the manager for
+ each inbound connection. This avoids interference from
+ certain service mesh configurations.
+ docs: reference/routing/#inbound
+
+ # RBAC changes
+ - type: change
+ title: Traffic Manager needs new RBAC permissions
+ body: >-
+ The Traffic Manager requires RBAC
+ permissions to list Nodes and Pods, and to create a dummy
+ Service in the manager's namespace.
+ docs: reference/routing/#subnets
+ - type: change
+ title: Reduced developer RBAC requirements
+ body: >-
+ The on-laptop client no longer requires RBAC permissions to list the Nodes
+ in the cluster or to create Services, as that functionality
+ has been moved to the Traffic Manager.
+
+ # Bugfixes
+ - type: bugfix
+ title: Able to detect subnets
+ body: >-
+ Telepresence will now detect the Pod CIDR ranges even if
+ they are not listed in the Nodes.
+ image: ./telepresence-2.3.2-subnets.png
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Dynamic IP ranges
+ body: >-
+ The list of cluster subnets that the virtual network
+ interface will route is now configured dynamically and will
+ follow changes in the cluster.
+ - type: bugfix
+ title: No duplicate subnets
+ body: >-
+ Subnets fully covered by other subnets are now pruned
+ internally and thus never superfluously added to the
+ laptop's routing table.
+ docs: reference/routing/#subnets
+ - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+ title: Change in default timeout
+ body: >-
+ The trafficManagerAPI timeout default has
+ changed from 5 seconds to 15 seconds, in order to accommodate
+ the extended time it takes for the traffic-manager to do its
+ initial discovery of cluster info as a result of the above
+ bugfixes.
+ - type: bugfix
+ title: Removal of DNS config files on macOS
+ body: >-
+ On macOS, files generated under
+ /etc/resolver/ as the result of using
+ include-suffixes in the cluster config are now
+ properly removed on quit.
+ docs: reference/routing/#macos-resolver
+
+ - type: bugfix
+ title: Large file transfers
+ body: >-
+ Telepresence no longer erroneously terminates connections
+ early when sending a large HTTP response from an intercepted
+ service.
+ - type: bugfix
+ title: Race condition in shutdown
+ body: >-
+ When shutting down the user-daemon or root-daemon on the
+ laptop, telepresence quit and related commands
+ no longer return early before everything is fully shut down.
+ You can now count on all of the side effects on the laptop
+ having been cleaned up by the time the command returns.
+ - version: 2.3.1
+ date: "2021-06-14"
+ notes:
+ - title: DNS Resolver Configuration
+ body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included. These can be configured on a per-cluster basis."
+ image: ./telepresence-2.3.1-dns.png
+ docs: reference/config
+ type: feature
+ - title: AlsoProxy Configuration
+ body: "Telepresence now supports also proxying user-specified subnets so that users can access external services that are only accessible from the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+ image: ./telepresence-2.3.1-alsoProxy.png
+ docs: reference/config
+ type: feature
+ - title: Mutating Webhook for Injecting Traffic Agents
+ body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+ image: ./telepresence-2.3.1-inject.png
+ docs: reference/rbac
+ type: feature
+ - title: Traffic Manager Connect Timeout
+ body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook."
+ image: ./telepresence-2.3.1-trafficmanagerconnect.png
+ docs: reference/config
+ type: change
+ - title: Fix for large file transfers
+ body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+ image: ./telepresence-2.3.1-large-file-transfer.png
+ docs: reference/tun-device
+ type: bugfix
+ - title: Brew Formula Changed
+ body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+ image: ./telepresence-2.3.1-brew.png
+ docs: install/
+ type: change
+ - version: 2.3.0
+ date: "2021-06-01"
+ notes:
+ - title: Brew install Telepresence
+ body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+ image: ./telepresence-2.3.0-homebrew.png
+ docs: install/
+ type: feature
+ - title: TCP and UDP routing via Virtual Network Interface
+ body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP."
+ image: ./tunnel.jpg
+ docs: reference/tun-device
+ type: feature
+ - title: SSH is no longer used
+ body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+ image: ./no-ssh.png
+ docs: reference/tun-device/#no-ssh-required
+ type: change
+ - title: Running in a Docker container
+ body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+ image: ./run-tp-in-docker.png
+ docs: reference/inside-container
+ type: feature
+ - title: Configurable Log Levels
+ body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png
+ docs: reference/config/#log-levels
+ type: feature
+ - version: 2.2.2
+ date: "2021-05-17"
+ notes:
+ - title: Legacy Telepresence subcommands
+ body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+ image: ./telepresence-2.2.png
+ docs: install/migrate-from-legacy/
+ type: feature
diff --git a/docs/telepresence/2.13/troubleshooting/index.md b/docs/telepresence/2.13/troubleshooting/index.md
new file mode 100644
index 000000000..5a477f20a
--- /dev/null
+++ b/docs/telepresence/2.13/troubleshooting/index.md
@@ -0,0 +1,331 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please see the [cluster in a hosted VM](../howtos/cluster-in-vm) page for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
If an organization isn't listed during login,
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have **Request**, which sends an email
+to the owner. If an access request has been denied in the past, the
+user will not see the **Request** button; they will have to reach out
+to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+and then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove the old sshfs, macfuse, and osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission.
+7. Reboot your computer.
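+
+If you want to double-check that the macFUSE kernel extension actually loaded after the reboot, one quick sanity check is to grep the loaded kernel extensions (a sketch; `kextstat` is the classic tool, though newer macOS versions are moving to `kmutil showloaded`):
+
+```
+$ kextstat | grep -i fuse
+```
+
+If nothing shows up, revisit the permission approval in step 6 above.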
+
+## Volume mounts are not working on Linux
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts.
+
+After you've installed `sshfs`, if mounts still aren't working:
+1. Uncomment `user_allow_other` in `/etc/fuse.conf`
+2. Add your user to the "fuse" group with: `sudo usermod -a -G fuse <your username>`
+3. Restart your computer after uncommenting `user_allow_other`
+
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+ traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of, e.g.,
Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export.
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME READY STATUS RESTARTS AGE
+echo-easy-7f6d54cff8-rz44k 1/1 Running 0 5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME READY STATUS RESTARTS AGE
+echo-easy-d8dc4cc7c-27567 1/1 Running 0 2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on resolving this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+  - ...
+    ports:
+    - name: http
+      containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+  - port: 80
+    targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., `containerPort: http` becomes `containerPort: tm-http`).
+2. Let the container port of our injected traffic-agent use the original name (i.e., `containerPort: http`).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `clusterIP: None`), then using named ports won't help, because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
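+
+To make the renaming in steps 1 and 2 above concrete (for the non-headless case), the relevant part of an injected pod spec might look roughly like the sketch below. The container names and the agent's port number are illustrative assumptions, not the webhook's exact output:
+
+```yaml
+spec:
+  containers:
+  - name: my-app            # your original container (name assumed)
+    ports:
+    - name: tm-http         # the original "http" port, renamed by the webhook
+      containerPort: 8080
+  - name: traffic-agent     # the injected sidecar
+    ports:
+    - name: http            # takes over the original port name, so the
+      containerPort: 9900   # service's targetPort now resolves here (port assumed)
+```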
+
+## Error connecting to GKE or EKS cluster
+
+GKE and EKS require a plugin that utilizes their respective IAM providers.
+You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins
+for Telepresence to connect to your cluster.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a message in the logs `too many files open`, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
+
+## Connected to cluster via VPN but IPs don't resolve
+
+If `telepresence connect` succeeds, but you find yourself unable to reach services on your cluster, a routing conflict may be to blame. This frequently happens when connecting to a VPN at the same time as telepresence,
+as VPN clients often add routes that conflict with those added by telepresence. To debug this, pick an IP address in the cluster and get its route information. In this case, we'll get the route for `100.124.150.45`, and discover
+that it's running through a `tailscale` device.
+
+
+
+
+```console
+$ route -n get 100.124.150.45
+ route to: 100.64.2.3
+destination: 100.64.0.0
+ mask: 255.192.0.0
+ interface: utun4
+ flags:
+ recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
+ 0 0 0 0 0 0 1280 0
+```
+
+Note that on macOS it's difficult to determine what software the name of a virtual interface corresponds to -- `utun4` doesn't indicate that it was created by tailscale.
+One option is to look at the output of `ifconfig` before and after connecting to your VPN to see if the interface in question is being added upon connection.
+
+
+
+
+```console
+$ ip route get 100.124.150.45
+100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0
+```
+
+
+
+
+```console
+$ Find-NetRoute -RemoteIPAddress 100.124.150.45
+
+IPAddress : 100.102.111.26
+InterfaceIndex : 29
+InterfaceAlias : Tailscale
+AddressFamily : IPv4
+Type : Unicast
+PrefixLength : 32
+PrefixOrigin : Manual
+SuffixOrigin : Manual
+AddressState : Preferred
+ValidLifetime : Infinite ([TimeSpan]::MaxValue)
+PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
+SkipAsSource : False
+PolicyStore : ActiveStore
+
+
+Caption :
+Description :
+ElementName :
+InstanceID : ;::8;;;8
+```
+
+This will tell you which device the traffic is being routed through. As a rule, if the traffic is not being routed by the telepresence device,
+your VPN may need to be reconfigured, as its routing configuration is conflicting with telepresence. One way to determine if this is the case
+is to run `telepresence quit -s`, check the route for an IP in the cluster (see commands above), run `telepresence connect`, and re-run the commands to see if the output changes (see the sketch below).
+If it doesn't change, that means telepresence is unable to override your VPN routes, and your VPN may need to be reconfigured. Talk to your network admins
+to configure it such that clients do not add routes that conflict with the pod and service CIDRs of the clusters. How this is done will
+vary depending on the VPN provider.
+Future versions of telepresence will be smarter about informing you of such conflicts upon connection.
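+
+As a concrete sketch of that quit/reconnect check on Linux (using the example IP from above; your addresses, interface names, and exact output will differ):
+
+```console
+$ telepresence quit -s
+$ ip route get 100.124.150.45   # expect the VPN device here, e.g. tailscale0
+$ telepresence connect
+$ ip route get 100.124.150.45   # ideally now routed via the telepresence TUN device
+```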
diff --git a/docs/telepresence/2.13/versions.yml b/docs/telepresence/2.13/versions.yml
new file mode 100644
index 000000000..51b5eaf7e
--- /dev/null
+++ b/docs/telepresence/2.13/versions.yml
@@ -0,0 +1,5 @@
+version: "2.13.3"
+dlVersion: "latest"
+docsVersion: "2.13"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.14 b/docs/telepresence/2.14
deleted file mode 120000
index 0a066d2eb..000000000
--- a/docs/telepresence/2.14
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.14
\ No newline at end of file
diff --git a/docs/telepresence/2.14/ci/github-actions.md b/docs/telepresence/2.14/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.14/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent service. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the service running locally in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Performs the initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence on your CI server, using either the latest version or one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key, set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post-action scripts are executed automatically, regardless of the job result. This way, you don't have to worry about terminating the session yourself.
You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, such as a `kubeconfig.yaml` file with the information needed to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`. One way to create both secrets with the GitHub CLI is sketched after this list.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
## Initial configuration setup

To use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository can run your workflow. To complete the Telepresence setup:

This action only supports Ubuntu runners at the moment.

1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:

   ```yaml
   name: Configuring telepresence
   on: workflow_dispatch
   jobs:
     configuring:
       name: Configure telepresence
       runs-on: ubuntu-latest
       env:
         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
       steps:
         - name: Checkout
           uses: actions/checkout@v3
         #---- here run your custom command to connect to your cluster
         #- name: Connect to cluster
         #  shell: bash
         #  run: ./connect-to-cluster
         #----
         - name: Configuring Telepresence
           uses: datawire/telepresence-actions/configure@v1.0-rc
           with:
             version: latest
   ```

1. Push the `configure-telepresence.yaml` file to your repository.
1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.

When the workflow runs, the action caches Telepresence's configuration directory, along with a Telepresence configuration file if you provide one. This configuration file should be placed at `/.github/telepresence-config/config.yml` and contain your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.

When you create a branch, do not remove the `.telepresence/config.yml` file. It is required for the Telepresence GitHub Actions to run properly when there is a new push to the branch in your repository.

## Using Telepresence in your GitHub Actions workflows

1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and replace the placeholders with real actions to run your service and perform integration tests.
   ```yaml
   name: Run Integration Tests
   on:
     push:
       branches-ignore:
         - 'main'
   jobs:
     my-job:
       name: Run Integration Test using Remote Cluster
       runs-on: ubuntu-latest
       env:
         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
         KUBECONFIG: /opt/kubeconfig
       steps:
         - name: Checkout
           uses: actions/checkout@v3
           with:
             ref: ${{ github.event.pull_request.head.sha }}
         #---- here run your custom command to run your service
         #- name: Run your service to test
         #  shell: bash
         #  run: ./run_local_service
         #----
         # Make the cluster credentials available to Telepresence
         - name: Create kubeconfig file
           run: |
             cat <<EOF > /opt/kubeconfig
             ${{ env.KUBECONFIG_FILE }}
             EOF
         - name: Install Telepresence
           uses: datawire/telepresence-actions/install@v1.0-rc
           with:
             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
         - name: Telepresence connect
           uses: datawire/telepresence-actions/connect@v1.0-rc
         - name: Login
           uses: datawire/telepresence-actions/login@v1.0-rc
           with:
             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
         - name: Intercept the service
           uses: datawire/telepresence-actions/intercept@v1.0-rc
           with:
             service_name: service-name
             service_port: 8081:8080
             namespace: namespacename-of-your-service
             http_header: "x-telepresence-intercept-id=service-intercepted"
             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
         #---- here run your custom command
         #- name: Run integration tests
         #  shell: bash
         #  run: ./run_integration_test
         #----
   ```

The preceding example is a workflow that:

* Checks out the repository code.
* Has a placeholder step to run the service during CI.
* Creates the `/opt/kubeconfig` file with the contents of `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
* Installs Telepresence.
* Runs Telepresence Connect.
* Logs into Telepresence.
* Intercepts traffic to the service running in the remote cluster.
* Includes a placeholder for an action that runs integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.

This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.14/ci/pod-daemon.md b/docs/telepresence/2.14/ci/pod-daemon.md
new file mode 100644
index 000000000..9342a2d86
--- /dev/null
+++ b/docs/telepresence/2.14/ci/pod-daemon.md
@@ -0,0 +1,202 @@
---
title: Pod Daemon
description: "Pod Daemon and how to integrate it in your processes to run tests for your own environments and improve your CI/CD pipeline."
---

# Telepresence with Pod Daemon

The Pod Daemon facilitates the execution of Telepresence by using a pod as a sidecar to your application. This becomes particularly beneficial when you intend to incorporate Deployment Previews into your pipeline. Essentially, the pod daemon is a Telepresence instance running in a pod rather than on a developer's laptop.
This presents a compelling solution for developers who wish to share a live iteration of their work within the organization. A preview URL can be produced, which links directly to the image created during the Continuous Integration (CI) process. This preview URL can then be appended to the pull request, streamlining the code review process and enabling real-time project sharing within the team.

## Overview

The Pod Daemon functions as an optimized version of Telepresence, undertaking all preliminary configuration tasks (such as login and daemon startup) and additionally executing the intercept.

The initial setup phase involves deploying a service account with the minimal permissions necessary for running Telepresence, coupled with a secret that holds the API key essential for executing a Telepresence login.

Following this setup, your main responsibility consists of deploying your operational application, which incorporates a pod daemon operating as a sidecar. The parameters for the pod daemon require the relevant details concerning your live application. As it initiates, the pod daemon will intercept your live application and divert traffic towards your working application. This traffic redirection is based on your configured headers, which come into play each time the application is accessed.
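For example, with the pod daemon below configured with `--http-header=test-telepresence=1`, only requests carrying that header are diverted to the intercepted working version; a sketch using the `--ingress-host` value from the deployment later in this guide:

```bash
# Requests with the configured header reach the intercepted (working) version:
curl -H "test-telepresence: 1" http://quote.default.svc.cluster.local/
# Requests without the header continue to reach the live application:
curl http://quote.default.svc.cluster.local/
```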

+ +

## Usage

To commence the setup, it's necessary to deploy both a service account and a secret. Here's how to go about it:

1. Establish a connection to your cluster and proceed to deploy this within the namespace of your live application (`default` in this case).

   ```yaml
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: ambassador-deploy-previews
     namespace: default
     labels:
       app.kubernetes.io/name: ambassador-deploy-previews
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: ambassador-deploy-previews
     labels:
       app.kubernetes.io/name: ambassador-deploy-previews
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: ambassador-deploy-previews
   subjects:
     - name: ambassador-deploy-previews
       namespace: default
       kind: ServiceAccount
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     labels:
       rbac.getambassador.io/role-group: ambassador-deploy-previews
     name: ambassador-deploy-previews
   rules:
     - apiGroups: [ "" ]
       verbs: [ "get", "list", "watch", "create", "delete" ]
       resources:
         - namespaces
         - pods
         - pods/log
         - pods/portforward
         - services
         - secrets
         - configmaps
         - endpoints
         - nodes
         - deployments
         - serviceaccounts

     - apiGroups: [ "apps", "rbac.authorization.k8s.io", "admissionregistration.k8s.io" ]
       verbs: [ "get", "list", "create", "update", "watch" ]
       resources:
         - deployments
         - statefulsets
         - clusterrolebindings
         - rolebindings
         - clusterroles
         - replicasets
         - roles
         - serviceaccounts
         - mutatingwebhookconfigurations

     - apiGroups: [ "getambassador.io" ]
       verbs: [ "get", "list", "watch" ]
       resources: [ "*" ]

     - apiGroups: [ "getambassador.io" ]
       verbs: [ "update" ]
       resources: [ "mappings/status" ]

     - apiGroups: [ "networking.x-k8s.io" ]
       verbs: [ "get", "list", "watch" ]
       resources: [ "*" ]

     - apiGroups: [ "networking.internal.knative.dev" ]
       verbs: [ "get", "list", "watch" ]
       resources: [ "ingresses", "clusteringresses" ]

     - apiGroups: [ "networking.internal.knative.dev" ]
       verbs: [ "update" ]
       resources: [ "ingresses/status", "clusteringresses/status" ]

     - apiGroups: [ "extensions", "networking.k8s.io" ]
       verbs: [ "get", "list", "watch" ]
       resources: [ "ingresses", "ingressclasses" ]

     - apiGroups: [ "extensions", "networking.k8s.io" ]
       verbs: [ "update" ]
       resources: [ "ingresses/status" ]
   ---
   apiVersion: v1
   kind: Secret
   metadata:
     name: deployment-preview-apikey
     namespace: default
   type: Opaque
   stringData:
     AMBASSADOR_CLOUD_APIKEY: "{YOUR_API_KEY}"

   ```

2. Following this, you will need to deploy the iteration image together with the pod daemon, serving as a sidecar. To utilize the pod-daemon command, the environment variable `IS_POD_DAEMON` must be set to `True`. This setting is a prerequisite for activating the pod-daemon functionality.

   ```yaml
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: quote-ci
   spec:
     selector:
       matchLabels:
         run: quote-ci
     replicas: 1
     template:
       metadata:
         labels:
           run: quote-ci
       spec:
         serviceAccountName: ambassador-deploy-previews
         containers:
         # Include your application container
         # - name: your-original-application
         #   image: image-built-from-pull-request
         #   [...]
+ # Inject the pod-daemon container + # In the following example, we'll demonstrate how to integrate the pod-daemon container by intercepting the quote app + - name: pod-daemon + image: datawire/telepresence:$version$ + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 + resources: + limits: + cpu: "0.1" + memory: 100Mi + args: + - pod-daemon + - --workload-name=quote + - --workload-namespace=default + - --workload-kind=Deployment + - --port=8080 + - --http-header=test-telepresence=1 # Custom header can be specified + - --ingress-tls=false + - --ingress-port=80 + - --ingress-host=quote.default.svc.cluster.local + - --ingress-l5host=quote.default.svc.cluster.local + env: + - name: AMBASSADOR_CLOUD_APIKEY + valueFrom: + secretKeyRef: + name: deployment-preview-apikey + key: AMBASSADOR_CLOUD_APIKEY + - name: TELEPRESENCE_MANAGER_NAMESPACE + value: ambassador + - name: IS_POD_DAEMON + value: "True" + ``` + +3. The preview URL can be located within the logs of the pod daemon: + + ```bash + kubectl logs -f quote-ci-6dcc864445-x98wt -c pod-daemon + ``` \ No newline at end of file diff --git a/docs/telepresence/2.14/community.md b/docs/telepresence/2.14/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence/2.14/community.md @@ -0,0 +1,12 @@ +# Community + +## Contributor's guide +Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) +on GitHub to learn how you can help make Telepresence better. + +## Changelog +Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) +describes new features, bug fixes, and updates to each version of Telepresence. + +## Meetings +Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence/2.14/concepts/context-prop.md b/docs/telepresence/2.14/concepts/context-prop.md new file mode 100644 index 000000000..b3eb41e32 --- /dev/null +++ b/docs/telepresence/2.14/concepts/context-prop.md @@ -0,0 +1,37 @@ +# Context propagation + +**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination. + +This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them. + +Metadata propagation refers to any service or other middleware not stripping away the headers. Propagation facilitates the movement of the injected contexts between other downstream services and processes. + + +## What is distributed tracing? + +Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging. + +In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame. 
+ +An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on routes the requests follow and performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation). + + +## What are intercepts and preview URLs? + +[Intercepts](../../reference/intercepts) and [preview +URLs](../../howtos/preview-urls/) are functions of Telepresence that +enable easy local development from a remote Kubernetes cluster and +offer a preview environment for sharing and real-time collaboration. + +Telepresence uses custom HTTP headers and header propagation to +identify which traffic to intercept both for plain personal intercepts +and for personal intercepts with preview URLs; these techniques are +more commonly used for distributed tracing, so what they are being +used for is a little unorthodox, but the mechanisms for their use are +already widely deployed because of the prevalence of tracing. The +headers facilitate the smart routing of requests either to live +services in the cluster or services running locally on a developer’s +machine. The intercepted traffic can be further limited by using path +based routing. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. diff --git a/docs/telepresence/2.14/concepts/devloop.md b/docs/telepresence/2.14/concepts/devloop.md new file mode 100644 index 000000000..86aac87e2 --- /dev/null +++ b/docs/telepresence/2.14/concepts/devloop.md @@ -0,0 +1,54 @@ +--- +title: "The developer and the inner dev loop | Ambassador " +--- + +# The developer experience and the inner dev loop + +## How is the developer experience changing? + +The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + +Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + +The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. 
to make the outer loop as fast, efficient and automated as possible. + +Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop. + +Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but this adds development time to the equation. + +## What is the inner dev loop? + +The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly. + +Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control. + +In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible. + +![traditional inner dev loop](../images/trad-inner-dev-loop.png) + +## In search of lost time: How does containerization change the inner dev loop? + +The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again. + +Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps: + +* packaging code in containers +* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container) +* pushing the container to the registry +* deploying containers in Kubernetes + +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is incremented to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. 
+ + +![container inner dev loop](../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/2.14/concepts/devworkflow.md b/docs/telepresence/2.14/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/2.14/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. diff --git a/docs/telepresence/2.14/concepts/faster.md b/docs/telepresence/2.14/concepts/faster.md new file mode 100644 index 000000000..3950dce38 --- /dev/null +++ b/docs/telepresence/2.14/concepts/faster.md @@ -0,0 +1,28 @@ +--- +title: Install the Telepresence Docker extension | Ambassador +--- +# Making the remote local: Faster feedback, collaboration and debugging + +With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging. 
+ +## How should I set up a Kubernetes development environment? + +[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication. + +While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on. + +A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration. + +## What is Telepresence? + +Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies. + +## How can I get fast, efficient local development? + +The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to registry, and deploying to production. + +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP/UDP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. 
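In practice, the loop described above can be as small as two commands; a minimal sketch, where the service name and ports are placeholders:

```cli
$ telepresence connect
$ telepresence intercept my-service --port 8080:80
```

Requests to `my-service` in the cluster are then served by the process listening on local port 8080.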
diff --git a/docs/telepresence/2.14/concepts/goldenpaths.md b/docs/telepresence/2.14/concepts/goldenpaths.md new file mode 100644 index 000000000..7e5f48f34 --- /dev/null +++ b/docs/telepresence/2.14/concepts/goldenpaths.md @@ -0,0 +1,10 @@ +# Golden Paths + +A golden path is a best practice or a standardized process you should apply to Telepresence, often used to optimize productivity or quality control. It can be used as a benchmark or a reference point for measuring success and progress towards a particular goal or outcome. + +We have provided Golden Paths for multiple use cases listed below. + +1. [Intercept Specifications](../goldenpaths/specs) +2. [Using Telepresence with Docker](../goldenpaths/docker) +3. [Installing Telepresence in Team Mode](../goldenpaths/installation) +4. [Docker Compose](../goldenpaths/compose) \ No newline at end of file diff --git a/docs/telepresence/2.14/concepts/goldenpaths/compose.md b/docs/telepresence/2.14/concepts/goldenpaths/compose.md new file mode 100644 index 000000000..e3a6db407 --- /dev/null +++ b/docs/telepresence/2.14/concepts/goldenpaths/compose.md @@ -0,0 +1,63 @@ +# Telepresence with Docker Compose Golden Path + +## Why? + +When adopting Telepresence, you may be hesitant to throw away all the investment you made replicating your infrastructure with +[Docker Compose](https://docs.docker.com/compose/). + +Thankfully, it doesn't have to be this way, since you can associate the [Telepresence Specification](../specs) with [Docker mode](../docker) to integrate your Docker Compose file. + +## How? +Telepresence Intercept Specifications are integrated with Docker Compose! Let's look at an example to see how it works. + +Below is an example of an Intercept Spec and Docker Compose file that is intercepting an echo service with a custom header and being handled by a service created through Docker Compose. + +Intercept Spec: +```yaml +workloads: + - name: echo + intercepts: + - handler: echo + localport: 8080 + port: 80 + headers: + - name: "{{ .Telepresence.Username }}" + value: 1 +handlers: + - name: echo + docker: + compose: + services: + - name: echo + behavior: interceptHandler +``` + +The Docker Compose file is creating two services, a postgres database, and your local echo service. The local echo service is utilizing Docker's [watch](https://docs.docker.com/compose/file-watch/) feature to take advantage of hot reloads. + +Docker compose file: +```yaml +services: + postgres: + image: "postgres:14.1" + ports: + - "5432" + echo: + build: . + ports: + - "8080" + x-develop: + watch: + - action: rebuild + path: main.go + environment: + DATABASE_HOST: "localhost:5432" + DATABASE_PASSWORD: postgres + DEV_MODE: "true" +``` + +By combining Intercept Specifications and Docker Compose, you can intercept the traffic going to your cluster while developing on multiple local services and utilizing hot reloads. + +## Key learnings + +* Using **Docker Compose** with **Telepresence** allows you to have a **hybrid** development setup between local & remote. +* You can **reuse your existing setup** with minimum effort. diff --git a/docs/telepresence/2.14/concepts/goldenpaths/docker.md b/docs/telepresence/2.14/concepts/goldenpaths/docker.md new file mode 100644 index 000000000..863aa497a --- /dev/null +++ b/docs/telepresence/2.14/concepts/goldenpaths/docker.md @@ -0,0 +1,70 @@ +# Telepresence with Docker Golden Path + +## Why? 
It can be tedious to adopt Telepresence across your organization, since in its handiest form it requires admin access and needs to get along with any exotic networking setup that your company may have.

If Docker is already approved in your organization, this golden path is worth considering.

## How?

When using Telepresence in Docker mode, users can eliminate the need for admin access on their machines, address several networking challenges, and forego the need for third-party applications to enable volume mounts.

You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container, removing the need for root access and making Telepresence easier to adopt as an organization.

Let's illustrate with a quick demo, assuming a default Kubernetes context named default, and a simple HTTP service:

```cli
$ telepresence connect --docker
Connected to context default (https://default.cluster.bakerstreet.io)

$ docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS          PORTS                        NAMES
7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"   18 seconds ago   Up 16 seconds   127.0.0.1:58802->58802/tcp   tp-default
```

This method limits the scope of the potential networking issues, since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-` when listing your containers.

Start an intercept:

```cli
$ telepresence intercept echo-easy --port 8080:80 -n default
Using Deployment echo-easy
   Intercept name         : echo-easy-default
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/x_/4x_4pfvx2j3_94f36x551g140000gp/T/telfs-505935483
   Intercepting           : HTTP requests with headers
         'x-telepresence-intercept-id: e20f0764-7fd8-45c1-b911-b2adeee1af45:echo-easy-default'
   Preview URL            : https://gracious-ishizaka-5365.preview.edgestack.me
   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
```

Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-`, and open the preview URL to see the traffic routed to your machine.

```cli
$ docker run \
  --network=container:tp-default \
  -e PORT=8080 jmalloc/echo-server
Echo server listening on port 8080.
127.0.0.1:41500 | GET /
127.0.0.1:41512 | GET /favicon.ico
127.0.0.1:41500 | GET /
127.0.0.1:41512 | GET /favicon.ico
```

For users utilizing Docker mode in Telepresence, we strongly recommend using [Intercept Specifications](../specs) to seamlessly configure their Intercept Handler as a Docker container.

It's essential to ensure that users also open the debugging port on their container to allow them to attach their local debugger from their IDE. By leveraging Intercept Specifications and Docker mode together, users can optimize their Telepresence experience and streamline their debugging workflow.

## Key learnings

* Using the Docker mode of Telepresence **does not require root access**, and makes it **easier** to adopt Telepresence across your organization.
* It **limits the potential networking issues** you can encounter.
* It leverages **Docker** for your interceptor.
* You can use it with the [Intercept Specifications](../specs).
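As an alternative to starting the handler with a manual `docker run`, Telepresence also offers a `--docker-run` flag (see [Using Docker for intercepts](../../../reference/docker-run)) that launches the handler container and wires it into the daemon's network in one step. A sketch, reusing the `echo-easy` example above; flags after `--` are passed to `docker run`:

```cli
$ telepresence intercept echo-easy --port 8080:80 -n default \
    --docker-run -- -e PORT=8080 jmalloc/echo-server
```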
diff --git a/docs/telepresence/2.14/concepts/goldenpaths/installation.md b/docs/telepresence/2.14/concepts/goldenpaths/installation.md
new file mode 100644
index 000000000..d5c447d20
--- /dev/null
+++ b/docs/telepresence/2.14/concepts/goldenpaths/installation.md
@@ -0,0 +1,39 @@
# Traffic Manager Installation Golden Path

## Why?

Telepresence requires a Traffic Manager to be installed in your cluster to control how traffic is redirected while intercepting. The Traffic Manager can be installed in two different modes, [Single User Mode](../../modes#single-user-mode) and [Team Mode](../../modes#team-mode).

Single User Mode works well for an individual user who has autonomy within their cluster and won't impede other developers by intercepting traffic. However, this is often not the case: most developers work in a shared environment and would affect other team members by hijacking their traffic.

We recommend installing your Traffic Manager in Team Mode. This defaults every intercept created to a [Personal Intercept](../../../reference/intercepts#personal-intercept), which gives each intercept a specific HTTP header and reroutes only the traffic containing that header, making it work best in a team environment.

## How?

Installing the Traffic Manager in Team Mode is quite easy.

If you install the Traffic Manager using the Telepresence command, you can simply pass the `--team-mode` flag like so:

```cli
telepresence helm install --team-mode
```

If you use the Helm chart directly, you can just set the `mode` variable:

```cli
helm install traffic-manager datawire/telepresence --set mode=team
```

Or, if you are upgrading your Traffic Manager, you can run:

```cli
telepresence helm upgrade --team-mode
```

or, with the Helm chart directly:

```cli
helm upgrade traffic-manager datawire/telepresence --set mode=team
```

## Key Learnings

* Team mode is essential when working in a shared cluster to ensure you aren't interrupting other developers' workflows.
* You can always change the mode of your Traffic Manager while installing or upgrading.
\ No newline at end of file
diff --git a/docs/telepresence/2.14/concepts/goldenpaths/specs.md b/docs/telepresence/2.14/concepts/goldenpaths/specs.md
new file mode 100644
index 000000000..0d8e5dc30
--- /dev/null
+++ b/docs/telepresence/2.14/concepts/goldenpaths/specs.md
@@ -0,0 +1,80 @@
# Intercept Specification Golden Path

## Why?

Telepresence can be difficult to adopt organization-wide. Each developer has their own local setup, which adds many variables to running Telepresence and duplicates work amongst developers.

For these reasons, and many others, we recommend using [Intercept Specifications](../../../reference/intercepts/specs).

## How?

When using an Intercept Specification you write a YAML file, similar to a CI workflow or a Docker Compose file. An Intercept Specification enables standardization amongst your developers.

With a spec you can define the Kubernetes context to work in, the workload you want to intercept, the local intercept handler your traffic will flow to, and any pre/post requisites that are required to run your applications.

Let's look at an example:
+ +I can use the Intercept Specification below to tell Telepresence to Intercept the quote serivce with a [Personal Intercept](../../../reference/intercepts#personal-intercept), in the default namespace of my cluster `test-cluster`. I also want to start the Intercept Handler, as a Docker container, with the provided image. + +```yaml +--- +connection: + context: test-cluster +workloads: + - name: quote + namespace: default + intercepts: + - headers: + - name: test-{{ .Telepresence.Username }} + value: "{{ .Telepresence.Username }}" + localPort: 8080 + mountPoint: "false" + port: 80 + handler: quote + service: quote + previewURL: + enable: true +handlers: + - name: quote + environment: + - name: PORT + value: "8080" + docker: + image: docker.io/datawire/quote:0.5.0 +``` + +You can then run this Intercept Specification with: + +```cli +telepresence intercept run quote-spec.yaml + Intercept name : quote-default + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests with headers + 'test-user =~ user' + Preview URL : https://charming-newton-3109.preview.edgestack.me + Layer 5 Hostname : quote.default.svc.cluster.local +Intercept spec "quote-spec" started successfully, use ctrl-c to cancel. +2023/04/12 16:05:00 CONSUL_IP environment variable not found, continuing without Consul registration +2023/04/12 16:05:00 listening on :8080 +``` + +You can see that the Intercept was started, and if I check the local docker containers I can see that the Telepresence daemon is running in a container, and your Intercept Handler was successfully started. + +```cli +docker ps + +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +bdd99d244fbb datawire/quote:0.5.0 "/bin/qotm" 2 minutes ago Up 2 minutes tp-quote +5966d7099adf datawire/telepresence:2.12.1 "telepresence connec…" 2 minutes ago Up 2 minutes 127.0.0.1:58443->58443/tcp tp-test-cluster +``` + +## Key Learnings + +* Using Intercept Specification enables you to create a standardized approach for Intercepts across your Organization in an easy to share way. +* You can easily leverage Docker to remove other potential hiccups associated with networking. 
* There are many more great things you can do with an Intercept Specification; check those out [here](../../../reference/intercepts/specs).
\ No newline at end of file
diff --git a/docs/telepresence/2.14/concepts/intercepts.md b/docs/telepresence/2.14/concepts/intercepts.md
new file mode 100644
index 000000000..0a2909be2
--- /dev/null
+++ b/docs/telepresence/2.14/concepts/intercepts.md
@@ -0,0 +1,208 @@
---
title: "Types of intercepts"
description: "Short demonstration of personal vs global intercepts"
---

import React from 'react';

import Alert from '@material-ui/lab/Alert';
import AppBar from '@material-ui/core/AppBar';
import Paper from '@material-ui/core/Paper';
import Tab from '@material-ui/core/Tab';
import TabContext from '@material-ui/lab/TabContext';
import TabList from '@material-ui/lab/TabList';
import TabPanel from '@material-ui/lab/TabPanel';
import Animation from '@src/components/InterceptAnimation';

export function TabsContainer({ children, ...props }) {
    const [state, setState] = React.useState({curTab: "personal"});
    React.useEffect(() => {
        const query = new URLSearchParams(window.location.search);
        var interceptType = query.get('intercept') || "personal";
        if (state.curTab != interceptType) {
            setState({curTab: interceptType});
        }
    }, [state, setState])
    var setURL = function(newTab) {
        history.replaceState(null,null,
            `?intercept=${newTab}${window.location.hash}`,
        );
    };
    return (
+ + + {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types"> + + + + + + {children} + +
    );
};

# Types of intercepts

# No intercept

This is the normal operation of your cluster without Telepresence.

# Global intercept

**Global intercepts** replace the Kubernetes "Orders" service with the Orders service running on your laptop. The users see no change, but with all the traffic coming to your laptop, you can observe and debug with all your dev tools.

### Creating and using global intercepts

 1. Creating the intercept: Intercept your service from your CLI:

    ```shell
    telepresence intercept SERVICENAME --http-header=all
    ```

    Make sure your current kubectl context points to the target
    cluster. If your service is running in a different namespace than
    your current active context, use or change the `--namespace` flag.

 2. Using the intercept: Send requests to your service:

    All requests will be sent to the version of your service that is
    running in the local development environment.

# Personal intercept

**Personal intercepts** allow you to be selective and intercept only
some of the traffic to a service while not interfering with the rest
of the traffic. This allows you to share a cluster with others on your
team without interfering with their work.

Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas.
To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).

In the illustration above, **orange**
requests are being made by Developer 2 on their laptop and the
**green** requests are made by a teammate,
Developer 1, on a different laptop.

Each developer can intercept the Orders service for their requests only,
while sharing the rest of the development environment.

### Creating and using personal intercepts

 1. Creating the intercept: Intercept your service from your CLI:

    ```shell
    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
    ```

    We're using
    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
    header for the sake of the example, but you can use any
    `key=value` pair you want, or `--http-header=auto` to have it
    choose something automatically.

    Make sure your current kubectl context points to the target
    cluster. If your service is running in a different namespace than
    your current active context, use or change the `--namespace` flag.

 2. Using the intercept: Send requests to your service by passing the
    HTTP header:

    ```http
    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
    ```

    Need a browser extension to modify or remove HTTP request headers?
    Chrome / Firefox

 3. Using the intercept: Send requests to your service without the
    HTTP header:

    Requests without the header will be sent to the version of your
    service that is running in the cluster. This enables you to share
    the cluster with a team!

### Intercepting a specific endpoint

It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
and, contrary to the `--http-header` flag, it cannot be repeated.
The following flags are available:

| Flag                          | Meaning                                                          |
|-------------------------------|------------------------------------------------------------------|
| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                  |
| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix             |
| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression |

#### Examples:

1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":

   ```shell
   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
   ```

2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":

   ```shell
   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
   ```
   or, since `--http-header=auto` is implicit when using `--http` options, just:
   ```shell
   telepresence intercept SERVICENAME --http-path-equal=/api/version
   ```

3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":

   ```shell
   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
   ```

diff --git a/docs/telepresence/2.14/concepts/modes.md b/docs/telepresence/2.14/concepts/modes.md
new file mode 100644
index 000000000..3402f07e4
--- /dev/null
+++ b/docs/telepresence/2.14/concepts/modes.md
@@ -0,0 +1,36 @@
---
title: "Modes"
---

# Modes

A Telepresence installation happens in two locations: initially on your laptop or workstation, and then on your cluster after running `telepresence helm install`.
The main component that gets installed into the cluster is known as the Traffic Manager.
The Traffic Manager can be put either into single user mode (the default) or into team mode.
Modes give cluster admins the ability to enforce both [intercept type](../intercepts) defaults and logins across all connected users, enabling teams to collaborate and intercept without getting in each other's way.

## Single user mode

In single user mode, all intercepts will be [global intercepts](../intercepts?intercept=global) by default.
Global intercepts affect all traffic coming into the intercepted workload; this can cause issues for teams working on the same service.
While single user mode is the default, switching back from team mode is done by running:
```
telepresence helm install --single-user-mode
```

## Team mode

In team mode, all intercepts will be [personal intercepts](../intercepts?intercept=personal) by default and all intercepting users must be logged in.
Personal intercepts selectively affect HTTP traffic coming into the intercepted workload.
Being in team mode adds an additional layer of confidence for developers working on the same service, knowing their teammates won't interrupt their intercepts by mistake.
Since logins are enforced in this mode as well, you can ensure that Ambassador Cloud features, such as intercept history and saved intercepts, are being taken advantage of by everybody on your team.
To switch from single user mode to team mode, run:
```
telepresence helm install --team-mode
```

## Default intercept types based on modes
The mode of the Traffic Manager determines the default type of intercept, [personal](../intercepts?intercept=personal) vs [global](../intercepts?intercept=global).
+When in team mode, intercepts default to [personal intercepts](../intercepts?intercept=personal), and logins are enforced for non logged in users. +When in single user mode, all intercepts default to [global intercepts](../intercepts?intercept=global), regardless of login status. +![mode defaults](../images/mode-defaults.png) \ No newline at end of file diff --git a/docs/telepresence/2.14/doc-links.yml b/docs/telepresence/2.14/doc-links.yml new file mode 100644 index 000000000..b4ba6dcac --- /dev/null +++ b/docs/telepresence/2.14/doc-links.yml @@ -0,0 +1,123 @@ +- title: Quick start + link: quick-start +- title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager + link: install/manager/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Cloud Provider Prerequisites + link: install/cloud/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ +- title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: "Making the remote local: Faster feedback, collaboration and debugging" + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: Types of intercepts + link: concepts/intercepts + - title: Modes + link: concepts/modes + - title: Golden Paths + link: concepts/goldenpaths + items: + - title: Intercept Specifications + link: concepts/goldenpaths/specs + - title: Docker Mode + link: concepts/goldenpaths/docker + - title: Docker Compose integration + link: concepts/goldenpaths/compose + - title: Team Mode + link: concepts/goldenpaths/installation +- title: How do I... 
+ items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Host a cluster in a local VM + link: howtos/cluster-in-vm + - title: Send requests to an intercepted service + link: howtos/request + - title: Package and share my intercepts + link: howtos/package +- title: Telepresence with Docker + items: + - title: Telepresence for Docker Compose + link: docker/compose + - title: Telepresence for Docker Extension + link: docker/extension + - title: Telepresence in Docker Mode + link: docker/cli +- title: Telepresence for CI + items: + - title: Github Actions + link: ci/github-actions + - title: Pod Daemons + link: ci/pod-daemon +- title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Configure intercept using CLI + link: reference/intercepts/cli + - title: Configure intercept using specifications + link: reference/intercepts/specs + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd +- title: FAQs + link: faqs +- title: Troubleshooting + link: troubleshooting +- title: Community + link: community +- title: Release Notes + link: release-notes +- title: Licenses + link: licenses \ No newline at end of file diff --git a/docs/telepresence/2.14/docker/cli.md b/docs/telepresence/2.14/docker/cli.md new file mode 100644 index 000000000..6adde7c40 --- /dev/null +++ b/docs/telepresence/2.14/docker/cli.md @@ -0,0 +1,294 @@ +--- +title: "Telepresence in Docker Mode" +description: "Claim a remote demo cluster and learn about running Telepresence in Docker Mode, speeding up local development and debugging." +indexable: true +--- + +import { EmojivotoServicesList, DCPLink, Login, DemoClusterWarning } from "../../../../../src/components/Docs/Telepresence"; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; + +# Telepresence in Docker Mode + +
+

Contents

* [What is Telepresence Docker Mode?](#what-is-telepresence-docker-mode)
* [Key Benefits](#key-benefits)
* [Prerequisites](#prerequisites)
* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
* [3. Testing the fix in your local environment](#3-testing-the-fix-in-your-local-environment)
* [4. Download the demo cluster config file](#4-download-the-demo-cluster-config-file)
* [5. Enable Telepresence Docker mode](#5-enable-telepresence-docker-mode)
* [6. Set up your local development environment and make a global intercept](#6-set-up-your-local-development-environment-and-make-a-global-intercept)
* [7. Make a personal intercept](#7-make-a-personal-intercept)
+ +Welcome to the quickstart guide for Telepresence Docker mode! In this hands-on tutorial, we will explore the powerful features of Telepresence and learn how to leverage Telepresence Docker mode to enhance local development and debugging workflows. + +## What is Telepresence Docker Mode? + +Telepresence Docker Mode enables you to run a single service locally while seamlessly connecting it to a remote Kubernetes cluster. This mode enables developers to accelerate their development cycles by providing a fast and efficient way to iterate on code changes without requiring admin access on their machines. + +## Key Benefits + +When using Telepresence in Docker mode, you can enjoy the following benefits: + +1. **Simplified Development Setup**: Eliminate the need for admin access on your local machine, making it easier to set up and configure your development environment. + +2. **Efficient Networking**: Address common networking challenges by seamlessly connecting your locally running service to a remote Kubernetes cluster. This enables you to leverage the cluster's resources and dependencies while maintaining a productive local development experience. + +3. **Enhanced Debugging**: Gain the ability to debug your service in its natural environment, directly from your local development environment. This eliminates the need for complex workarounds or third-party applications to enable volume mounts or access remote resources. + +## Prerequisites + +1. [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). Kubectl is the official Kubernetes command-line tool. You will use it regularly to interact with your cluster, whether deploying applications, inspecting resources, or debugging issues. + +2. [Telepresence 2.13 or latest](../../install). Telepresence is a command-line tool that lets you run a single service locally, while connecting that service to a remote Kubernetes cluster. You can use Telepresence to speed up local development and debugging. + +3. [Docker Desktop](https://www.docker.com/get-started). Docker Desktop is a tool for building and sharing containerized applications and microservices. You'll use Docker Desktop to run a local development environment. + +Now that we have a clear understanding of Telepresence Docker mode and its benefits, let's dive into the hands-on tutorial! + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services. Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩!
+
+   Congratulations! You've successfully accessed the Emojivoto application on your remote cluster.
+
+## 3. Testing the fix in your local environment
+
+We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container.
+
+1. Download and run the image for the service locally:
+
+   ```bash
+   docker run -d --name ambassador-demo --pull always -p 8083:8083 -p 8080:8080 --rm -it datawire/demoemojivoto
+   ```
+
+   If you're using Docker Desktop on Windows, you may need to enable virtualization to run the container.
+   Make sure that ports 8080 and 8083 are free. If the Docker engine is not running, the command will fail and you will see `docker: unknown server OS` in your terminal.
+
+
+   The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+2. Now, stop the container by running the following command in your terminal:
+
+   ```bash
+   docker stop ambassador-demo
+   ```
+
+In this section of the quickstart, you ran the Emojivoto application locally. In the next section, you'll use Telepresence to connect your local development environment to the remote Kubernetes cluster.
+
+## 4. Download the demo cluster config file
+
+1. [Download your demo cluster config file](https://app.getambassador.io/cloud/demo-cluster-download-popup/config). This file contains the credentials you need to access your demo cluster.
+
+2. Export the file's location to KUBECONFIG by running this command in your terminal (on GNU/Linux or macOS):
+
+   ```bash
+   export KUBECONFIG=/path/to/kubeconfig.yaml
+   ```
+
+   On Windows:
+
+   ```bash
+   SET KUBECONFIG=/path/to/kubeconfig.yaml
+   ```
+
+   You should now be able to run `kubectl` commands against your demo cluster.
+
+3. Verify that you can access the cluster by listing the app's services:
+
+   ```
+   $ kubectl get services -n emojivoto
+   NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+   emoji-svc        ClusterIP   10.43.131.84    <none>        8080/TCP,8801/TCP   3h12m
+   voting-svc       ClusterIP   10.43.32.184    <none>        8080/TCP,8801/TCP   3h12m
+   web-svc          ClusterIP   10.43.105.110   <none>        8080/TCP            3h12m
+   web-app          ClusterIP   10.43.53.247    <none>        80/TCP              3h12m
+   web-app-canary   ClusterIP   10.43.8.90      <none>        80/TCP              3h12m
+   ```
+
+## 5. Enable Telepresence Docker mode
+
+You can add the `--docker` flag to any Telepresence command to start the daemon in a container instead. This removes the need for root access, making Telepresence easier to adopt across an organization.
+
+1. Confirm that the Telepresence CLI is installed, and that the daemons are not yet running, by checking `telepresence status`:
+
+   ```
+   $ telepresence status
+   User Daemon: Not running
+   Root Daemon: Not running
+   Ambassador Cloud:
+     Status : Logged out
+   Traffic Manager: Not connected
+   Intercept Spec: Not running
+   ```
+
+   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the `telepresence status` command.
+
+2. Log in to Ambassador Cloud:
+
+   ```
+   $ telepresence login
+   ```
+
+3. Then, install the Helm chart and quit Telepresence:
+
+   ```bash
+   telepresence helm install
+   telepresence quit -s
+   ```
+
+4. Finally, connect to the remote cluster using Docker mode:
+
+   ```
+   $ telepresence connect --docker
+   Connected to context default (https://default.cluster.bakerstreet.io)
+   ```
+
+5. Verify that you are connected to the remote cluster by listing your Docker containers:
+
+   ```
+   $ docker ps
+   CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS          PORTS   NAMES
+   7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"   18 seconds ago   Up 16 seconds
+   ```
+
+   This method limits the scope of potential networking issues, since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-<context-name>` when listing your containers.
+
+## 6. Set up your local development environment and make a global intercept
+
+Start your intercept handler by targeting the daemon container with `--network=container:tp-<context-name>`, then open the preview URL to see the traffic routed to your machine.
+
+1. Run the Docker container locally by running this command inside your local terminal. The image is the same as the one you ran earlier (in section 3), but this time you will run it with the `--network=container:tp-default` flag (`default` is the kube context name):
+
+   ```bash
+   docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
+   ```
+
+2. With Telepresence, you can create global intercepts that intercept all traffic going to a service in your cluster and route it to your local environment instead. Start a global intercept by running this command in your terminal:
+
+   ```
+   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
+   Using Deployment web
+   Intercept name         : web-emojivoto
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 127.0.0.1:8080
+   Service Port Identifier: http
+   Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
+   Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
+   Layer 5 Hostname       : edge-stack.ambassador
+   ```
+
+   Learn more about intercepts and how to use them.
+
+## 7. Make a personal intercept
+
+Personal intercepts allow you to be selective and intercept only some of the traffic to a service while not interfering with the rest of the traffic. This allows you to share a cluster with others on your team without interfering with their work.
+
+1. First, switch to team mode to create a personal intercept:
+
+   ```
+   $ telepresence helm upgrade --team-mode
+   ```
+
+2. Then, quit Telepresence and stop the Docker container to clean up your environment:
+
+   ```bash
+   telepresence quit -s
+   docker stop ambassador-demo
+   ```
+
+3. Connect with Telepresence Docker mode again:
+
+   ```
+   $ telepresence connect --docker
+   ```
+
+4. Run the Docker container again:
+
+   ```
+   $ docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
+   ```
+
+5. Create a personal intercept by running this command in your terminal:
+
+   ```
+   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
+   Using Deployment web
+   Intercept name         : web-emojivoto
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 127.0.0.1:8080
+   Service Port Identifier: http
+   Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
+   Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
+   Layer 5 Hostname       : edge-stack.ambassador
+   ```
+
+6. Open the preview URL to see the traffic routed to your machine.
+
+7. To stop the intercept, run this command in your terminal:
+
+   ```
+   $ telepresence leave web-emojivoto
+   ```
+
+## What's Next?
+
+You've intercepted a service in one of our demo clusters. Now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
+
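+When you're done experimenting, you can tear down everything this tutorial started, using the same commands introduced in the steps above (assuming the intercept and container names used there):
+
+```bash
+telepresence leave web-emojivoto   # remove the intercept, if it is still active
+telepresence quit -s               # stop the local Telepresence daemons
+docker stop ambassador-demo        # stop the local demo container
+```
+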
\ No newline at end of file
diff --git a/docs/telepresence/2.14/docker/compose.md b/docs/telepresence/2.14/docker/compose.md
new file mode 100644
index 000000000..45de52d2e
--- /dev/null
+++ b/docs/telepresence/2.14/docker/compose.md
@@ -0,0 +1,117 @@
+---
+title: "Telepresence for Docker Compose"
+description: "Learn about how to use Docker Compose with Telepresence"
+indexable: true
+---
+
+# Telepresence for Docker Compose
+
+The [Intercept Specification](../../reference/intercepts/specs) can contain an intercept handler that in turn references (or embeds) a [docker compose](../../reference/intercepts/specs#compose) specification. The docker compose services will then be used when handling the intercepted traffic.
+
+The intended user for the docker compose integration runs an existing compose specification locally on a workstation but wants to try it out "in the cluster" by intercepting cluster services. This is challenging, because the cluster's network, the intercepted pod's environment and volume mounts, and which of the services in the compose file to actually redirect traffic to, are not known to docker compose. In fact, the environment and volume mounts are not known until the actual intercept is activated. Telepresence helps with all of this by using an ephemeral and modified copy of the compose file that it creates when the intercept starts. The modification steps are described below.
+
+## Intended service behavior
+
+The user starts by declaring how each service in the docker compose spec is intended to behave. These intentions can be declared directly in the Intercept spec so that the docker compose spec is left untouched, or they can be added to the docker compose spec in the form of `x-telepresence` extensions. This is explained ([in detail](../../reference/intercepts/specs#service)) in the reference.
+
+The intended behavior can be one of `interceptHandler`, `remote`, or `local`, where `local` is the default that applies to all services that have no intended behavior specified.
+
+### The interceptHandler behavior
+
+A compose service intended to have the `interceptHandler` behavior will:
+
+- handle traffic from the intercepted pod
+- remotely mount the volumes of the intercepted pod
+- have access to the environment variables of the intercepted pod
+
+This means that Telepresence will:
+
+- modify the `network_mode` of the compose service so that it shares the network of the containerized Telepresence daemon.
+- modify the `environment` of the service to include the environment variables exposed by the intercepted pod.
+- create volumes that correspond to the volumes of the intercepted pod and replace volumes on the compose service that have overlapping targets.
+- delete any networks from the service and instead attach those networks to the daemon.
+- delete any exposed ports and instead expose them using the `telepresence` network.
+
+A docker compose service that originally looked like this:
+
+```yaml
+services:
+  echo:
+    environment:
+      - PORT=8088
+      - MODE=local
+    build: .
+    ports:
+      - "8088:8088"
+    volumes:
+      - local-secrets:/var/run/secrets/kubernetes.io/serviceaccount:ro
+    networks:
+      - green
+```
+
+when acting as an `interceptHandler` for the `echo-server` service, will instead look something like this:
+
+```yaml
+services:
+  echo:
+    build: .
+    environment:
+      - A_TELEPRESENCE_MOUNTS=/var/run/secrets/kubernetes.io/serviceaccount
+      # ... other environment variables from the pod left out for brevity.
+      - PORT=8088
+      - MODE=local
+    network_mode: container:tp-minikube
+    volumes:
+      - echo-server-0:/var/run/secrets/kubernetes.io/serviceaccount
+```
+
+and Telepresence will also have added this to the compose file:
+
+```yaml
+volumes:
+  echo-server-0:
+    name: echo-server-0
+    driver: datawire/telemount:amd64
+    driver_opts:
+      container: echo-server
+      dir: /var/run/secrets/kubernetes.io/serviceaccount
+      host: 192.168.208.2
+      port: "34439"
+```
+
+### The remote behavior
+
+A compose service intended to have the `remote` behavior will no longer run locally. Telepresence will instead:
+
+- Remove the service from the docker compose spec.
+- Reassign any `depends_on` for that service to what the service in turn `depends_on`.
+- Inform the containerized Telepresence daemon about the `mapping` that was declared in the service intent (if any).
+
+### The local behavior
+
+A compose service intended to have the `local` behavior is more or less left untouched. If it has a `depends_on` on a service intended to have the `remote` behavior, that dependency is swapped out for the `depends_on` entries of that remote service.
+
+## Other modifications
+
+### The telepresence network
+
+The default network of the docker compose file will be replaced with the `telepresence` network. This network enables port access on the local host.
+
+```yaml
+networks:
+  default:
+    name: telepresence
+    external: true
+  green:
+    name: echo_green
+```
+
+### Auto-detection of watcher
+
+Telepresence will check if the docker compose file contains a [watch](https://docs.docker.com/compose/file-watch/) declaration for hot-deploy, and will automatically start a `docker compose alpha watch` when that is the case. This means that a modified intercept handler will be redeployed instantly, even though the code runs in a container, and the changes will be visible using a preview URL.
diff --git a/docs/telepresence/2.14/docker/extension.md b/docs/telepresence/2.14/docker/extension.md
new file mode 100644
index 000000000..cc1e018d4
--- /dev/null
+++ b/docs/telepresence/2.14/docker/extension.md
@@ -0,0 +1,73 @@
+---
+title: "Telepresence for Docker Extension"
+description: "Learn about the Telepresence Docker Extension."
+indexable: true
+---
+# Telepresence for Docker Extension
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to containers.
+
+## Quick Start
+
+This Quick Start guide will walk you through creating your first intercept in the Telepresence extension in Docker Desktop.
+
+## Connect to Ambassador Cloud through the Telepresence Docker extension.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. You'll be redirected to Ambassador Cloud for login, where you can authenticate with a **Docker**, Google, GitHub or GitLab account.

+ +

+ +## Create an Intercept from a Kubernetes service + + 1. Select the Kubernetes context you would like to connect to. +

+ +

+
+ 2. Once Telepresence is connected to your cluster, you will see a list of services you can connect to. If you don't see the service you want to intercept, you may need to change namespaces in the dropdown menu.
+

+ +

+
+ 3. Click the **Intercept** button on the service you want to intercept. You will see a popup that helps you configure the intercept and its intercept handlers.
+

+ +

+
+ 4. Telepresence will start an intercept on the service and connect it to your local container on the designated port. You will then be redirected to a management page where you can view your active intercepts.
+

+ +

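+Under the hood, the extension drives the same daemons as the Telepresence CLI, so the flow above corresponds roughly to the following commands (a sketch for orientation only; the extension runs the equivalent steps for you):
+
+```console
+$ telepresence connect
+$ telepresence list --namespace <namespace>
+$ telepresence intercept <service-name> --port <local-port>
+```
+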
+ + +## Create an Intercept from an Intercept Specification. + + 1. Click the dropdown on the **Connect** button to activate the option to upload an intercept specification. +

+ +

+ + 2. Click **Upload Telepresence Spec** to run your intercept specification. +

+ +

+ + 3. Once your specification has been uploaded, the extension will process it and redirect you to the running intercepts page after it has been started. + + 4. The intercept information now shows up in the Docker Telepresence extension. You can now [test your code](#test-your-code). +

+ +

+
+ For more information on Intercept Specifications, see the docs here.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and restart your intercept.
+
+Click `view` next to your preview URL to open a browser tab and see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.14/faqs.md b/docs/telepresence/2.14/faqs.md
new file mode 100644
index 000000000..092f11d6a
--- /dev/null
+++ b/docs/telepresence/2.14/faqs.md
@@ -0,0 +1,126 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access with additional developers or stakeholders to the application via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple Silicon), Linux, and Windows.
+
+**What protocols can be intercepted by Telepresence?**
+
+Both TCP and UDP are supported for global intercepts.
+
+Personal intercepts require HTTP. All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
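+
+For example, a minimal sketch of that workflow (the service name, port, and file names here are illustrative):
+
+```console
+$ telepresence intercept example-service --port 8080 --env-file example-service.env
+$ docker run --env-file example-service.env example-service-image
+```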
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means that if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+During first use, Telepresence will discover your ingress setup, make a best guess at the correct values, and prompt you to confirm or update them.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn't need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon unless it runs in a Docker container?**
+
+The local daemon needs sudo to create a VIF (Virtual Network Interface) for outbound routing and DNS. Root access is needed to do that unless the daemon runs in a Docker container.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+When running in `team` mode, a single `ambassador-agent` service is deployed in the `ambassador` namespace. It communicates with the cloud to keep your list of services up to date.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence helm uninstall` to remove everything from the cluster, including the `traffic-manager` and the `ambassador-agent` services, and all the `traffic-agent` containers injected into each pod being intercepted.
+
+Also run `telepresence quit -s` to stop the local daemon running.
+
+**What language is Telepresence written in?**
+
+All components of Telepresence, both the local application and the cluster components, are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using the `kubectl port-forward` machinery (though without actually spawning a separate program) to establish a TLS-encrypted connection to the Telepresence Traffic Manager in the cluster, and running Telepresence's custom VPN protocol over that connection.
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+A large part of it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.14/howtos/cluster-in-vm.md b/docs/telepresence/2.14/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence/2.14/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF.
This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running your cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents.
Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. + network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: 
"KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.14/howtos/intercepts.md b/docs/telepresence/2.14/howtos/intercepts.md new file mode 100644 index 000000000..5ac4b41bc --- /dev/null +++ b/docs/telepresence/2.14/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created and intercept, you can code and debug your associated service locally. + +For a detailed walk-though on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept --port [:] --env-file `. + * For `--port`: specify the port the local instance of your service is running on. 
If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
diff --git a/docs/telepresence/2.14/howtos/outbound.md b/docs/telepresence/2.14/howtos/outbound.md
new file mode 100644
index 000000000..a163cb339
--- /dev/null
+++ b/docs/telepresence/2.14/howtos/outbound.md
@@ -0,0 +1,89 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.3.7 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://<cluster-public-IP>)
+   ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.3.7 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.3.7 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://<cluster-public-IP>
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+     <meta charset="UTF-8">
+     <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (`<service name>.<namespace>`) before you start an intercept. After you start an intercept, only `<service name>` is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma-separated list of namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, so that the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin; the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxysubnets) for more details.
diff --git a/docs/telepresence/2.14/howtos/package.md b/docs/telepresence/2.14/howtos/package.md
new file mode 100644
index 000000000..2baa7a66c
--- /dev/null
+++ b/docs/telepresence/2.14/howtos/package.md
@@ -0,0 +1,178 @@
+---
+title: "How to package and share my intercepts"
+description: "Use telepresence intercept specs to enable your teammates faster"
+---
+# Introduction
+
+While telepresence takes care of the interception part of your setup, you usually still need to script some boilerplate code to run the local part (the handler) of your code.
+
+Classic solutions rely on a Makefile or bash scripts, but these become cumbersome to maintain.
+
+Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): they allow you to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic, and the actual intercept. Telepresence can then run the specification.
+
+# Getting started
+
+You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification.
+
+Once you have a Kubernetes cluster, you can apply this configuration to start an echo-easy deployment that we can then use for our Intercept Specification:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: "echo-easy"
+spec:
+  type: ClusterIP
+  selector:
+    service: echo-easy
+  ports:
+    - name: proxied
+      port: 80
+      targetPort: http
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "echo-easy"
+  labels:
+    service: echo-easy
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: echo-easy
+  template:
+    metadata:
+      labels:
+        service: echo-easy
+    spec:
+      containers:
+        - name: echo-easy
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+              name: http
+          resources:
+            limits:
+              cpu: 50m
+              memory: 128Mi
+```
+
+You can write this manifest to a local `echo-server.yaml` file with a `cat > echo-server.yaml <<EOF … EOF` heredoc and apply it with `kubectl`, and then create your Intercept Specification in a local `my-intercept.yaml` file the same way.
+
+3. Start the intercept with `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file> --mechanism http` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+   * You can remove the **--mechanism http** flag if you have your traffic-manager set to *team-mode*.
+
+4. Answer the question prompts.
+   The example below shows a preview URL for `example-service` which listens on port 8080.
The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+   $ telepresence intercept example-service --mechanism http --ingress-host ambassador.ambassador --ingress-port 80 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service")
+       Preview URL            : https://<random-domain-name>.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+   Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.
+
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+
+7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending them your preview URL. Once your teammate logs in, they must select the same identity provider and org that you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization:
+log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL.
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
+2. Click on the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove <intercept-name>`
diff --git a/docs/telepresence/2.14/howtos/request.md b/docs/telepresence/2.14/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/2.14/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
+ 2. Navigate to the desired service's Intercepts page.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
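+
+For example, the **CURL** tab produces a command of roughly this shape for a personal intercept (a sketch; the header value comes from your own intercept, as shown in the `telepresence intercept` output):
+
+```console
+$ curl -H 'x-telepresence-intercept-id: <intercept-id>:<intercept-name>' http://<ingress-host>/<path>
+```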
diff --git a/docs/telepresence/2.14/images/container-inner-dev-loop.png b/docs/telepresence/2.14/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.14/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.14/images/daemon-in-container.png b/docs/telepresence/2.14/images/daemon-in-container.png new file mode 100644 index 000000000..ed02e8386 Binary files /dev/null and b/docs/telepresence/2.14/images/daemon-in-container.png differ diff --git a/docs/telepresence/2.14/images/docker-extension-intercept.png b/docs/telepresence/2.14/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence/2.14/images/docker-extension-intercept.png differ diff --git a/docs/telepresence/2.14/images/docker-header-containers.png b/docs/telepresence/2.14/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.14/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_button_drop_down.png b/docs/telepresence/2.14/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..775323e56 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_connect_to_cluster.png b/docs/telepresence/2.14/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..eb95e5180 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_login.png b/docs/telepresence/2.14/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_login.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_running_intercepts_page.png b/docs/telepresence/2.14/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..7870e2691 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_start_intercept_page.png b/docs/telepresence/2.14/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..6788994e3 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_start_intercept_popup.png b/docs/telepresence/2.14/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..12839b0e5 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence/2.14/images/docker_extension_upload_spec_button.png b/docs/telepresence/2.14/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files /dev/null and b/docs/telepresence/2.14/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence/2.14/images/github-login.png b/docs/telepresence/2.14/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.14/images/github-login.png differ diff --git a/docs/telepresence/2.14/images/logo.png 
b/docs/telepresence/2.14/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.14/images/logo.png differ diff --git a/docs/telepresence/2.14/images/mode-defaults.png b/docs/telepresence/2.14/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/2.14/images/mode-defaults.png differ diff --git a/docs/telepresence/2.14/images/pod-daemon-overview.png b/docs/telepresence/2.14/images/pod-daemon-overview.png new file mode 100644 index 000000000..effb05314 Binary files /dev/null and b/docs/telepresence/2.14/images/pod-daemon-overview.png differ diff --git a/docs/telepresence/2.14/images/split-tunnel.png b/docs/telepresence/2.14/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.14/images/split-tunnel.png differ diff --git a/docs/telepresence/2.14/images/tracing.png b/docs/telepresence/2.14/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.14/images/tracing.png differ diff --git a/docs/telepresence/2.14/images/trad-inner-dev-loop.png b/docs/telepresence/2.14/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.14/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.14/images/tunnelblick.png b/docs/telepresence/2.14/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.14/images/tunnelblick.png differ diff --git a/docs/telepresence/2.14/images/vpn-dns.png b/docs/telepresence/2.14/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.14/images/vpn-dns.png differ diff --git a/docs/telepresence/2.14/images/vpn-k8s-config.jpg b/docs/telepresence/2.14/images/vpn-k8s-config.jpg new file mode 100644 index 000000000..66116e41d Binary files /dev/null and b/docs/telepresence/2.14/images/vpn-k8s-config.jpg differ diff --git a/docs/telepresence/2.14/images/vpn-routing.jpg b/docs/telepresence/2.14/images/vpn-routing.jpg new file mode 100644 index 000000000..18410dd48 Binary files /dev/null and b/docs/telepresence/2.14/images/vpn-routing.jpg differ diff --git a/docs/telepresence/2.14/images/vpn-with-tele.jpg b/docs/telepresence/2.14/images/vpn-with-tele.jpg new file mode 100644 index 000000000..843b253e9 Binary files /dev/null and b/docs/telepresence/2.14/images/vpn-with-tele.jpg differ diff --git a/docs/telepresence/2.14/install/cloud.md b/docs/telepresence/2.14/install/cloud.md new file mode 100644 index 000000000..bf8c80669 --- /dev/null +++ b/docs/telepresence/2.14/install/cloud.md @@ -0,0 +1,63 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. 
+For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:
+
+```bash
+$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
+  masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28
+
+$ gcloud compute firewall-rules list \
+    --filter 'name~^gke-tele-webhook-gke' \
+    --format 'table(
+        name,
+        network,
+        direction,
+        sourceRanges.list():label=SRC_RANGES,
+        allowed[].map().firewall_rule().list():label=ALLOW,
+        targetTags.list():label=TARGET_TAGS
+    )'
+
+NAME                                  NETWORK           DIRECTION  SRC_RANGES     ALLOW                         TARGET_TAGS
+gke-tele-webhook-gke-33fa1791-all     tele-webhook-net  INGRESS    10.40.0.0/14   esp,ah,sctp,tcp,udp,icmp      gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-master  tele-webhook-net  INGRESS    172.16.0.0/28  tcp:10250,tcp:443             gke-tele-webhook-gke-33fa1791-node
+gke-tele-webhook-gke-33fa1791-vms     tele-webhook-net  INGRESS    10.128.0.0/9   icmp,tcp:1-65535,udp:1-65535  gke-tele-webhook-gke-33fa1791-node
+# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node
+
+$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
+  --action ALLOW \
+  --direction INGRESS \
+  --source-ranges 172.16.0.0/28 \
+  --rules tcp:8443 \
+  --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
+Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
+Creating firewall...done.
+NAME                          NETWORK           DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
+gke-tele-webhook-gke-webhook  tele-webhook-net  INGRESS    1000      tcp:8443        False
+```
+
+### GKE Authentication Plugin
+
+Starting with Kubernetes version 1.26, GKE requires the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
+You will need to install this plugin to use Telepresence with Docker while using GKE.
+
+If you are using the [Telepresence Docker Extension](../../docker/extension), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path. If you did not install the plugin with Homebrew, your file may contain `command: gke-gcloud-auth-plugin`; replace this with the full path to the binary.
+To check, open your kubeconfig file and look for the `command` under the `users` entry for your GKE cluster. With a Homebrew install it looks like this:
+`command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud`.
+
+## EKS
+
+### EKS Authentication Plugin
+
+If you are using a version of the AWS CLI earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html).
+You will need this plugin to use Telepresence with Docker while using EKS.
+
+If you are using the [Telepresence Docker Extension](../../docker/extension), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path instead of a relative one.
+To check, open your kubeconfig file and look for the `command` under the `users` entry for your EKS cluster. With a Homebrew install it looks like this:
+`command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator`.
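+
+For illustration, here is a minimal sketch of the kubeconfig `users` entry described above, with `command` set to an absolute path (the user name is a placeholder, and the exact path depends on how you installed the plugin):
+
+```yaml
+users:
+- name: my-eks-cluster-user   # placeholder; locate the entry for your cluster
+  user:
+    exec:
+      # Leave apiVersion, args, and the other exec fields as your provider
+      # generated them; only the command needs to be an absolute path
+      # rather than a bare binary name:
+      command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator
+```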
\ No newline at end of file diff --git a/docs/telepresence/2.14/install/helm.md b/docs/telepresence/2.14/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.14/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. 
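+
+For instance, a minimal sketch of pinning the chart at install time (the version shown is illustrative; match it to your CLI version):
+
+```shell
+helm install traffic-manager --namespace ambassador datawire/telepresence --version v2.4.0
+```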
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace and you just wish to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It does this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To resolve this error, remove the overlap by dropping `staging` from either the first install or the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook uses the name label to find namespaces to operate on.
+
+**NOTE** This labeling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
 diff --git a/docs/telepresence/2.14/install/index.md b/docs/telepresence/2.14/install/index.md new file mode 100644 index 000000000..d7a5642ed --- /dev/null +++ b/docs/telepresence/2.14/install/index.md @@ -0,0 +1,157 @@ +import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
+
 diff --git a/docs/telepresence/2.14/install/manager.md b/docs/telepresence/2.14/install/manager.md new file mode 100644 index 000000000..4efdc3c69 --- /dev/null +++ b/docs/telepresence/2.14/install/manager.md @@ -0,0 +1,85 @@ +# Install/Uninstall the Traffic Manager
+
+Telepresence uses a traffic manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the traffic manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The Telepresence CLI can install the Traffic Manager for you. The basic install deploys the same Traffic Manager version as the client being used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a `values.yaml` file with your config values.
+
+2. Run the install command with the `--values` flag set to the path of your values file:
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag:
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+
+## Uninstall
+
+The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager`, you can add the flag `--set ambassador-agent.enabled=false` to exclude the agent. Emissary and Edge Stack both already include this agent within their deployments.
+
+If your namespace runs with tight security parameters, you may need to set a few additional parameters: `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=<api-key>` (replace `<api-key>` with your API key).
+The [API Key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override an API key given via Helm.
+
+### Creating a secret manually
+The Ambassador Agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself (replace the angle-bracketed placeholders below); an API key provided this way will always be used.
+
+ ```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <name>-agent-cloud-token
+  namespace: <namespace>
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <base64-encoded-api-key>
+EOF
+ ```
\ No newline at end of file
 diff --git a/docs/telepresence/2.14/install/migrate-from-legacy.md b/docs/telepresence/2.14/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.14/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop.
We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command maps to and automatically
+run it. So you can get started with Telepresence today using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                       |
+|---------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                       | intercept $workload                        |
+| --expose localPort[:remotePort]                   | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell           | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd            | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd     | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                       | connect -- bash                            |
+| --run $cmd                                        | connect -- $cmd                            |
+| --env-file,--env-json                             | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                             | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                            | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that that flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove the agents from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
\ No newline at end of file
 diff --git a/docs/telepresence/2.14/install/upgrade.md b/docs/telepresence/2.14/install/upgrade.md new file mode 100644 index 000000000..34385935c --- /dev/null +++ b/docs/telepresence/2.14/install/upgrade.md @@ -0,0 +1,83 @@ +---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
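+
+For example, a minimal pre-upgrade sequence (a sketch built from the commands in these docs; `telepresence version` prints the currently installed client version):
+
+```shell
+# See which version you are currently running:
+telepresence version
+
+# Stop any live Telepresence processes before replacing the binary
+# (use `telepresence quit -ur` instead on versions older than 2.8.0):
+telepresence quit -s
+```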
+
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+The [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) can upgrade Telepresence, or if you installed it with PowerShell:
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster.
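+
+If you also want the in-cluster Traffic Manager to match the freshly upgraded CLI, the embedded chart can do that; this is the same command described on the Traffic Manager page:
+
+```shell
+# Upgrade the Traffic Manager to the chart version embedded in this CLI:
+telepresence helm install --upgrade
+```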
diff --git a/docs/telepresence/2.14/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.14/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+      null /* Landing page content (markup omitted): a "Telepresence" hero
+        reading "Set up your ideal development environment for Kubernetes in
+        seconds. Accelerate your inner development loop with hot reload using
+        your existing IDE and workflow."; a "Set Up Telepresence with
+        Ambassador Cloud" card ("Seamlessly integrate Telepresence into your
+        existing Kubernetes environment by following our 3-step setup guide.")
+        with a "Get Started" button linking to getStartedUrl and a
+        "Do it Yourself" link to install Telepresence and manually connect to
+        your Kubernetes workloads; and a "What Can Telepresence Do for You?"
+        card ("Telepresence gives Kubernetes application developers: instant
+        feedback loops, remote development environments, access to your
+        favorite local tools, easy collaborative development with teammates")
+        with a "LEARN MORE" link. */
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.14/quick-start/demo-node.md b/docs/telepresence/2.14/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+
+**Contents**
+
+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+    Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+    Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works in the remote environment!
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs enable this easy collaboration.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+Now you're able to share the fix in your local environment with your team!
+
+    To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
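+
+If you want to double-check the intercept at any point, the CLI can report on it. A quick sketch (run inside the same Docker container that created the intercept):
+
+```shell
+# List workloads in the current namespace and show which ones are intercepted:
+telepresence list
+
+# Show the daemon and cluster connection status:
+telepresence status
+```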
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.14/quick-start/demo-react.md b/docs/telepresence/2.14/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.14/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+
+**Contents**
+
+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes, you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. 
List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+ +The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed. + +## 5. Run a service on your laptop + +Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service. + +1. **In a new terminal window**, change into the repo directory and build the application: + + `cd /emojivoto` + `make web-app-local` + + ``` + $ make web-app-local + + ... + webpack 5.34.0 compiled successfully in 4326 ms + ✨ Done in 5.38s. + ``` + +2. Change into the service's code directory and start the server: + + `cd emojivoto-web-app` + `yarn webpack serve` + + ``` + $ yarn webpack serve + + ... + ℹ 「wds」: Project is running at http://localhost:8080/ + ... + ℹ 「wdm」: Compiled successfully. + ``` + +4. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster. + + + Victory, your local React server is running a-ok! + + +## 6. Make a code change +We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩. + +1. In the terminal running webpack, stop the server with `Ctrl+c`. + +1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149). + +1. Run webpack to fully recompile the code then start the server again: + + `yarn webpack` + `yarn webpack serve` + +1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience. + +## 7. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead. + + + This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only will apply to that terminal session. + + +1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port: +`telepresence intercept web-app --namespace emojivoto --port 8080` + + ``` + $ telepresence intercept web-app --namespace emojivoto --port 8080 + + Using deployment web-app + intercepted + Intercept name: web-app-emojivoto + State : ACTIVE + ... + ``` + +2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user. + + + The web-app Deployment is being intercepted and rerouted to the server on your laptop! + + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.14/quick-start/index.md b/docs/telepresence/2.14/quick-start/index.md new file mode 100644 index 000000000..c0157b6c1 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/index.md @@ -0,0 +1,8 @@ +--- +title: Telepresence Quick Start +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.14/quick-start/qs-cards.js b/docs/telepresence/2.14/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+      null /* Card grid content (markup omitted): three linked cards.
+        "Collaborating": use preview URLs to collaborate with your colleagues
+        and others outside of your organization. "Outbound Sessions": while
+        connected to the cluster, your laptop can interact with services as if
+        it was another pod in the cluster. "FAQs": learn more about use cases
+        and the technical implementation of Telepresence. */
+ ); +} diff --git a/docs/telepresence/2.14/quick-start/qs-go.md b/docs/telepresence/2.14/quick-start/qs-go.md new file mode 100644 index 000000000..f3ef47199 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-go.md @@ -0,0 +1,398 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+
+**Contents**
+
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However if you'd like to setup Telepresence using Powershell, you can run these commands: + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. 
Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server as you edit code later in this guide. Confirm it is installed by running:
+ `go get github.com/pilu/fresh`
+ Then start the Go server:
+ `$GOPATH/bin/fresh`
+
+ ```
+ $ go get github.com/pilu/fresh
+
+ $ $GOPATH/bin/fresh
+
+ ...
+ 10:23:41 app | Welcome to the DataProcessingGoService!
+ ```
+
+
+ Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+ ```
+ $ curl localhost:3000/color
+
+ "blue"
+ ```
+
+
+ Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+ ```
+ $ telepresence intercept dataprocessingservice --port 3000
+
+ Using Deployment dataprocessingservice
+ intercepted
+ Intercept name: dataprocessingservice
+ State : ACTIVE
+ Workload kind : Deployment
+ Destination : 127.0.0.1:3000
+ Intercepting : all TCP connections
+ ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require building a container, pushing it to a registry, and redeploying.
+
+ With Telepresence, these changes happen instantly. +
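+
+To make the fast loop concrete, it helps to see the shape of the handler you just edited. The following is a minimal, hypothetical sketch of a `/color` endpoint in Go — an illustration of the pattern, not the actual edgey-corp-go source, which may be structured differently:
+
+```go
+// Hypothetical sketch of a /color endpoint; not the actual
+// edgey-corp-go source. Fresh restarts this process on every save.
+package main
+
+import (
+	"encoding/json"
+	"log"
+	"net/http"
+)
+
+// The one-word edit from step 1 above happens here.
+var color string = "orange"
+
+func main() {
+	http.HandleFunc("/color", func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		// Encodes the value as a JSON string, matching the quoted
+		// "blue"/"orange" you saw from curl localhost:3000/color.
+		json.NewEncoder(w).Encode(color)
+	})
+	log.Fatal(http.ListenAndServe(":3000", nil))
+}
+```
+
+Because the intercept forwards the cluster's traffic to `127.0.0.1:3000`, the frontend picks up the new value as soon as the local process restarts — no image build, registry push, or redeploy in the loop.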
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.14/quick-start/qs-java.md b/docs/telepresence/2.14/quick-start/qs-java.md new file mode 100644 index 000000000..57365ca58 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-java.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Java application](#3-install-a-sample-java-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#whats-next)
+
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
+and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+ [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+ [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster, preferably an empty test cluster. This
+document uses `kubectl` in all example commands, but OpenShift
+users should have no problem substituting in the `oc` command instead.
+
+
+ Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here.
+
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+## 1. Install the Telepresence CLI
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 script to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. 
Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require building a container, pushing it to a registry, and redeploying.
+
+ With Telepresence, these changes happen instantly. +
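+
+To see why editing `application.properties` changes the response, here is a hypothetical sketch of what a Spring Boot `/color` controller could look like — an illustration only, not the actual edgey-corp-java source:
+
+```java
+// Hypothetical sketch of a /color endpoint; not the actual
+// edgey-corp-java source. Spring reads app.default.color from
+// application.properties once at startup, which is why the guide
+// has you stop and restart the server rather than hot reloading.
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.RestController;
+
+@RestController
+public class ColorController {
+
+    @Value("${app.default.color}")
+    private String color;
+
+    @GetMapping("/color")
+    public String getColor() {
+        // Quoted so curl prints a JSON string such as "blue".
+        return "\"" + color + "\"";
+    }
+}
+```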
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.14/quick-start/qs-node.md b/docs/telepresence/2.14/quick-start/qs-node.md new file mode 100644 index 000000000..c6b82bd17 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-node.md @@ -0,0 +1,386 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#whats-next)
+
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
+and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+ [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+ [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster, preferably an empty test cluster. This
+document uses `kubectl` in all example commands, but OpenShift
+users should have no problem substituting in the `oc` command instead.
+
+
+ Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here.
+
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+## 1. Install the Telepresence CLI
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 script to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. 
Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+
+ Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed.
+
+
+ ```
+ $ curl -ik https://kubernetes.default
+
+ HTTP/1.1 401 Unauthorized
+ Cache-Control: no-cache, private
+ Content-Type: application/json
+ ...
+
+ ```
+
+ The 401 response is expected. What's important is that you were able to contact the API.
+
+
+
+ Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+
+## 3. Install a sample Node.js application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+ While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java, Python using Flask, and Python using FastAPI if you prefer.
+
+
+1. Start by installing a sample application that consists of multiple services:
+`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml`
+
+ ```
+ $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml
+
+ deployment.apps/dataprocessingservice created
+ service/dataprocessingservice created
+ ...
+
+ ```
+
+2. Give your cluster a few moments to deploy the sample application.
+
+ Use `kubectl get pods` to check the status of your pods:
+
+ ```
+ $ kubectl get pods
+
+ NAME                                     READY   STATUS    RESTARTS   AGE
+ verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s
+ verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s
+ dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s
+ ```
+
+3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080).
+
+4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.
+
+
+ Congratulations, you can now access services running in your cluster by name from your laptop!
+
+
+## 4. Set up a local development environment
+You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green.
+
+
+ Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go.
+
+
+1. Clone the web app’s GitHub repo:
+`git clone https://github.com/datawire/edgey-corp-nodejs.git`
+
+ ```
+ $ git clone https://github.com/datawire/edgey-corp-nodejs.git
+
+ Cloning into 'edgey-corp-nodejs'...
+ remote: Enumerating objects: 441, done.
+ ...
+ ```
+
+2. Change into the repo directory, then into DataProcessingService:
+`cd edgey-corp-nodejs/DataProcessingService/`
+
+3. Install the dependencies and start the Node server:
+`npm install && npm start`
+
+ ```
+ $ npm install && npm start
+
+ ...
+ Welcome to the DataProcessingService!
+ { _: [] }
+ Server running on port 3000
+ ```
+
+
+ Install Node.js from here if needed.
+
+
+4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require building a container, pushing it to a registry, and redeploying.
+
+ With Telepresence, these changes happen instantly. +
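+
+If you want to picture what that one-line edit in `app.js` is doing, here is a hypothetical Express-style sketch of a `/color` endpoint — an illustration of the pattern, not the actual edgey-corp-nodejs source, which may use a different framework:
+
+```js
+// Hypothetical sketch of a /color endpoint; not the actual
+// edgey-corp-nodejs source.
+const express = require('express');
+
+const app = express();
+let color = 'orange'; // the one-word edit from step 1 above
+
+app.get('/color', (req, res) => {
+  // res.json serializes the value, so curl prints the quoted
+  // JSON string "orange".
+  res.json(color);
+});
+
+app.listen(3000, () => console.log('Server running on port 3000'));
+```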
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.14/quick-start/qs-python-fastapi.md b/docs/telepresence/2.14/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..c2594fec8 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-python-fastapi.md @@ -0,0 +1,383 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Python application](#3-install-a-sample-python-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#whats-next)
+
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
+and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+ [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+ [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster, preferably an empty test cluster. This
+document uses `kubectl` in all example commands, but OpenShift
+users should have no problem substituting in the `oc` command instead.
+
+
+ Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here.
+
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+## 1. Install the Telepresence CLI
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 script to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. 
Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+`pip install fastapi uvicorn requests && python app.py`
+FastAPI requires Python 3; if `python` and `pip` on your system point to Python 2.x, use: `pip3 install fastapi uvicorn requests && python3 app.py`
+
+ ```
+ $ pip install fastapi uvicorn requests && python app.py
+
+ Collecting fastapi
+ ...
+ Application startup complete.
+
+ ```
+
+ Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+ ```
+ $ curl localhost:3000/color
+
+ "blue"
+ ```
+
+
+ Victory, your local service is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+ ```
+ $ telepresence intercept dataprocessingservice --port 3000
+
+ Using Deployment dataprocessingservice
+ intercepted
+ Intercept name: dataprocessingservice
+ State : ACTIVE
+ Workload kind : Deployment
+ Destination : 127.0.0.1:3000
+ Intercepting : all TCP connections
+ ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require building a container, pushing it to a registry, and redeploying.
+
+ With Telepresence, these changes happen instantly. +
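+
+As a mental model for the edit you just made, this is roughly the shape of a FastAPI `/color` endpoint — a hypothetical sketch, not the actual edgey-corp-python-fastapi source:
+
+```python
+# Hypothetical sketch of a /color endpoint; not the actual
+# edgey-corp-python-fastapi source.
+import uvicorn
+from fastapi import FastAPI
+
+DEFAULT_COLOR = "orange"  # the one-word edit from step 1 above
+
+app = FastAPI()
+
+@app.get("/color")
+def get_color() -> str:
+    # FastAPI JSON-encodes the return value, so curl prints "orange".
+    return DEFAULT_COLOR
+
+if __name__ == "__main__":
+    # Passing the app as an import string lets uvicorn auto-reload on save.
+    uvicorn.run("app:app", host="0.0.0.0", port=3000, reload=True)
+```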
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.14/quick-start/qs-python.md b/docs/telepresence/2.14/quick-start/qs-python.md new file mode 100644 index 000000000..a03e2f106 --- /dev/null +++ b/docs/telepresence/2.14/quick-start/qs-python.md @@ -0,0 +1,394 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Python application](#3-install-a-sample-python-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#whats-next)
+
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
+and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+ [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+ [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster, preferably an empty test cluster. This
+document uses `kubectl` in all example commands, but OpenShift
+users should have no problem substituting in the `oc` command instead.
+
+
+ Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here.
+
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+## 1. Install the Telepresence CLI
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 script to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. 
Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+ ```
+ $ pip install flask requests && python app.py
+
+ Collecting flask
+ ...
+ Welcome to the DataServiceProcessingPythonService!
+ ...
+
+ ```
+
+ Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+ ```
+ $ curl localhost:3000/color
+
+ "blue"
+ ```
+
+
+ Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+ ```
+ $ telepresence intercept dataprocessingservice --port 3000
+
+ Using Deployment dataprocessingservice
+ intercepted
+ Intercept name: dataprocessingservice
+ State : ACTIVE
+ Workload kind : Deployment
+ Destination : 127.0.0.1:3000
+ Intercepting : all TCP connections
+ ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
   Normally, this process would require a container build, push to registry, and deploy.
   With Telepresence, these changes happen instantly.
## 7. Create a Preview URL

Create a personal intercept with a preview URL so that only traffic coming from the preview URL is intercepted. This makes it easy to share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a local browser for you to interact with, you will need to pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
   Then, when asked for the port, type `8080`; for "use TLS", type `n`; and finally confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how requests enter
   your cluster. Please Select the ingress to use.

   1/4: What's your ingress' IP address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

          [default: dataprocessingservice.default]: verylargejavaservice.default

   2/4: What's your ingress' TCP port number?

          [default: 80]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

          [default: n]:

   4/4: If required by your ingress, specify a different hostname
        (TLS-SNI, HTTP "Host" header) to be used in requests.

          [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
       Intercept name  : dataprocessingservice
       State           : ACTIVE
       Workload kind   : Deployment
       Destination     : 127.0.0.1:3000
       Intercepting    : HTTP requests that match all of:
         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
       Preview URL     : https://<random-subdomain>.preview.edgestack.me
       Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.

## What's Next?
diff --git a/docs/telepresence/2.14/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.14/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.14/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@

@import '~@src/components/Layout/vars.less';

.doc-body .telepresence-quickstart-landing {
  font-family: @InterFont;
  color: @black;
  margin: -8.4px auto 48px;
  max-width: 1050px;
  min-width: @docs-min-width;
  width: 100%;

  h1 {
    color: @blue-dark;
    font-weight: normal;
    letter-spacing: 0.25px;
    font-size: 33px;
    margin: 0 0 15px;
  }
  p {
    font-size: 0.875rem;
    line-height: 24px;
    margin: 0;
    padding: 0;
  }

  .demo-cluster-container {
    display: grid;
    margin: 40px 0;
    grid-template-columns: 1fr;
    @media screen and (max-width: 900px) {
      grid-template-columns: repeat(1, 1fr);
    }
  }
  .main-title-container {
    display: flex;
    flex-direction: column;
    align-items: center;
    p {
      text-align: center;
      font-size: 0.875rem;
    }
  }
  h2 {
    font-size: 23px;
    color: @black;
    margin: 0 0 20px 0;
    padding: 0;
    &.underlined {
      padding-bottom: 2px;
      border-bottom: 3px solid @grey-separator;
      text-align: center;
    }
    strong {
      font-weight: 800;
    }
    &.subtitle {
      margin-bottom: 10px;
      font-size: 19px;
      line-height: 28px;
    }
  }
  .learn-more,
  .get-started {
    font-size: 14px;
    font-weight: 600;
    letter-spacing: 1.25px;
    display: flex;
    align-items: center;
    text-decoration: none;
    &.inline {
      display: inline-block;
      text-decoration: underline;
      font-size: unset;
      font-weight: normal;
      &:hover {
        text-decoration: none;
      }
    }
    &.blue {
      color: @blue-5;
    }
    &.blue:hover {
      color: @blue-dark;
    }
  }

  .learn-more {
    margin-top: 20px;
    padding: 13px 0;
  }

  .box-container {
    &.border {
      border: 1.5px solid @grey-separator;
      border-radius: 5px;
      padding: 10px;
    }
    &::before {
      content: '';
      position: absolute;
      width: 14px;
      height: 14px;
      border-radius: 50%;
      top: 0;
      left: 50%;
      transform: translate(-50%, -50%);
    }
    p {
      font-size: 0.875rem;
      line-height: 24px;
      padding: 0;
    }
  }

  .telepresence-video {
    border: 2px solid @grey-separator;
    box-shadow: -6px 12px 0px fade(@black, 12%);
    border-radius: 8px;
    padding: 18px;
    h2.telepresence-video-title {
      font-weight: 400;
      font-size: 23px;
      line-height: 33px;
      color: @blue-6;
    }
  }

  .video-section {
    display: grid;
    grid-template-columns: 1fr 1fr;
    column-gap: 20px;
    @media screen and (max-width: 800px) {
      grid-template-columns: 1fr;
    }
    ul {
      font-size: 14px;
      margin: 0 10px 6px 0;
    }
    .video-container {
      position: relative;
      padding-bottom: 56.25%; // 16:9 aspect ratio
      height: 0;
      iframe {
        position: absolute;
        top: 0;
        left: 0;
        width: 100%;
        height: 100%;
      }
    }
  }
}

diff --git a/docs/telepresence/2.14/redirects.yml b/docs/telepresence/2.14/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.14/redirects.yml @@ -0,0 +1 @@

- {from: "", to: "quick-start"}

diff --git a/docs/telepresence/2.14/reference/architecture.md b/docs/telepresence/2.14/reference/architecture.md new file mode 100644 index 000000000..6d45f010d --- /dev/null +++ b/docs/telepresence/2.14/reference/architecture.md @@ -0,0 +1,101 @@

---
description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop."
---

# Telepresence Architecture
![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg)
## Telepresence CLI

The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons, authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.

## Telepresence Daemons
Telepresence has Daemons that run on a developer's workstation and act as the main point of communication to the cluster's network, handling both ordinary cluster traffic and intercepted traffic.

### User-Daemon
The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager). All requests from and to the cluster go through this Daemon.

When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.

### Root-Daemon
The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a [Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary, please refer to this blog post: [Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all relevant inbound and outbound traffic, and tracking active intercepts.

The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager deployment and, if it is missing, attempts to install it using its embedded Helm chart.

When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview URL, it forwards the request to the ingress service specified at the Preview URL creation.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list` or `kubectl describe pod <pod-name>`.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the pod usually handling requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those domains from authorized users to the appropriate Traffic Manager.
Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have accessed them, and deleting them.

## Pod-Daemon

The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.

The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept should be run on and to provide configuration similar to what would be provided when using Telepresence intercepts from the command line.

After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent) on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster. The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.

The Pod-Daemon was created as a component of Deployment Previews. It automatically creates intercepts using development images built by CI, so that changes from a pull request can be quickly visualized in a live cluster before they land; when using Deployment Previews, the Preview URL link is posted to the associated GitHub pull request.

See the [Deployment Previews quick-start](../../ci/pod-daemon) for information on how to get started with Deployment Previews, or for a reference on how the Pod-Daemon can be manually deployed to the cluster.

# Changes from Service Preview

With Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod via an annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept, as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.

diff --git a/docs/telepresence/2.14/reference/client.md b/docs/telepresence/2.14/reference/client.md new file mode 100644 index 000000000..84137db98 --- /dev/null +++ b/docs/telepresence/2.14/reference/client.md @@ -0,0 +1,31 @@

---
description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
---

# Client reference

The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.

## Commands

A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
You can append `--help` to each command below to get even more information about its usage.
| Command | Description |
|---|---|
| `connect` | Starts the local daemon and connects Telepresence to your cluster, installing the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs out of Ambassador Cloud |
| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
| `status` | Shows the current connectivity status |
| `quit` | Tells the Telepresence daemons to quit |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service; run it followed by the service name to be intercepted and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>` (use `<port>/UDP` to force UDP). This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
| `version` | Shows the version of the Telepresence CLI + Traffic Manager (if connected) |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager |
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring the license](../cluster-config#add-license-to-cluster) in an air-gapped environment |

diff --git a/docs/telepresence/2.14/reference/client/login.md b/docs/telepresence/2.14/reference/client/login.md new file mode 100644 index 000000000..fc90ea385 --- /dev/null +++ b/docs/telepresence/2.14/reference/client/login.md @@ -0,0 +1,61 @@

# Telepresence Login

```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```

## Description

Use `telepresence login` to explicitly authenticate with [Ambassador Cloud](https://www.getambassador.io/docs/cloud). Unless the [`skipLogin` option](../../config) is set, other commands will automatically invoke the `telepresence login` interactive login procedure as necessary, so it is rarely necessary to run `telepresence login` explicitly; you should only truly need it when you require a non-interactive login.

The normal interactive login procedure involves launching a web browser, a user interacting with that web browser, and finally having the web browser make callbacks to the local Telepresence process. If it is not possible to do this (perhaps you are using a headless remote box via SSH, or are using Telepresence in CI), then you may instead have Ambassador Cloud issue an API key that you pass to `telepresence login` with the `--apikey` flag.

## Telepresence

When you run `telepresence login`, the CLI installs an enhanced Telepresence binary. This enhanced free client of the [User Daemon](../../architecture) communicates with Ambassador Cloud to provide freemium features, including the ability to create intercepts from Ambassador Cloud.

## Acquiring an API key

1. Log in to Ambassador Cloud at https://app.getambassador.io/ .

2. Click on your profile icon in the upper-left: ![Screenshot with the mouse pointer over the upper-left profile icon](./login/apikey-2.png)

3. Click on the "API Keys" menu button: ![Screenshot with the mouse pointer over the "API Keys" menu button](./login/apikey-3.png)

4. Click on the "generate new key" button in the upper-right: ![Screenshot with the mouse pointer over the "generate new key" button](./login/apikey-4.png)

5. Enter a description for the key (perhaps the name of your laptop, or "CI"), and click "generate api key" to create it.

You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.

Telepresence will use that "master" API key to create narrower keys for different components of Telepresence. You will see these appear in the Ambassador Cloud web interface.
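For example, in a CI pipeline where no browser is available, the key can be supplied non-interactively. A minimal sketch (the `TELEPRESENCE_API_KEY` environment variable is just a hypothetical place your CI system might store the key):

```console
$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
```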
diff --git a/docs/telepresence/2.14/reference/client/login/apikey-2.png b/docs/telepresence/2.14/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.14/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.14/reference/client/login/apikey-3.png b/docs/telepresence/2.14/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.14/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.14/reference/client/login/apikey-4.png b/docs/telepresence/2.14/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.14/reference/client/login/apikey-4.png differ

diff --git a/docs/telepresence/2.14/reference/cluster-config.md b/docs/telepresence/2.14/reference/cluster-config.md new file mode 100644 index 000000000..23e2cd54f --- /dev/null +++ b/docs/telepresence/2.14/reference/cluster-config.md @@ -0,0 +1,389 @@

import Alert from '@material-ui/lab/Alert';
import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';

# Cluster-side configuration

For the most part, Telepresence doesn't require any special configuration in the cluster and can be used right away in any cluster (as long as the user has adequate [RBAC permissions](../rbac) and the cluster's server version is `1.19.0` or higher).

## Helm Chart configuration
Some cluster-specific configuration can be provided when installing or upgrading the Telepresence cluster installation using Helm. Once installed, the Telepresence client will configure itself from values that it receives when connecting to the Traffic Manager.

See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) for a full list of available configuration settings.

### Values
To add configuration, create a YAML file with the configuration values and then use it when executing `telepresence helm install [--upgrade] --values <values-file>`.

## Client Configuration

It is possible for the Traffic Manager to automatically push config to all connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).

### Agent Configuration

The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.

#### Application Protocol Selection
The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present. Valid values are:

| Value | Resulting action |
|---|---|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http` | Telepresence will use http |
| `http2` | Telepresence will use http2 |

When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:

| Protocol | Meaning |
|---|---|
| `http` | Plaintext HTTP/1.1 traffic |
| `http2` | Plaintext HTTP/2 traffic |
| `https` | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc` | Same as http2 |

#### Envoy Configuration

The `agent.envoy` structure contains three values:

| Setting | Meaning |
|---|---|
| `logLevel` | Log level used by the Envoy proxy. Defaults to "warning". |
| `serverPort` | Port used by the Envoy server. Default 18000. |
| `adminPort` | Port used for Envoy administration. Default 19000. |

#### Image Configuration

The `agent.image` structure contains the following values:

| Setting | Meaning |
|---|---|
| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". |
| `name` | The name of the image. Retrieved from Ambassador Cloud if not set. |
| `tag` | The tag of the image. Retrieved from Ambassador Cloud if not set. |

#### Log level

The `agent.logLevel` value controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.

#### Resources

The `agent.resources` and `agent.initResources` values will be used as the `resources` element when injecting traffic-agents and init-containers.

## TLS

In this example, other applications in the cluster expect to speak TLS to your intercepted application (perhaps you're using a service-mesh that does mTLS).

In order to use `--mechanism=http` (or any features that imply `--mechanism=http`) you need to tell Telepresence about the TLS certificates in use.

Tell Telepresence about the certificates in use by adjusting your [workload's](../intercepts/#supported-workloads) Pod template to set a couple of annotations on the intercepted Pods:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
```

- The `getambassador.io/inject-terminating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS server certificate to use for decrypting and responding to incoming requests.

  When Telepresence modifies the Service and workload port definitions to point at the Telepresence Agent sidecar's port instead of your application's actual port, the sidecar will use this certificate to terminate TLS.

- The `getambassador.io/inject-originating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS client certificate to use for communicating with your application.

  You will need to set this if your application expects incoming requests to speak TLS (for example, your code expects to handle mTLS itself instead of letting a service-mesh sidecar handle mTLS for it, or the port definition that Telepresence modified pointed at the service-mesh sidecar instead of at your application).

  If you do set this, you should set it to the same client certificate Secret that you configure the Ambassador Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace as the Pod. The Secret will be mounted into the traffic agent's container.
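As a sketch of how such a terminating Secret might be created with `kubectl` (the namespace and certificate file names are placeholders; the Secret must live in the same namespace as the Pod):

```console
$ kubectl -n your-namespace create secret tls your-terminating-secret \
    --cert=server.crt --key=server.key
```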
Telepresence understands `type: kubernetes.io/tls` Secrets and `type: istio.io/key-and-cert` Secrets, as well as `type: Opaque` Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

If your cluster is on an isolated network such that it cannot communicate with Ambassador Cloud, then some additional configuration is required to acquire a license key in order to use personal intercepts.

### Create a license

1. Go to the licenses page in Ambassador Cloud.

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your kubeconfig context is using the cluster you want to create a license for, then run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

   Cluster ID: <cluster-id>
   ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster
There are two separate ways you can add the license to your cluster: manually creating and deploying the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1. Use this command to generate a Kubernetes Secret config using the license file:

   ```
   $ telepresence license -f <license-file>

   apiVersion: v1
   data:
     hostDomain: <base64-encoded-host-domain>
     license: <base64-encoded-license>
   kind: Secret
   metadata:
     creationTimestamp: null
     name: systema-license
     namespace: ambassador
   ```

2. Save the output as a YAML file and apply it to your cluster with `kubectl`.

3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting the following into a file (for the example we'll assume it's called license-values.yaml):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     secret:
       # This tells the helm chart not to create the secret since you've created it yourself
       create: false
   ```

4. Install the helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) pulled and in a registry your cluster can pull from.

6. Have users use the `images` [config key](../config/#images) so telepresence uses the aforementioned image for their agent.

#### Helm chart manages the secret

1. Get the JWT token from the downloaded license file:

   ```
   $ cat ~/Downloads/ambassador.License_for_yourcluster
   eyJhbGnotarealtoken.butanexample
   ```

2. Create the following values file, substituting your real JWT token in for the one used in the example below. (For this example we'll assume the following is placed in a file called license-values.yaml.)

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     # This is the value from the license file you download. This value is an example and will not work.
     value: eyJhbGnotarealtoken.butanexample
     secret:
       # This tells the helm chart to create the secret
       create: true
   ```

3. Install the helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

Users will now be able to use preview intercepts with the `--preview-url=false` flag. Even with the license key, preview URLs cannot be used without enabling direct communication with Ambassador Cloud, as Ambassador Cloud is essential to their operation.
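Whichever approach you choose, you can sanity-check that the license Secret exists before relying on it (assuming the default `ambassador` namespace used in the examples above):

```console
$ kubectl -n ambassador get secret systema-license
```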
If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.

Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an air-gapped environment.

## Mutating Webhook

Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched and in sync as far as GitOps workflows (such as ArgoCD) are concerned.

The injection happens on demand the first time an attempt is made to intercept the workload.

If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled` annotation to your workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: disabled
     spec:
       containers:
```

### Service Name and Port Annotations

Telepresence will automatically find all services and all ports that will connect to a workload and make them available for an intercept, but you can explicitly define that only one service and/or port can be intercepted.

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
+        telepresence.getambassador.io/inject-service-name: my-service
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```

### Ignore Certain Volume Mounts

The annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.

```diff
 spec:
   template:
     metadata:
       annotations:
+        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
     spec:
       containers:
```

### Note on Numeric Ports

If the targetPort of your intercepted service points at a port number, then in addition to injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.

Note that this initContainer requires `NET_ADMIN` capabilities. If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).

For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: ClusterIP
  selector:
    service: your-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  labels:
    service: your-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: your-service
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
      labels:
        service: your-service
    spec:
      containers:
        - name: your-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
```

## Excluding Environment Variables

If your pod contains sensitive variables like a database password or a third-party API key, you may want to exclude those from being propagated through an intercept. Telepresence allows you to configure this through a ConfigMap that is read in order to remove the sensitive variables.

This can be done in two ways:

When installing your traffic-manager through helm, you can use the `--set` flag and pass a comma-separated list of variables:

`telepresence helm install --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

This also applies when upgrading:

`telepresence helm upgrade --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

Once this is completed, the environment variables will no longer be in the environment file created by an intercept.

The other way to do this is in your custom `values.yaml`. Customizing your traffic-manager through a values file is described [here](../../install/manager).

```yaml
intercept:
  environment:
    excluded: ['DATABASE_PASSWORD', 'API_KEY']
```

You can exclude any number of variables; they just need to match the `key` of the variable within a pod to be excluded.
\ No newline at end of file

diff --git a/docs/telepresence/2.14/reference/config.md b/docs/telepresence/2.14/reference/config.md new file mode 100644 index 000000000..0a164a0fb --- /dev/null +++ b/docs/telepresence/2.14/reference/config.md @@ -0,0 +1,374 @@

# Laptop-side configuration

There are a number of configuration values that can be tweaked to change how Telepresence behaves.
These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
One important exception is the location of the traffic manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect.

## Global Configuration

Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.

### Values

The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
Here is an example configuration to show you the conventions of how Telepresence is configured:
**Note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
client:
  timeouts:
    agentInstall: 1m
    intercept: 10s
  logLevels:
    userDaemon: debug
  images:
    registry: privateRepo # This overrides the default docker.io/datawire repo
    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
  cloud:
    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
  grpc:
    maxReceiveSize: 10Mi
  telepresenceAPI:
    port: 9980
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    lookupTimeout: 30s
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
    neverProxySubnets:
      - 1.2.3.4/32
```

#### Timeouts

Values for `client.timeouts` are all durations, either as a number of seconds or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:

| Field | Description | Type | Default |
|---|---|---|---|
| `agentInstall` | Waiting for the Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
| `clusterConnect` | Waiting for the cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |

#### Log Levels

Values for the `client.logLevels` fields are one of the following strings, case-insensitive:

- `trace`
- `debug`
- `info`
- `warning` or `warn`
- `error`

For whichever log-level you select, you will get logs labeled with that level and of higher severity (e.g. if you use `info`, you will also get logs labeled `error`, but NOT logs labeled `debug`).
These are the valid fields for the `client.logLevels` key:

| Field | Description | Type | Default |
|---|---|---|---|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |

#### Images
Values for `client.images` are strings. These values affect the objects that are deployed in the cluster, so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm) to prevent them from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook) to handle installation of the `traffic-agents`.

These are the valid fields for the `client.images` key:

| Field | Description | Type | Default |
|---|---|---|---|
| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |

#### Cloud
Values for `client.cloud` are listed below and their type varies, so please see the chart for the expected type for each config value.
These fields control how the client interacts with the Cloud service.
| Field | Description | Type | Default |
|---|---|---|---|
| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false |
| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config. | [duration][go-duration] [string][yaml-str] | 168h |
| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |

Telepresence attempts to auto-detect if the cluster is capable of communicating with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled as well, or would like Telepresence to not attempt to communicate with Ambassador Cloud at all (even for the auto-detection), then be sure to set the `skipLogin` value to `true`.

Reminder: To use personal intercepts, which normally require a login, you must have a license key in your cluster and specify which `agentImage` should be installed by also adding the following to your `config.yml`:

```yaml
images:
  agentImage: <privateRegistry>/<agentImage>
```

#### Grpc
The `maxReceiveSize` value determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server
The `client.telepresenceAPI` key controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### DNS

The `client.dns` configuration offers options for configuring the DNS resolution behavior in a client application or system. Here is a summary of the available fields:

The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
| Field | Description | Type | Default |
|---|---|---|---|
| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fall back in the case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
| `lookupTimeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |

Here is an example values.yaml:
```yaml
client:
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    localIP: 8.8.8.8
    lookupTimeout: 30s
```

##### Mappings

Allows you to map hostnames to aliases. This is useful when you want to redirect traffic from one service to another within the cluster.

In the given cluster, the service named `postgres` is located within a separate namespace titled `big-data`, and it's referred to as `psql`:

```yaml
dns:
  mappings:
    - name: postgres
      aliasFor: psql.big-data
```

##### Exclude

Lists service names to be excluded from the Telepresence DNS server. This is useful when you want your application to interact with a local service instead of a cluster service. In this example, "redis" will not be resolved by the cluster, but locally.

```yaml
dns:
  excludes:
    - redis
```

#### Routing

##### AlsoProxySubnets

When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device. All connections to addresses that the subnets span will be dispatched to the cluster.

Here is an example values.yaml for the subnet `1.2.3.4/32`:
```yaml
client:
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
```

##### NeverProxySubnets

When using `neverProxySubnets`, you provide a list of subnets. These will never be routed via the TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before telepresence connects is the route they will keep.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    neverProxySubnets:
      - 1.2.3.4/32
```

##### Using AlsoProxy together with NeverProxy

Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply. Usually this means that the most specific route will win.

So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:

```yaml
neverProxySubnets: [10.0.0.0/16]
alsoProxySubnets: [10.0.5.0/24]
```

Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:

```yaml
alsoProxySubnets: [10.0.0.0/16]
neverProxySubnets: [10.0.5.0/24]
```

Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.

## Local Overrides

In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.

### Config for all clusters
Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
The definitions of these values are identical to those values in the `client` config above.

Here is an example configuration to show you the conventions of how Telepresence is configured:
**Note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```

## Workstation Per-Cluster Configuration

Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file.
It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.

### Values

The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
Example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
        dns:
          include-suffixes: [.private]
          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
          local-ip: 8.8.8.8
          lookup-timeout: 30s
        never-proxy: [10.0.0.0/16]
        also-proxy: [10.0.5.0/24]
  name: example-cluster
```

#### Manager

This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to the Traffic Manager. When it is not the default, that setting needs to be configured in the workstation's kubeconfig for the cluster.

The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
  - cluster:
      server: https://127.0.0.1
      extensions:
        - name: telepresence.io
          extension:
            manager:
              namespace: staging
    name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45

diff --git a/docs/telepresence/2.14/reference/dns.md b/docs/telepresence/2.14/reference/dns.md new file mode 100644 index 000000000..2f263860e --- /dev/null +++ b/docs/telepresence/2.14/reference/dns.md @@ -0,0 +1,80 @@

# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<namespace>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` alone. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running; we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster-public-IP>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected, as Telepresence cannot reach the service yet by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  Emoji Vote
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  Emoji Vote
  ...
```

Now curling that service by its short name works, and will as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

### Supported Query Types

The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`, `MX`, `NS`, `PTR`, `SRV`, and `TXT`.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.

diff --git a/docs/telepresence/2.14/reference/docker-run.md b/docs/telepresence/2.14/reference/docker-run.md new file mode 100644 index 000000000..27b2f316f --- /dev/null +++ b/docs/telepresence/2.14/reference/docker-run.md @@ -0,0 +1,90 @@

---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

## Use the Intercept Specification
The recommended way to use Telepresence with Docker is to create an [Intercept Specification](../intercepts/specs) that uses docker images as intercept handlers.

## Using command flags

### The docker flag
You can start the Telepresence daemon in a Docker container on your laptop using the command:

```console
$ telepresence connect --docker
```

The `--docker` flag is a global flag; if it is passed directly, as in `telepresence intercept --docker`, then the implicit connect that takes place when no connection is active will use a container-based daemon.

### The docker-run flag

If you want your intercept to go to another Docker container, you can use the `--docker-run` flag. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

```console
$ telepresence intercept <service-name> --port <port> --docker-run -- <docker-run-arguments>
```

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

It's recommended that you always use `--docker-run` in combination with the global `--docker` flag, because that makes everything less intrusive:
- No admin user access is needed. Network modifications are confined to a Docker network.
- There's no need for special filesystem mount software like MacFUSE or WinFSP. The volume mounts happen in the Docker engine.

The following happens under the hood when both flags are in use:

- The network for the intercept handler will be set to the same as the network used by the daemon. This guarantees that the intercept handler can access the Telepresence VIF, and hence has access to the cluster.
- Volume mounts will be automatic and made using the Telemount Docker volume plugin so that all volumes exposed by the intercepted container are mounted on the intercept handler container.
- The environment of the intercepted container becomes the environment of the intercept handler container.
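Putting the two flags together, a typical containerized session might look like the following sketch (`myservice` and `myimage:dev` are placeholder names for your workload and local image):

```console
# Start a container-based daemon; no admin access or mount software needed
$ telepresence connect --docker

# Intercept 'myservice' and handle its traffic in a local container
$ telepresence intercept myservice --port 8080 --docker-run -- myimage:dev
```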

### The docker-build flag

The `--docker-build <docker-context>` flag and the repeatable `--docker-build-opt key=value` flag enable containers to be built on the fly by the intercept command.

When using `--docker-build`, the image name used in the argument list must be verbatim `IMAGE`. The word acts as a placeholder and will be replaced by the ID of the image that is built.

The `--docker-build` flag implies `--docker-run`.

## Using the docker-run flag without docker

It is possible to use `--docker-run` with a daemon running on your host, which is the default behavior of Telepresence.

However, it isn't recommended, since you'll be in a hybrid mode: while your intercept runs in a container, the daemon will modify the host network, and if remote mounts are desired, they may require extra software.

The ability to use this special combination is retained for backward compatibility reasons. It might be removed in a future release of Telepresence.

The `--port` flag has slightly different semantics and can be used in situations when the local and container port must be different. This
is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Examples

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container:

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-run -- frontend-v2
```

Now, imagine that the `frontend-v2` image is built by a `Dockerfile` that resides in the directory `images/frontend-v2`. You can build and intercept directly:

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-build images/frontend-v2 --docker-build-opt tag=mytag -- IMAGE
```

## Automatic flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<intercept name>-<container port>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-v <local mount dir>:<container mount dir>` Volume mount specification; see CLI help for the `--docker-mount` flag for more info

When used with a container-based daemon:
- `--rm` Mandatory, because the volume mounts cannot be removed until the container is removed.
- `-v <volume name>:<container mount dir>` Volume mount specifications propagated from the intercepted container

When used with a daemon that isn't container-based:
- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
- `-p <local port>:<container port>` The local port for the intercept and the container port diff --git a/docs/telepresence/2.14/reference/environment.md b/docs/telepresence/2.14/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.14/reference/environment.md @@ -0,0 +1,46 @@ +--- +description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop." +---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the code running on your laptop for the service being intercepted.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" http header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do can instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process. diff --git a/docs/telepresence/2.14/reference/inside-container.md b/docs/telepresence/2.14/reference/inside-container.md new file mode 100644 index 000000000..48a38b5a3 --- /dev/null +++ b/docs/telepresence/2.14/reference/inside-container.md @@ -0,0 +1,19 @@ +# Running Telepresence inside a container

All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a
Docker container.

Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and
it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software
like macFUSE or WinFSP to mount the remote file systems.

The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only
way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.

It's highly recommended that you use the new [Intercept Specification](../intercepts/specs) to set things up. That way, Telepresence can do
all the plumbing needed to start the intercept handler with the correct environment and volume mounts.
Otherwise, doing a fully container-based intercept manually with all the bells and whistles is a complicated process that involves:
- Capturing the details of an intercept
- Ensuring that the [Telemount](https://github.com/datawire/docker-volume-telemount#readme) Docker volume plugin is installed
- Creating volumes for all remotely exposed directories
- Starting the intercept handler container using the same network as the daemon. diff --git a/docs/telepresence/2.14/reference/intercepts/cli.md b/docs/telepresence/2.14/reference/intercepts/cli.md new file mode 100644 index 000000000..d7e482329 --- /dev/null +++ b/docs/telepresence/2.14/reference/intercepts/cli.md @@ -0,0 +1,335 @@ +import Alert from '@material-ui/lab/Alert';

# Configuring intercepts using the CLI

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the
`--namespace` option. When this option is used, and `--workload` is
not used, then the given name is interpreted as the name of the
workload, and the name of the intercept will be constructed from that
name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept
`hello-myns`. In order to remove the intercept, you will need to run
`telepresence leave hello-myns` instead of just `telepresence leave hello`.

The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is
being intercepted; see [this doc](../../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command
will intercept all traffic bound to the service and proxy it to your
laptop. This includes traffic coming through your ingress controller,
so use this option carefully so as not to disrupt production
environments.

```shell
telepresence intercept <service-name> --port=<TCP port>
```

If you *are* logged in to Ambassador Cloud, setting the
`--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <service-name> --port=<TCP port> --preview-url=false
```

This will output an HTTP header that you can set on your request for
that traffic to be intercepted:

```console
$ telepresence intercept <service-name> --port=<TCP port> --preview-url=false
Using Deployment <service-name>
intercepted
 Intercept name: <full name of intercept>
 State : ACTIVE
 Workload kind : Deployment
 Destination : 127.0.0.1:<local TCP port>
 Intercepting : HTTP requests that match all of:
 header("x-telepresence-intercept-id") ~= regexp("<intercept id>:<intercept name>")
```

Run `telepresence status` to see the list of active intercepts.

```console
$ telepresence status
Root Daemon: Running
 Version : v2.1.4 (api 3)
 Primary DNS : ""
 Fallback DNS: ""
User Daemon: Running
 Version : v2.1.4 (api 3)
 Ambassador Cloud : Logged out
 Status : Connected
 Kubernetes server : https://<cluster public IP>
 Kubernetes context: default
 Telepresence proxy: ON (networking to the cluster is enabled)
 Intercepts : 1 total
 dataprocessingnodeservice: <laptop username>@<laptop name>
```

Finally, run `telepresence leave <name of intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag             | Description                                                       | Required |
|------------------|-------------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress                                    | yes      |
| `--ingress-port` | The port for the ingress                                          | yes      |
| `--ingress-tls`  | Whether TLS should be used                                        | no       |
| `--ingress-l5`   | Whether a different IP address should be used in request headers  | no       |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you
need to tell Telepresence which service port you are trying to
intercept. To specify, you can either use the name of the service
port or the port number itself. To see which options might be
available to you and your service, use kubectl to describe your
service or look in the object's YAML. For more information on multiple
ports, see the [Kubernetes documentation][kube-multi-port-services].

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier>
Using Deployment <name of deployment>
intercepted
 Intercept name : <full name of intercept>
 State : ACTIVE
 Workload kind : Deployment
 Destination : 127.0.0.1:<local-port>
 Service Port Identifier: <service-port-identifier>
 Intercepting : all TCP connections
```

When intercepting a service that has multiple ports, the name of the
service port that has been intercepted is also listed.

If you want to change which port is being intercepted, you can create
a new intercept the same way you did above, and it will change which
service port is intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a
workload, so Telepresence is able to auto-detect which service it
should intercept based on the workload you are trying to intercept.
But if you use something like
[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
two services (that use the same labels) to manage traffic between a
canary and a stable service.

Fortunately, if you know which service you want to use when
intercepting a workload, you can use the `--service` flag.
So in the +aforementioned example, if you wanted to use the `echo-stable` service +when intercepting your workload, your command would look like this: + +```console +$ telepresence intercept echo-rollout- --port --service echo-stable +Using ReplicaSet echo-rollout- +intercepted + Intercept name : echo-rollout- + State : ACTIVE + Workload kind : ReplicaSet + Destination : 127.0.0.1:3000 + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036 + Intercepting : all TCP connections +``` + +## Intercepting multiple ports + +It is possible to intercept more than one service and/or service port that are using the same workload. You do this +by creating more than one intercept that identify the same workload using the `--workload` flag. + +Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both +targeting the same `multi-echo` deployment. + +```console +$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp +Using Deployment multi-echo +intercepted + Intercept name : multi-echo-http + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Volume Mount Point : /tmp/telfs-893700837 + Intercepting : all TCP requests + Preview URL : https://sleepy-bassi-1140.preview.edgestack.me + Layer 5 Hostname : multi-echo.default.svc.cluster.local +$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp +Using Deployment multi-echo +intercepted + Intercept name : multi-echo-grpc + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8443 + Service Port Identifier: extra + Volume Mount Point : /tmp/telfs-1277723591 + Intercepting : all TCP requests + Preview URL : https://upbeat-thompson-6613.preview.edgestack.me + Layer 5 Hostname : multi-echo.default.svc.cluster.local +``` + +## Port-forwarding an intercepted container's sidecars + +Sidecars are containers that sit in the same pod as an application +container; they usually provide auxiliary functionality to an +application, and can usually be reached at +`localhost:${SIDECAR_PORT}`. For example, a common use case for a +sidecar is to proxy requests to a database, your application would +connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then +connect to the database, perhaps augmenting the connection with TLS or +authentication. + +When intercepting a container that uses sidecars, you might want those +sidecars' ports to be available to your local application at +`localhost:${SIDECAR_PORT}`, exactly as they would be if running +in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this +behavior, adding port-forwards for the port given. + +```console +$ telepresence intercept --port=: --to-pod= +Using Deployment +intercepted + Intercept name : + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Service Port Identifier: + Intercepting : all TCP connections +``` + +If there are multiple ports that you need forwarded, simply repeat the +flag (`--to-pod= --to-pod=`). + +## Intercepting headless services + +Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services), +which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods. +Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP. 

So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
 Intercept name : my-headless
 State : ACTIVE
 Workload kind : StatefulSet
 Destination : 127.0.0.1:8080
 Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
 Intercepting : all TCP connections
```


This utilizes an initContainer that requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.


This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 in a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`


Intercepting headless services without a selector is not supported.


## Sharing intercepts with teammates

Once you have found a combination of flags that easily intercepts a service, it's useful to share it with teammates. You
can do that by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts),
picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. The intercept
command will then be easily accessible for all your teammates. Note that this requires the free enhanced
client to be installed and you to be logged in (`telepresence login`).

To instantiate an intercept based on a saved intercept, simply run
`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a
saved intercept in Ambassador Cloud and will use it if found; otherwise an error will be returned.

Saved Intercepts can be [managed through Ambassador Cloud](../../../../../cloud/latest/telepresence-saved-intercepts).

## Specifying the intercept traffic target

By default, it's assumed that your local app is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP
at the port given by `--port`. If you wish to change this behavior and send traffic to a different IP address, you can use the `--address` parameter
of `telepresence intercept`. Say your machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`.
You would run this as:

```console
$ telepresence intercept echo-easy --address 172.16.0.19 --port 8080
Using Deployment echo-easy
 Intercept name : echo-easy
 State : ACTIVE
 Workload kind : Deployment
 Destination : 172.16.0.19:8080
 Service Port Identifier: proxied
 Volume Mount Point : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
 Intercepting : HTTP requests with headers
 'x-telepresence-intercept-id: 8e0dd8ea-b55a-43bd-ad04-018b9de9cfab:echo-easy'
 Preview URL : https://laughing-curran-5375.preview.edgestack.me
 Layer 5 Hostname : echo-easy.default.svc.cluster.local
``` diff --git a/docs/telepresence/2.14/reference/intercepts/index.md b/docs/telepresence/2.14/reference/intercepts/index.md new file mode 100644 index 000000000..5b317aeec --- /dev/null +++ b/docs/telepresence/2.14/reference/intercepts/index.md @@ -0,0 +1,61 @@ +import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, the Telepresence Traffic Manager ensures
that a Traffic Agent has been injected into the intercepted workload.
The injection is triggered by a Kubernetes Mutating Webhook and will
only happen once. The Traffic Agent is responsible for redirecting
intercepted traffic to the developer's workstation.

An intercept is either global or personal.

### Global intercept
This intercept will intercept all `tcp` and/or `udp` traffic to the
intercepted service and send all of that traffic down to the developer's
workstation. This means that a global intercept will affect all users of
the intercepted service.

### Personal intercept
This intercept will intercept specific HTTP requests, allowing other HTTP
requests through to the regular service. The selection is based on HTTP
headers or paths, and allows for intercepts which only intercept traffic
tagged as belonging to a given developer.

There are two ways of configuring an intercept:
- one from the [CLI](./cli) directly
- one from an [Intercept Specification](./specs)

## Intercept behavior when using single-user versus team mode

Switching the Traffic Manager from `single-user` mode to `team` mode changes
the Telepresence defaults in two ways.

First, in team mode, Telepresence will require that the user is logged in to
Ambassador Cloud, or is using an api-key. Team mode also causes Telepresence
to default to a personal intercept using `--http-header=auto --http-path-prefix=/`.
Personal intercepts are important for working in a shared cluster with teammates,
and are important for the preview URL functionality below. See `telepresence intercept --help`
for information on using the `--http-header` and `--http-path-xxx` flags to
customize which requests are intercepted.

Secondly, team mode causes Telepresence to default to `--preview-url=true`. This
tells Telepresence to take advantage of Ambassador Cloud to create a preview URL
for this intercept, creating a shareable URL that automatically sets the
appropriate headers so that requests coming from the preview URL are
intercepted.

## Supported workloads

Kubernetes has various
[workloads](https://kubernetes.io/docs/concepts/workloads/).
Currently, Telepresence supports intercepting (installing a
traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.

While many of our examples use Deployments, they would also work on
ReplicaSets and StatefulSets.

diff --git a/docs/telepresence/2.14/reference/intercepts/manual-agent.md b/docs/telepresence/2.14/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..8c24d6dbe --- /dev/null +++ b/docs/telepresence/2.14/reference/intercepts/manual-agent.md @@ -0,0 +1,267 @@ +import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
as a very strict company security policy preventing it.


Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable.
Try the Mutating Webhook before proceeding.


## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file have
the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service-config.yaml
agentImage: docker.io/datawire/tel2:2.7.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.7.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the YAML for the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.7.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```


Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.


### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You now need to modify the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements
of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
You also need to modify `spec.template.metadata.annotations` and add the annotation
`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence/2.14/reference/intercepts/specs.md b/docs/telepresence/2.14/reference/intercepts/specs.md new file mode 100644 index 000000000..89a8200ce --- /dev/null +++ b/docs/telepresence/2.14/reference/intercepts/specs.md @@ -0,0 +1,477 @@ +# Configuring intercept using specifications + +This page references the different options available to the telepresence intercept specification. + +With telepresence, you can provide a file to define how an intercept should work. + + +## Root + +Your intercept specification is where you can create a standard, easy to use, configuration to easily run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic. + +There are many ways to configure your specification to suit your needs, the table below shows the possible options within your specifcation, +and you can see the spec's schema, with all available options and formats, [here](#ide-integration). + +| Options | Description | +|-------------------------------------------|---------------------------------------------------------------------------------------------------------| +| [name](#name) | Name of the specification. | +| [connection](#connection) | Connection properties to use when Telepresence connects to the cluster. | +| [handlers](#handlers) | Local processes to handle traffic and/or setup . | +| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete. | +| [workloads](#workloads) | Remote workloads that are intercepted, keyed by workload name. | + +### Name +The name is optional. If you don't specify the name it will use the filename of the specification file. + +```yaml +name : echo-server-spec +``` + +### Connection + +The connection option is used to define how Telepresence connects to your cluster. 

```yaml
connection:
  context: "shared-cluster"
  mappedNamespaces:
    - "my_app"
```

You can pass the most common parameters from the telepresence connect command (`telepresence connect --help`) using a camel case format.

Some of the most commonly used options include:

| Options          | Type        | Format                  | Description                                              |
|------------------|-------------|-------------------------|----------------------------------------------------------|
| context          | string      | N/A                     | The Kubernetes context to use                            |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with  |


## Handlers

A handler is code running locally.

It can receive traffic for an intercepted service, or it can set up prerequisites to run before/after the intercept itself.

When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third-party service, ...) running on your machine.
A handler can be a Docker container, or an application running natively.

The sample below creates an intercept handler, gives it the name `echo-server`, and uses a Docker container. The container will
automatically have access to the ports, environment, and mounted directories of the intercepted container.


  The ports field is important for an intercept handler running in Docker; it indicates which ports should be exposed to the host. If you want to access the handler locally (to attach a debugger to your container, for example), this field must be provided.


```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    docker:
      image: jmalloc/echo-server:latest
      ports:
        - 8080
```

If you don't want to use Docker containers, you can still configure your handlers to start via a regular script.
The snippet below shows how to create a handler called `echo-server` that sets an environment variable of `PORT=8080`
and starts the application.


```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: bin/echo-server
```

Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example,
simulate an intercepted service going down:

```yaml
handlers:
  - name: no-op
```

The table below defines the parameters that can be used within the handlers section.

| Options                | Type        | Format                   | Description                                                                     |
|------------------------|-------------|--------------------------|---------------------------------------------------------------------------------|
| name                   | string      | [a-zA-Z][a-zA-Z0-9_-]*   | Defines the name of your handler that the intercepts use to reference it        |
| environment            | map list    | N/A                      | Defines environment variables within your handler                               |
| environment[*].name    | string      | [a-zA-Z_][a-zA-Z0-9_]*   | The name of the environment variable                                            |
| environment[*].value   | string      | N/A                      | The value for the environment variable                                          |
| [script](#script)      | map         | N/A                      | Tells the handler to run as a script; mutually exclusive with docker            |
| [docker](#docker)      | map         | N/A                      | Tells the handler to run as a Docker container; mutually exclusive with script  |

### Script

The handler's script element defines the parameters:

| Options | Type   | Format                 | Description                                                                                                                    |
|---------|--------|------------------------|--------------------------------------------------------------------------------------------------------------------------------|
| run     | string | N/A                    | The script to run. Can be multi-line                                                                                            |
| shell   | string | bash\|zsh\|sh          | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable    |

### Docker
The handler's docker element defines the parameters. The `build` and `image` parameters are mutually exclusive:

| Options             | Type        | Format | Description                                                                                                                             |
|---------------------|-------------|--------|------------------------------------------------------------------------------------------------------------------------------------------|
| [build](#build)     | map         | N/A    | Defines how to build the image from source using the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command  |
| [compose](#compose) | map         | N/A    | Defines how to integrate with an existing Docker Compose file                                                                              |
| image               | string      | image  | Defines which image to be used                                                                                                             |
| ports               | int list    | N/A    | The ports which should be exposed to the host                                                                                              |
| options             | string list | N/A    | [Options](https://docs.docker.com/engine/reference/commandline/run/#options) for docker run                                               |
| command             | string      | N/A    | Optional command to run                                                                                                                    |
| args                | string list | N/A    | Optional command arguments                                                                                                                 |


#### Build

The docker build element defines the parameters:

| Options | Type        | Format | Description                                                                                  |
|---------|-------------|--------|-----------------------------------------------------------------------------------------------|
| context | string      | N/A    | Defines either a path to a directory containing a Dockerfile, or a URL to a git repository     |
| args    | string list | N/A    | Additional arguments for the docker build command.                                             |

For additional information on these parameters, please check the Docker [documentation](https://docs.docker.com/engine/reference/commandline/run).

#### Compose

The Docker Compose element defines the way to integrate with the tool of the same name.

| Options              | Type     | Format       | Description                                                                                              |
|----------------------|----------|--------------|-------------------------------------------------------------------------------------------------------------|
| context              | string   | N/A          | An optional Docker context, meaning the path to, or the directory containing, your docker compose file       |
| [services](#service) | map list |              | The services to use with the Telepresence integration                                                         |
| spec                 | map      | compose spec | Optional embedded docker compose specification.                                                               |
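
For instance, a handler that delegates to an existing compose file could be declared like this (a sketch: the `./deployments` context and the `myapp` service name are assumptions for illustration, not part of the specification itself):

```yaml
handlers:
  - name: myapp
    docker:
      compose:
        # Hypothetical directory containing a docker compose file
        context: ./deployments
        services:
          - name: myapp
            behavior: interceptHandler
```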
| + +##### Service + +The service describe how to integrate with each service from your Docker Compose file, and can be seen as an override +functionality. A service is normally not provided when you want to keep the original behavior, but can be provided for +documentation purposes using the `local` behavior. + +A service can be declared either as a property of `compose` in the Intercept Specification, or as an `x-telepresence` +extension in the Docker compose specification. The syntax is the same in both cases, but the `name` property must not be +used together with `x-telepresence` because it is implicit. + +| Options | Type | Format | Description | +|-----------------------|--------|-----------------------------------------|-----------------------------------------------------------------------------| +| name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file | +| [behavior](#behavior) | string | interceptHandler|remote|local | Behavior of the service in context of the intercept. | +| [mapping](#mapping) | map | | Optional mapping to cluster service. Only applicable for `behavior: remote` | + +###### Behavior + +| Value | Description | +|------------------|-----------------------------------------------------------------------------------------------------------------| +| interceptHandler | The service runs locally and will receive traffic from the intercepted pod. | +| remote | The service will not run as part of docker compose. Instead, traffic is redirected to a service in the cluster. | +| local | The service runs locally without modifications. This is the default. | + +###### Mapping + +| Options | Type | Description | +|-----------|---------------|----------------------------------------------------------------------------------------------------| +| name | string | The name of the cluster service to link the compose service with | +| namespace | string | The cluster namespace for service. This is optional and defaults to the namespace of the intercept | + +**Examples** + +Considering the following Docker Compose file: + +```yaml +services: + redis: + image: redis:6.2.6 + ports: + - "6379" + postgres: + image: "postgres:14.1" + ports: + - "5432" + myapp: + build: + # Directory containing the Dockerfile and source code + context: ../../myapp + ports: + - "8080" + volumes: + - .:/code + environment: + DEV_MODE: "true" +``` + +This will use the myapp service as the interceptor. +```yaml +services: + - name: myapp + behavior: interceptHandler +``` + +This will prevent the service from running locally. DNS will point the service in the cluster with the same name. +```yaml +services: + - name: postgres + behavior: remote +``` + +Adding a mapping allows to select the cluster service more accurately, here by indicating to Telepresence that +the postgres service should be mapped to the **psql** service in the **big-data** namespace. 

```yaml
services:
  - name: postgres
    behavior: remote
    mapping:
      name: psql
      namespace: big-data
```

As an alternative, the `services` can instead be added as `x-telepresence` extensions in the docker compose file:

```yaml
services:
  redis:
    image: redis:6.2.6
    ports:
      - "6379"
  postgres:
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
    image: "postgres:14.1"
    ports:
      - "5432"
  myapp:
    x-telepresence:
      behavior: interceptHandler
    build:
      # Directory containing the Dockerfile and source code
      context: ../../myapp
    ports:
      - "8080"
    volumes:
      - .:/code
    environment:
      DEV_MODE: "true"
```

## Prerequisites
When creating an intercept specification, there is an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, and cover many other use cases.

Prerequisites is an array, so it can handle many options prior to starting your intercept and running your intercept handlers.
The elements of the `prerequisites` array correspond to [`handlers`](#handlers).

The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts,
the second will be run after cleaning up the intercepts.

If a prerequisite's create handler succeeds, the corresponding delete handler is guaranteed to run even if the other steps in the spec fail.

```yaml
prerequisites:
  - create: build-binary
    delete: rm-binary
```


The table below defines the parameters available within the prerequisites section.

| Options | Description                                        |
|---------|-----------------------------------------------------|
| create  | The name of a handler to run before the intercept   |
| delete  | The name of a handler to run after the intercept    |


## Workloads

Workloads define the services in your cluster that will be intercepted.

The example below creates an intercept on a service called `echo-server` on port 8080.
It creates a personal intercept with the header `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.

```yaml
workloads:
  # You can define one or more workload(s)
  - name: echo-server
    intercepts:
      # You can define one or more intercept(s)
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server
```

This table defines the parameters available within a workload.

| Options    | Type                          | Format                  | Description                                        | Default |
|------------|-------------------------------|-------------------------|-----------------------------------------------------|---------|
| name       | string                        | [a-z][a-z0-9-]*         | Name of the workload to intercept                   | N/A     |
| namespace  | string                        | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept              | N/A     |
| intercepts | [intercept](#intercepts) list | N/A                     | The list of intercepts associated with the workload | N/A     |

### Intercepts
This table defines the parameters available for each intercept.

| Options    | Type                   | Format               | Description                                                             | Default        |
|------------|------------------------|----------------------|---------------------------------------------------------------------------|----------------|
| enabled    | boolean                | N/A                  | If set to false, disables this intercept.                                  | true           |
| headers    | [header](#header) list | N/A                  | Headers that will filter the intercept.                                    | Auto generated |
| service    | name                   | [a-z][a-z0-9-]{1,62} | Name of the service to intercept                                           | N/A            |
| localPort  | integer\|string        | 0-65535              | The local port for the service which is intercepted                        | N/A            |
| port       | integer                | 0-65535              | The port the service in the cluster is running on                          | N/A            |
| pathPrefix | string                 | N/A                  | Path prefix filter for the intercept. Defaults to "/"                      | /              |
| previewURL | boolean                | N/A                  | Determines if a preview url should be created                              | true           |
| banner     | boolean                | N/A                  | Used in the preview url option; displays a banner on the preview page      | true           |

#### Header

You can define headers to filter the requests which should end up on your machine when intercepting.

| Options | Type   | Format | Description          | Default |
|---------|--------|--------|-----------------------|---------|
| name    | string | N/A    | Name of the header    | N/A     |
| value   | string | N/A    | Value of the header   | N/A     |

## Templating
Telepresence specs also support templating of scripts, commands, arguments, environments, and intercept headers. All
[Go Builtin](https://pkg.go.dev/text/template#hdr-Functions) and [Sprig](http://masterminds.github.io/sprig/)
template functions can be used. In addition, Telepresence also adds **variables**:

```yaml
intercepts:
  - headers:
      - name: test-{{ .Telepresence.Username }}
        value: {{ .Telepresence.Username | quote }}
handlers:
  - name: my-handler
    docker:
      build:
        context: {{ env "MY_DOCKER_CONTEXT" | quote }}
```

### Telepresence template variables
| Options               | Type   | Description                               |
|-----------------------|--------|---------------------------------------------|
| Telepresence.Username | string | The name of the user running the spec      |



## Usage

### Running your specification from the CLI
After you've written your intercept specification, you will want to run it.

To start your intercept, use this command:

```bash
telepresence intercept run <spec-file>
```
This will validate and run your spec. In case you just want to validate it, you can do so by using this command:

```bash
telepresence intercept validate <spec-file>
```

### Using and sharing your specification as a CRD

If you want to share specifications across your team or your organization, you can save specifications as CRDs inside your cluster.


  The Intercept Specification CRD requires Kubernetes 1.22 or higher. If you are using an older cluster, you will
  need to install using Helm directly and use the --disable-openapi-validation flag.


1. Install the CRD object in your cluster (one-time installation):

   ```bash
   telepresence helm install --crds
   ```

1. Then you need to deploy the specification in your cluster as a CRD:

   ```yaml
   apiVersion: getambassador.io/v1alpha2
   kind: InterceptSpecification
   metadata:
     name: my-crd-spec
     namespace: my-crd-namespace
   spec:
     {intercept specification}
   ```

   So the `echo-server` example looks like this:

   ```bash
   kubectl apply -f - < # See note below
```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<random>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+ +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: [""] + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["get", "list", "watch", "delete"] + resourceNames: ["telepresence-agents"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: [""] + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: ["admissionregistration.k8s.io"] + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to install the traffic-manager: Using `telepresence connect` and installing the [helm chart](../../install/helm/). + +By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. 
+ +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate value + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +# For gather-logs command +- apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +- apiGroups: [""] + resources: ["pods"] + verbs: ["list"] +# Needed in order to maintain a list of workloads +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["namespaces", "services"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +### Traffic Manager connect permission +In addition to the cluster-wide permissions, the client will also need the following namespace scoped permissions +in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager. +```yaml +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: traffic-manager-connect +rules: + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: traffic-manager-connect +subjects: + - name: telepresence-test-developer + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: traffic-manager-connect + kind: Role +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cluster where users are constrained to a specific namespace(s). + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +For each accessible namespace +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: tp-namespace # Update value for appropriate namespace +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +rules: +- apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "watch"] +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role-binding + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount +roleRef: + kind: Role + name: telepresence-role + apiGroup: rbac.authorization.k8s.io +``` + +The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above. 
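
With the RBAC above in place, a namespace-restricted session might look like this (a sketch: `tp-namespace` matches the RBAC examples above, while `my-service` is a hypothetical workload in that namespace):

```console
$ telepresence connect --mapped-namespaces tp-namespace
$ telepresence intercept my-service --namespace tp-namespace --port 8080
```

Restricting `--mapped-namespaces` to the allowed namespace keeps the daemon from trying to watch namespaces the user cannot access.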
diff --git a/docs/telepresence/2.14/reference/restapi.md b/docs/telepresence/2.14/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.14/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The value may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active
2. The "/healthz" endpoint must respond with OK
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or to the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is always omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+* Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence/2.14/reference/routing.md b/docs/telepresence/2.14/reference/routing.md new file mode 100644 index 000000000..e974adbe1 --- /dev/null +++ b/docs/telepresence/2.14/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the [cluster DNS configuration](../config/#dns).
+
+#### Cluster side DNS lookups
+The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Each such file corresponds to a domain and contains the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and each entry in the `includeSuffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single-label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't; Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../config#alsoproxysubnets) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons for requiring a correct connection origin. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
curl some-host
```
results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules; if the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this solution to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.
+
+The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
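As an illustration of the trick described in footnote 2 (a sketch with hypothetical names; the exact error wording varies between Kubernetes versions), deliberately requesting an out-of-range `clusterIP` makes the API server reveal the service subnet in its error message:

```console
$ kubectl create service clusterip dummy --clusterip=1.1.1.1 --tcp=80
The Service "dummy" is invalid: spec.clusterIPs: Invalid value: []string{"1.1.1.1"}:
failed to allocate IP 1.1.1.1: the provided IP is not in the valid range.
The range of valid IPs is 10.43.0.0/16
```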

diff --git a/docs/telepresence/2.14/reference/tun-device.md b/docs/telepresence/2.14/reference/tun-device.md new file mode 100644 index 000000000..af7e3828c --- /dev/null +++ b/docs/telepresence/2.14/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP traffic.
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle`, but without
+any requirements for extra software, configuration, or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a Pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed, neither in the client nor in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up. diff --git a/docs/telepresence/2.14/reference/volume.md b/docs/telepresence/2.14/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.14/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally to `/tmp`, and starts a Bash subshell.
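As a quick sanity check from inside that subshell (an illustrative sketch; the exact contents depend on which volumes the Pod mounts), the default Kubernetes service account volume should be visible under the mount point:

```console
bash-3.2$ ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
```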
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or in the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service name> --port <port> --mount=true -- /bin/bash
+Using Deployment <name>
+intercepted
+    Intercept name    : <name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if no mount option is specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the paths used by the code you run locally, whether from the subshell or from the intercept command, will need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable. diff --git a/docs/telepresence/2.14/reference/vpn.md b/docs/telepresence/2.14/reference/vpn.md new file mode 100644 index 000000000..457cc873c --- /dev/null +++ b/docs/telepresence/2.14/reference/vpn.md @@ -0,0 +1,89 @@ +
+

# Telepresence and VPNs

It is often important to set up Kubernetes API server endpoints to be accessible only via a VPN.
In setups like these, users need to connect first to their VPN, and then use Telepresence to connect
to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use,
the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts.

The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed.

## VPN Configuration

Let's begin by reviewing what a VPN does and imagining a sample configuration that might
conflict with Telepresence.
Usually, a VPN client adds two kinds of routes to your machine when you connect.
The first serves to override your default route; in other words, it makes sure that packets
you send out to the public internet go through the private tunnel instead of your
ethernet or wifi adapter. We'll call this a `public VPN route`.
The second kind of route is a `private VPN route`. These are the routes that allow your
machine to access hosts inside the VPN that are not accessible to the public internet.
Generally speaking, this is a more circumscribed route that will connect your machine
only to reachable hosts on the private network, such as your Kubernetes API server.

This diagram represents what happens when you connect to a VPN, supposing that your
private network spans the CIDR range `10.0.0.0/8`.

![VPN routing](../images/vpn-routing.jpg)

## Kubernetes configuration

One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services.
This is one of the key elements of Kubernetes networking, as it allows applications on the cluster
to reach each other. When Telepresence connects you to the cluster, it will try to connect you
to the IP addresses that your cluster assigns to services and pods.
Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes
cluster will place resources in. Let's imagine your cluster is configured to place services in
`10.130.0.0/16` and pods in `10.132.0.0/16`:

![VPN Kubernetes config](../images/vpn-k8s-config.jpg)

## Telepresence conflicts

When you run `telepresence connect` to connect to a cluster, it talks to the API server
to figure out what pod and service CIDRs it needs to map on your machine. If it detects
that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an
error and inform you of the conflicting subnets:

```console
$ telepresence connect
telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1"
```

To resolve this, you'll need to carefully consider what your network layout looks like.
Telepresence is refusing to map these conflicting subnets because mapping them
could render certain hosts that are inside the VPN completely unreachable. However,
you (or your network admin) know better than anyone how hosts are spread out inside your VPN.
Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only
being spun up in one of the sub-blocks of the `/8` space.
Let's say, for example,
that you happen to know that all your hosts in the VPN are bunched up in the first
half of the space -- `10.0.0.0/9` (and that you know that any new hosts will
only be assigned IP addresses from the `/9` block). In this case, you
can configure Telepresence to override the other half of this CIDR block, which is where the
services and pods happen to be.
To do this, all you have to do is configure the `client.routing.allowConflictingSubnets` flag
in the Telepresence Helm chart. You can do this directly via `telepresence helm upgrade`:

```console
$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}"
```

You can also choose to be more specific about this, and only allow the CIDRs that you KNOW
are in use by the cluster:

```console
$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}"
```

The end result of this (assuming an allow list of `/9`) will be a configuration like this:

![VPN Telepresence](../images/vpn-with-tele.jpg)
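If you prefer to keep this setting declarative rather than passing `--set` on the command line, the same flag can be expressed in a Helm values file. This is a sketch assuming the `/9` allow list from the example above; supply it when installing or upgrading the Telepresence Helm chart:

```yaml
# values.yaml (sketch)
client:
  routing:
    allowConflictingSubnets:
      - 10.128.0.0/9
```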
diff --git a/docs/telepresence/2.14/release-notes/no-ssh.png b/docs/telepresence/2.14/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.14/release-notes/run-tp-in-docker.png b/docs/telepresence/2.14/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.2.png b/docs/telepresence/2.14/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.14/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.14/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.14/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.14/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.14/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.14/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.14/release-notes/tunnel.jpg b/docs/telepresence/2.14/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.14/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.14/releaseNotes.yml b/docs/telepresence/2.14/releaseNotes.yml new file mode 100644 index 000000000..0f3db1667 --- /dev/null +++ b/docs/telepresence/2.14/releaseNotes.yml @@ -0,0 +1,2422 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. + +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.14.4 + date: "2023-08-23" + notes: + - type: bugfix + title: Nil pointer exception when upgrading the traffic-manager. 
+        body: >-
+          Upgrading the traffic-manager using telepresence helm upgrade would sometimes
+          result in a helm error message executing "telepresence/templates/intercept-env-configmap.yaml"
+          at <.Values.intercept.environment.excluded>: nil pointer evaluating interface {}.excluded
+        docs: https://github.com/telepresenceio/telepresence/issues/3313
+  - version: 2.14.2
+    date: "2023-07-26"
+    notes:
+      - type: feature
+        title: Incorporation of the latest version of Telepresence.
+        body: >-
+          A new version of Telepresence OSS was published.
+
+  - version: 2.14.1
+    date: "2023-06-12"
+    notes:
+      - type: feature
+        title: More flexible templating in the Intercept Specification.
+        body: >-
+          The Sprig template functions can now be used
+          in many unconstrained fields of an Intercept Specification, such as environments, arguments, scripts,
+          commands, and intercept headers.
+        docs: https://www.getambassador.io/docs/telepresence/latest/reference/intercepts/specs#templating
+      - type: bugfix
+        title: User daemon would panic during connect
+        body: >-
+          An attempt to connect on a host where no login has ever been made could cause the user daemon to panic.
+      - type: feature
+        title: Envoy's HTTP idle timeout is now configurable.
+        body: >-
+          A new agent.helm.httpIdleTimeout setting was added to the Helm chart that controls
+          the proprietary Traffic agent's HTTP idle timeout. The default of one hour, which in some situations
+          would cause a lot of resource-consuming, lingering connections, was changed to 70 seconds.
+      - type: feature
+        title: Add more gauges to the Traffic manager's Prometheus client.
+        body: >-
+          Several gauges were added to the Prometheus client to make it easier to monitor
+          what the Traffic manager spends resources on.
+      - type: feature
+        title: Agent Pull Policy
+        body: >-
+          Add an option to set the traffic agent pull policy in the Helm chart.
+      - type: bugfix
+        title: Resource leak in the Traffic manager.
+        body: >-
+          Fixes a resource leak in the Traffic manager caused by lingering tunnels between the clients and
+          Traffic agents. The tunnels are now closed correctly when terminated from the side that created them.
+      - type: bugfix
+        title: Fixed problem setting traffic manager namespace using the kubeconfig extension.
+        body: >-
+          Fixes a regression introduced in version 2.10.5, making it impossible to set the traffic-manager namespace
+          using the telepresence.io kubeconfig extension.
+        docs: https://www.getambassador.io/docs/telepresence/latest/reference/config#manager
+  - version: 2.14.0
+    date: "2023-06-12"
+    notes:
+      - type: feature
+        title: Telepresence with Docker Compose
+        body: >-
+          Telepresence is now integrated with Docker Compose. You can now use a compose file as an Intercept Handler in your Intercept Specifications to utilize your local dev stack alongside an Intercept.
+        docs: reference/with-compose
+      - type: feature
+        title: Added the ability to exclude environment variables
+        body: >-
+          You can now configure your traffic-manager to exclude certain environment variables from being propagated to your local environment while doing an intercept.
+        docs: reference/cluster-config#excluding-envrionment-variables
+      - type: change
+        title: Routing conflict reporting.
+        body: >-
+          Telepresence will now attempt to detect and report routing conflicts with other running VPN software on client machines.
+          There is a new configuration flag that can be tweaked to allow certain CIDRs to be overridden by Telepresence.
+        docs: reference/vpn
+      - type: change
+        title: Migration of Pod Daemon to the proprietary version of Telepresence
+        body: >-
+          Pod Daemon has been successfully integrated with the most recent proprietary version of Telepresence. This development allows users to leverage the datawire/telepresence image for their deployment previews. This enhancement streamlines the process, improving the efficiency and effectiveness of deployment preview scenarios.
+        docs: ci/pod-daemon
+
+  - version: 2.13.3
+    date: "2023-05-25"
+    notes:
+      - type: feature
+        title: Add imagePullSecrets to hooks
+        body: >-
+          Add .Values.hooks.curl.imagePullSecrets and .Values.hooks curl.imagePullSecrets to Helm values.
+        docs: https://github.com/telepresenceio/telepresence/pull/3079
+
+      - type: change
+        title: Change reinvocation policy to IfNeeded for the mutating webhook
+        body: >-
+          The default setting of the reinvocationPolicy for the mutating webhook dealing with agent injections changed from Never to IfNeeded.
+
+      - type: bugfix
+        title: Fix mounting failure of IAM roles for service accounts web identity token
+        body: >-
+          The eks.amazonaws.com/serviceaccount volume injected by EKS is now exported and remotely mounted during an intercept.
+        docs: https://github.com/telepresenceio/telepresence/issues/3166
+
+      - type: bugfix
+        title: Correct namespace selector for cluster versions with non-numeric characters
+        body: >-
+          The mutating webhook now correctly applies the namespace selector even if the cluster version contains non-numeric characters. For example, it can now handle versions such as Major:"1", Minor:"22+".
+        docs: https://github.com/telepresenceio/telepresence/pull/3184
+
+      - type: bugfix
+        title: Enable IPv6 on the telepresence docker network
+        body: >-
+          The "telepresence" Docker network will now propagate DNS AAAA queries to the Telepresence DNS resolver when it runs in a Docker container.
+        docs: https://github.com/telepresenceio/telepresence/issues/3179
+
+      - type: bugfix
+        title: Fix the crash when intercepting with --local-only and --docker-run
+        body: >-
+          Running telepresence intercept --local-only --docker-run no longer results in a panic.
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Fix incorrect error message with local-only mounts
+        body: >-
+          Running telepresence intercept --local-only --mount false no longer results in an incorrect error message saying "a local-only intercept cannot have mounts".
+        docs: https://github.com/telepresenceio/telepresence/issues/3171
+
+      - type: bugfix
+        title: Specify port in hook URLs
+        body: >-
+          The Helm chart now correctly handles a custom agentInjector.webhook.port, which was previously not being set in hook URLs.
+        docs: https://github.com/telepresenceio/telepresence/pull/3161
+
+      - type: bugfix
+        title: Fix wrong default value for disableGlobal and agentArrival
+        body: >-
+          Params .intercept.disableGlobal and .timeouts.agentArrival are now correctly honored.
+
+  - version: 2.13.2
+    date: "2023-05-12"
+    notes:
+      - type: bugfix
+        title: Authenticator Service Update
+        body: >-
+          Replaced / characters with a - when the authenticator service creates the kubeconfig in the Telepresence cache.
+        docs: https://github.com/telepresenceio/telepresence/pull/3167
+
+      - type: bugfix
+        title: Enhanced DNS Search Path Configuration for Windows (Auto, PowerShell, and Registry Options)
+        body: >-
+          Configurable strategy (auto, powershell, or registry) to set the global DNS search path on Windows.
+          The default is auto, which means try powershell first and, if it fails, fall back to registry.
+        docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+      - type: feature
+        title: Configurable Traffic Manager Timeout in values.yaml
+        body: >-
+          The timeout for the traffic manager to wait for the traffic agent to arrive can now be configured in the values.yaml file using timeouts.agentArrival. The default timeout is still 30 seconds.
+        docs: https://github.com/telepresenceio/telepresence/pull/3148
+
+      - type: bugfix
+        title: Enhanced Local Cluster Discovery for macOS and Windows
+        body: >-
+          The automatic discovery of a local container based cluster (minikube or kind) used when the Telepresence daemon runs in a container now works on macOS and Windows, and with different profiles, ports, and cluster names.
+        docs: https://github.com/telepresenceio/telepresence/pull/3165
+
+      - type: bugfix
+        title: FTP Stability Improvements
+        body: >-
+          Multiple simultaneous intercepts can transfer large files bidirectionally and in parallel.
+        docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+      - type: bugfix
+        title: Intercepted Persistent Volume Pods No Longer Cause Timeouts
+        body: >-
+          Pods using persistent volumes no longer cause timeouts when intercepted.
+        docs: https://github.com/telepresenceio/telepresence/pull/3157
+
+      - type: bugfix
+        title: Successful 'Telepresence Connect' Regardless of DNS Configuration
+        body: >-
+          Ensure that `telepresence connect` succeeds even though DNS isn't configured correctly.
+        docs: https://github.com/telepresenceio/telepresence/pull/3154
+
+      - type: bugfix
+        title: Traffic-Manager's 'Close of Closed Channel' Panic Issue
+        body: >-
+          The traffic-manager would sometimes panic with a "close of closed channel" message and exit.
+        docs: https://github.com/telepresenceio/telepresence/pull/3160
+
+      - type: bugfix
+        title: Traffic-Manager's Type Cast Panic Issue
+        body: >-
+          The traffic-manager would sometimes panic and exit after some time due to a type cast panic.
+        docs: https://github.com/telepresenceio/telepresence/pull/3153
+
+      - type: bugfix
+        title: Login Friction
+        body: >-
+          Improve login behavior by clearing the saved intermediary API Keys when a user logs in, forcing Telepresence to generate new ones.
+
+  - version: 2.13.1
+    date: "2023-04-20"
+    notes:
+      - type: change
+        title: Update ambassador-telepresence-agent to version 1.13.13
+        body: >-
+          Fixes a malfunction of the Ambassador Telepresence Agent that occurred as a result of an update which compressed the executable file.
+
+  - version: 2.13.0
+    date: "2023-04-18"
+    notes:
+      - type: feature
+        title: Better kind / minikube network integration with docker
+        body: >-
+          The Docker network used by a Kind or Minikube (using the "docker" driver) installation is automatically detected and connected to a Docker container running the Telepresence daemon.
+        docs: https://github.com/telepresenceio/telepresence/pull/3104
+
+      - type: feature
+        title: New mapped namespace output
+        body: >-
+          Mapped namespaces are included in the output of the telepresence status command.
+
+      - type: feature
+        title: Setting of the target IP of the intercept
+        docs: reference/intercepts/cli
+        body: >-
+          There's a new --address flag to the intercept command allowing users to set the target IP of the intercept.
+
+      - type: feature
+        title: Multi-tenant support
+        body: >-
+          The client will no longer need cluster-wide permissions when connected to a namespace-scoped Traffic Manager.
+
+      - type: bugfix
+        title: Cluster domain resolution bugfix
+        body: >-
+          The Traffic Manager now uses a fail-proof way to determine the cluster domain.
+        docs: https://github.com/telepresenceio/telepresence/issues/3114
+
+      - type: bugfix
+        title: Windows DNS
+        body: >-
+          DNS on Windows is more reliable and performant.
+        docs: https://github.com/telepresenceio/telepresence/issues/2939
+
+      - type: bugfix
+        title: Agent injection with huge amount of deployments
+        body: >-
+          The agent is now correctly injected even with a high number of deployments starting at the same time.
+        docs: https://github.com/telepresenceio/telepresence/issues/3025
+
+      - type: bugfix
+        title: Self-contained kubeconfig with Docker
+        body: >-
+          The kubeconfig is made self-contained before running the Telepresence daemon in a Docker container.
+        docs: https://github.com/telepresenceio/telepresence/issues/3099
+
+      - type: bugfix
+        title: Version command error
+        body: >-
+          The version command won't throw an error anymore if there is no kubeconfig file defined.
+        docs: https://github.com/telepresenceio/telepresence/issues/3095
+
+      - type: change
+        title: Intercept Spec CRD v1alpha1 deprecated
+        body: >-
+          Please use version v1alpha2 of the Intercept Spec CRD.
+
+  - version: 2.12.2
+    date: "2023-04-04"
+    notes:
+      - type: security
+        title: Update Golang build version to 1.20.3
+        body: >-
+          Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538
+  - version: 2.12.1
+    date: "2023-03-22"
+    notes:
+      - type: feature
+        title: Additions to gather-logs
+        body: >-
+          Telepresence now includes the kubeauth logs when running
+          the gather-logs command.
+      - type: bugfix
+        title: Airgapped Clusters can once again create personal intercepts
+        body: >-
+          Telepresence on airgapped clusters regained the ability to use the
+          skipLogin config option to bypass login and create personal intercepts.
+      - type: bugfix
+        title: Environment Variables are now propagated to kubeauth
+        body: >-
+          Telepresence now propagates environment variables properly
+          to the kubeauth-foreground to be used with cluster authentication.
+  - version: 2.12.0
+    date: "2023-03-20"
+    notes:
+      - type: feature
+        title: Intercept spec can build images from source
+        body: >-
+          Handlers in the Intercept Specification can now specify a build property instead of an image so that
+          the image is built when the spec runs.
+        docs: reference/intercepts/specs#build
+      - type: feature
+        title: Improve volume mount experience for Windows and Mac users
+        body: >-
+          On macOS and Windows platforms, the installation of sshfs or platform-specific FUSE implementations such as macFUSE or WinFSP is
+          no longer needed when running an Intercept Specification that uses docker images.
+        docs: reference/intercepts/specs
+      - type: feature
+        title: Check for service connectivity independently from pod connectivity
+        body: >-
+          Telepresence now enables you to check for a service and pod's connectivity independently, so that it can proxy one without proxying the other.
+        docs: https://github.com/telepresenceio/telepresence/issues/2911
+      - type: bugfix
+        title: Fix cluster authentication when running the telepresence daemon in a docker container.
+        body: >-
+          Authentication to EKS and GKE clusters has been fixed (k8s >= v1.26)
+        docs: https://github.com/telepresenceio/telepresence/pull/3055
+      - type: bugfix
+        title: The Intercept spec image pattern now allows nested and sha256 images.
+        body: >-
+          Telepresence Intercept Specifications now handle passing nested images or the sha256 of an image
+        docs: https://github.com/telepresenceio/telepresence/issues/3064
+      - type: bugfix
+        body: >-
+          Telepresence will no longer panic when a CNAME does not contain the .svc in it
+        title: Fix panic when CNAME of kubernetes.default doesn't contain .svc
+        docs: https://github.com/telepresenceio/telepresence/issues/3015
+  - version: 2.11.1
+    date: "2023-02-27"
+    notes:
+      - type: bugfix
+        title: Multiple architectures
+        docs: https://github.com/telepresenceio/telepresence/issues/3043
+        body: >-
+          The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now
+          works for both amd64 and arm64.
+      - type: bugfix
+        title: Ambassador agent Helm chart duplicates
+        docs: https://github.com/telepresenceio/telepresence/issues/3046
+        body: >-
+          Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD.
+  - version: 2.11.0
+    date: "2023-02-22"
+    notes:
+      - type: feature
+        title: Intercept specification
+        body: >-
+          It is now possible to leverage the intercept specification to spin up your environment without extra tools.
+      - type: feature
+        title: Support for arm64 (Apple Silicon)
+        body: >-
+          The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as
+          multi-architecture images and can run natively on both linux/amd64 and linux/arm64.
+      - type: bugfix
+        title: Connectivity check can break routing in VPN setups
+        docs: https://github.com/telepresenceio/telepresence/issues/3006
+        body: >-
+          The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently,
+          it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity.
+      - type: bugfix
+        title: VPN routes not detected by telepresence test-vpn on macOS
+        docs: https://github.com/telepresenceio/telepresence/pull/3038
+        body: >-
+          The telepresence test-vpn did not include routes of type link when checking for subnet
+          conflicts.
+  - version: 2.10.5
+    date: "2023-02-06"
+    notes:
+      - type: change
+        title: mTLS secrets mount
+        body: >-
+          mTLS Secrets will now be mounted into the traffic agent, instead of being expected to be read by it from the API.
+          This is only applicable to users of team mode and the proprietary agent.
+        docs: reference/cluster-config#tls
+      - type: bugfix
+        title: Daemon reconnection fix
+        body: >-
+          Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost.
+  - version: 2.10.4
+    date: "2023-01-20"
+    notes:
+      - type: bugfix
+        title: Backward compatibility restored
+        body: >-
+          Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older.
+      - type: bugfix
+        title: Saved intercepts now work with preview URLs.
+        body: >-
+          Preview URLs are now included/excluded correctly when using saved intercepts.
+  - version: 2.10.3
+    date: "2023-01-17"
+    notes:
+      - type: bugfix
+        title: Saved intercepts
+        body: >-
+          Fixed an issue that caused saved intercepts to not be completely interpreted by telepresence.
+      - type: bugfix
+        title: Traffic manager restart during upgrade to team mode
+        body: >-
+          Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode.
+        docs: https://github.com/telepresenceio/telepresence/pull/2979
+  - version: 2.10.2
+    date: "2023-01-16"
+    notes:
+      - type: bugfix
+        title: Version consistency in Helm commands
+        body: >-
+          Ensure that CLI and user-daemon binaries are the same version when running
+          telepresence helm install
+          or telepresence helm upgrade.
+        docs: https://github.com/telepresenceio/telepresence/pull/2975
+      - type: bugfix
+        title: Release Process
+        body: >-
+          Fixed an issue that prevented the --use-saved-intercept flag from working.
+  - version: 2.10.1
+    date: "2023-01-11"
+    notes:
+      - type: bugfix
+        title: Release Process
+        body: >-
+          Fixed a regex in our release process that prevented 2.10.0 promotion.
+  - version: 2.10.0
+    date: "2023-01-11"
+    notes:
+      - type: feature
+        title: Team Mode and Single User Mode
+        body: >-
+          The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts.
+      - type: feature
+        title: Added `install` and `upgrade` Subcommands to `telepresence helm`
+        body: >-
+          The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags.
+      - type: feature
+        title: Added Image Pull Secrets to Helm Chart
+        body: >-
+          Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`.
+      - type: change
+        title: Rename Configmap
+        body: >-
+          The configmap `traffic-manager-clients` has been renamed to `traffic-manager`.
+      - type: change
+        title: Webhook Namespace Field
+        body: >-
+          If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`.
+        docs: https://github.com/telepresenceio/telepresence/issues/2913
+      - type: change
+        title: Rename Webhook
+        body: >-
+          The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster.
+      - type: change
+        title: OSS Binaries
+        body: >-
+          The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client.
+        docs: https://github.com/telepresenceio/telepresence/pull/2943
+      - type: bugfix
+        title: Fix Panic Using `--docker-run`
+        body: >-
+          Telepresence no longer panics when `--docker-run` is combined with `--name <name>` instead of `--name=<name>`.
+        docs: https://github.com/telepresenceio/telepresence/issues/2953
+      - type: bugfix
+        title: Stop assuming cluster domain
+        body: >-
+          Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc".
+        docs: https://github.com/telepresenceio/telepresence/pull/2959
+      - type: bugfix
+        title: Uninstall hook timeout
+        body: >-
+          A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager.
+        docs: https://github.com/telepresenceio/telepresence/pull/2937
+      - type: bugfix
+        title: Uninstall hook check
+        body: >-
+          The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
+        docs: https://github.com/telepresenceio/telepresence/issues/2954
+  - version: 2.9.5
+    date: "2022-12-08"
+    notes:
+      - type: security
+        title: Update to golang v1.19.4
+        body: >-
+          Apply security updates by updating to golang v1.19.4
+        docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU
+      - type: bugfix
+        title: GCE authentication
+        body: >-
+          Fixed a regression, introduced in 2.9.3, that prevented the use of GCE authentication without also having a config element present in the GCE configuration in the kubeconfig.
+  - version: 2.9.4
+    date: "2022-12-02"
+    notes:
+      - type: feature
+        title: Subnet detection strategy
+        body: >-
+          The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs.
+      - type: bugfix
+        title: Fix `--set` flag for `telepresence helm install`
+        body: >-
+          The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag.
+      - type: bugfix
+        title: Fix `agent.image` setting propagation
+        body: >-
+          Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file.
+      - type: bugfix
+        title: Delay file sharing until needed
+        body: >-
+          Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected.
+      - type: bugfix
+        title: Cleanup on `telepresence quit`
+        body: >-
+          The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called.
+      - type: bugfix
+        title: Watch `config.yml` without panic
+        body: >-
+          The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active.
+      - type: bugfix
+        title: Thread safety
+        body: >-
+          Fix race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession.
+  - version: 2.9.3
+    date: "2022-11-23"
+    notes:
+      - type: feature
+        title: Helm options for `livenessProbe` and `readinessProbe`
+        body: >-
+          The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond.
+      - type: change
+        title: Improved network communication
+        body: >-
+          The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon.
+      - type: bugfix
+        title: Root daemon debug logging
+        body: >-
+          Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon.
+      - type: bugfix
+        title: Multivalue flag value propagation
+        body: >-
+          Multi-valued kubernetes flags such as `--as-group` are now propagated correctly.
+      - type: bugfix
+        title: Root daemon stability
+        body: >-
+          The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession.
+      - type: bugfix
+        title: Base DNS resolver
+        body: >-
+          Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied.
+  - version: 2.9.2
+    date: "2022-11-16"
+    notes:
+      - type: bugfix
+        title: Fix panic
+        body: >-
+          Fix panic when connecting to an older traffic-manager.
+      - type: bugfix
+        title: Fix header flag
+        body: >-
+          Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
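+  # Illustrative sketch for the 2.9.3 `livenessProbe`/`readinessProbe` Helm options above.
+  # The exact value paths and fields are an assumption here; consult the chart's values.yaml.
+  #
+  #   livenessProbe:
+  #     initialDelaySeconds: 10
+  #     periodSeconds: 5
+  #   readinessProbe:
+  #     initialDelaySeconds: 10
+  #     periodSeconds: 5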
+  - version: 2.9.1
+    date: "2022-11-16"
+    notes:
+      - type: bugfix
+        title: Connect failures due to missing auth provider.
+        body: >-
+          The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed.
+  - version: 2.9.0
+    date: "2022-11-15"
+    notes:
+      - type: feature
+        title: New command to view client configuration.
+        body: >-
+          A new telepresence config view command was added to make it easy to view the current
+          client configuration.
+        docs: new-in-2.9#view-the-client-configuration
+      - type: feature
+        title: Configure Clients using the Helm chart.
+        body: >-
+          The traffic-manager can now configure all clients that connect through the client: map in
+          the values.yaml file.
+        docs: reference/cluster-config#client-configuration
+      - type: feature
+        title: The Traffic manager version is more visible.
+        body: >-
+          The command telepresence version will now include the version of the traffic manager when
+          the client is connected to a cluster.
+      - type: feature
+        title: Command output in YAML format.
+        body: >-
+          The global --output flag now accepts both yaml and json.
+        docs: new-in-2.9#yaml-output
+      - type: change
+        title: Deprecated status command flag
+        body: >-
+          The telepresence status --json flag is deprecated. Use telepresence status --output=json instead.
+      - type: bugfix
+        title: Unqualified service name resolution in docker.
+        body: >-
+          Unqualified service names now resolve correctly from the docker container when using telepresence intercept --docker-run.
+        docs: https://github.com/telepresenceio/telepresence/issues/2870
+      - type: bugfix
+        title: Output no longer mixes plaintext and json.
+        body: >-
+          Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon",
+          or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted
+          output when using the --output=json flag.
+        docs: https://github.com/telepresenceio/telepresence/issues/2854
+      - type: bugfix
+        title: No more panic when invalid port names are detected.
+        body: >-
+          A `telepresence intercept` of a service with an invalid port name no longer causes a panic.
+        docs: https://github.com/telepresenceio/telepresence/issues/2880
+      - type: bugfix
+        title: Proper errors for bad output formats.
+        body: >-
+          An attempt to use an invalid value for the global --output flag now renders a proper error message.
+      - type: bugfix
+        title: Remove lingering DNS config on macOS.
+        body: >-
+          Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are
+          now removed when a new root daemon starts.
+  - version: 2.8.5
+    date: "2022-11-02"
+    notes:
+      - type: security
+        title: CVE-2022-41716
+        body: >-
+          Updated Golang to 1.19.3 to address CVE-2022-41716.
+  - version: 2.8.4
+    date: "2022-11-02"
+    notes:
+      - type: bugfix
+        title: Release Process
+        body: >-
+          This release resulted in changes to our release process.
+  - version: 2.8.3
+    date: "2022-10-27"
+    notes:
+      - type: feature
+        title: Ability to disable global intercepts.
+        body: >-
+          Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal.
+        docs: https://github.com/telepresenceio/telepresence/issues/2140
+      - type: feature
+        title: Configurable mutating webhook port
+        body: >-
+          The port used for the mutating webhook can be configured using the Helm chart setting
+          agentInjector.webhook.port.
+        docs: install/helm
+      - type: change
+        title: Mutating webhook port defaults to 443
+        body: >-
+          The default port for the mutating webhook is now 443. It used to be 8443.
+      - type: change
+        title: Agent image configuration mandatory in air-gapped environments.
+        body: >-
+          The traffic-manager will no longer default to use the tel2 image for the traffic-agent when it is
+          unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart.
+      - type: bugfix
+        title: Can now connect to non-helm installs
+        body: >-
+          telepresence connect now works as long as the traffic manager is installed, even if
+          it wasn't installed via `helm install`.
+        docs: https://github.com/telepresenceio/telepresence/issues/2824
+      - type: bugfix
+        title: test-vpn crash fixed
+        body: >-
+          telepresence test-vpn no longer crashes when the daemons don't start properly.
+  - version: 2.8.2
+    date: "2022-10-15"
+    notes:
+      - type: bugfix
+        title: Reinstate 2.8.0
+        body: >-
+          There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated.
+  - version: 2.8.1
+    date: "2022-10-14"
+    notes:
+      - type: bugfix
+        title: Rollback 2.8.0
+        body: >-
+          Rollback 2.8.0 while we investigate an issue with Ambassador Cloud.
+  - version: 2.8.0
+    date: "2022-10-14"
+    notes:
+      - type: feature
+        title: Improved DNS resolver
+        body: >-
+          The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME,
+          MX, NS, PTR, SRV, and TXT.
+        docs: reference/dns
+      - type: feature
+        title: New `client` structure in Helm chart
+        body: >-
+          A new client struct was added to the Helm chart. It contains a connectionTTL that controls
+          how long the traffic manager will retain a client connection without seeing any sign of life from the client.
+        docs: reference/cluster-config#Client-Configuration
+      - type: feature
+        title: Include and exclude suffixes configurable using the Helm chart.
+        body: >-
+          A dns element was added to the client struct in the Helm chart. It contains an includeSuffixes and
+          an excludeSuffixes value that control which names the DNS resolver in the client will delegate to
+          the cluster.
+        docs: reference/cluster-config#DNS
+      - type: feature
+        title: Configurable traffic-manager API port
+        body: >-
+          The API port used by the traffic-manager is now configurable using the Helm chart value apiPort.
+          The default port is 8081.
+        docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence
+      - type: feature
+        title: Envoy server and admin port configuration.
+        body: >-
+          A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and
+          admin port of the Envoy proxy running in the enhanced traffic-agent can be configured.
+        docs: reference/cluster-config#Envoy-Configuration
+      - type: change
+        title: Helm chart `dnsConfig` moved to `client.routing`.
+        body: >-
+          The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets
+          and neverProxySubnets can now be found under routing in the client struct.
+        docs: reference/cluster-config#Routing
+      - type: change
+        title: Helm chart `agentInjector.agentImage` moved to `agent.image`.
+        body: >-
+          The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but
+          retained for backward compatibility.
+        docs: reference/cluster-config#Image-Configuration
+      - type: change
+        title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`.
+        body: >-
+          The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old
+          value is deprecated but retained for backward compatibility.
+        docs: reference/cluster-config#Application-Protocol-Selection
+      - type: change
+        title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed.
+        body: >-
+          The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because
+          they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to
+          dedicate an IP for its local resolver.
+      - type: change
+        title: Quit daemons with `telepresence quit -s`
+        body: >-
+          The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option `-s` which will
+          quit both the root daemon and the user daemon.
+      - type: bugfix
+        title: Environment variable interpolation in pods now works.
+        body: >-
+          Environment variable interpolation now works for all definitions that are copied from pod containers
+          into the injected traffic-agent container.
+      - type: bugfix
+        title: Early detection of namespace conflict
+        body: >-
+          An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation
+          is detected early and prohibited instead of resulting in failing DNS lookups later on.
+      - type: bugfix
+        title: Annoying log message removed
+        body: >-
+          Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason
+          is normal context cancellation.
+      - type: bugfix
+        title: Single name DNS resolution in Docker on Linux host
+        body: >-
+          Single label names now resolve correctly when using Telepresence in Docker on a Linux host.
+      - type: bugfix
+        title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`.
+        body: >-
+          The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStrategy).
+  - version: 2.7.6
+    date: "2022-09-16"
+    notes:
+      - type: feature
+        title: Helm chart resource entries for injected agents
+        body: >-
+          The resources for the traffic-agent container and the optional init container can be
+          specified in the Helm chart using the resources and initResource fields
+          of the agentInjector.agentImage.
+      - type: feature
+        title: Cluster event propagation when injection fails
+        body: >-
+          When the traffic-manager fails to inject a traffic-agent, the cause of the failure is
+          detected by reading the cluster events, and propagated to the user.
+      - type: feature
+        title: FTP-client instead of sshfs for remote mounts
+        body: >-
+          Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
+          an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x
+          and enabled by setting intercept.useFtp to true in the config.yml.
+      - type: change
+        title: Upgrade of winfsp
+        body: >-
+          Telepresence on Windows upgraded winfsp from version 1.10 to 1.11.
+      - type: bugfix
+        title: Removal of invalid warning messages
+        body: >-
+          Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
+          and /proc/self/auxv.
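+  # Illustrative Helm values sketch for the 2.8.0 `client` struct above, combining the
+  # documented connectionTTL, dns, and routing fields; all concrete values are hypothetical:
+  #
+  #   client:
+  #     connectionTTL: 24h
+  #     dns:
+  #       includeSuffixes: [.cluster.local]
+  #       excludeSuffixes: [.com, .io, .net, .org]
+  #     routing:
+  #       alsoProxySubnets:
+  #         - 10.128.0.0/16
+  #       neverProxySubnets:
+  #         - 10.0.0.0/24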
+  - version: 2.7.5
+    date: "2022-09-14"
+    notes:
+      - type: change
+        title: Rollback of release 2.7.4
+        body: >-
+          This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3.
+  - version: 2.7.4
+    date: "2022-09-14"
+    notes:
+      - type: change
+        body: >-
+          This release was broken on some platforms. Use 2.7.6 instead.
+  - version: 2.7.3
+    date: "2022-09-07"
+    notes:
+      - type: bugfix
+        title: PTY for CLI commands
+        body: >-
+          CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+          docker run -it to allocate a TTY and will also give other commands like bash read the
+          same behavior as when executed directly in a terminal.
+        docs: https://github.com/telepresenceio/telepresence/issues/2724
+      - type: bugfix
+        title: Traffic Manager useless warning silenced
+        body: >-
+          The traffic-manager will no longer log numerous warnings saying Issuing a
+          systema request without ApiKey or InstallID may result in an error.
+      - type: bugfix
+        title: Traffic Manager useless error silenced
+        body: >-
+          The traffic-manager will no longer log an error saying Unable to derive subnets
+          from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+          subnets from the pod IPs.
+  - version: 2.7.2
+    date: "2022-08-25"
+    notes:
+      - type: feature
+        title: Autocompletion scripts
+        body: >-
+          Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
+      - type: feature
+        title: Connectivity check timeout
+        body: >-
+          The timeout for the initial connectivity check that Telepresence performs
+          in order to determine if the cluster's subnets are proxied or not can now be configured
+          in the config.yml file using timeouts.connectivityCheck. The default timeout was
+          changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+        docs: reference/config#timeouts
+      - type: change
+        title: gather-traces feedback
+        body: >-
+          The command telepresence gather-traces now prints out a message on success.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: upload-traces feedback
+        body: >-
+          The command telepresence upload-traces now prints out a message on success.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: gather-traces tracing
+        body: >-
+          The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+        docs: troubleshooting#distributed-tracing
+      - type: change
+        title: CLI log level
+        body: >-
+          The cli.log log is now logged at the same level as the connector.log.
+        docs: reference/config#log-levels
+      - type: bugfix
+        title: Telepresence --help fixed
+        body: >-
+          telepresence --help now works once more even if there's no user daemon running.
+        docs: https://github.com/telepresenceio/telepresence/issues/2735
+      - type: bugfix
+        title: Stream cancellation when no process intercepts
+        body: >-
+          Streams created between the traffic-agent and the workstation are now properly closed
+          when no interceptor process has been started on the workstation. This fixes a potential problem where
+          a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+          and an unresponsive intercept.
+      - type: bugfix
+        title: List command excludes the traffic-manager
+        body: >-
+          The telepresence list command no longer includes the traffic-manager deployment.
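+  # Illustrative config.yml sketch for the 2.7.2 connectivity-check timeout above
+  # (the 500ms default comes from the note itself):
+  #
+  #   timeouts:
+  #     connectivityCheck: 500ms
+  #
+  # Shell completion from the same release, e.g.: telepresence completion zsh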
+  - version: 2.7.1
+    date: "2022-08-10"
+    notes:
+      - type: change
+        title: Reinstate telepresence uninstall
+        body: >-
+          Reinstate telepresence uninstall, with --everything deprecated.
+      - type: change
+        title: Reduce telepresence helm uninstall
+        body: >-
+          telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+      - type: bugfix
+        title: Auto-connect for telepresence intercept
+        body: >-
+          telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+  - version: 2.7.0
+    date: "2022-08-07"
+    notes:
+      - type: feature
+        title: Saved Intercepts
+        body: >-
+          Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME
+        docs: reference/intercepts#sharing-intercepts-with-teammates
+      - type: feature
+        title: Distributed Tracing
+        body: >-
+          The Telepresence components now collect OpenTelemetry traces.
+          Up to 10MB of trace data are available at any given time for collection from
+          components. telepresence gather-traces is a new command that will collect
+          all that data and place it into a gzip file, and telepresence upload-traces is
+          a new command that will push the gzipped data into an OTLP collector.
+        docs: troubleshooting#distributed-tracing
+      - type: feature
+        title: Helm install
+        body: >-
+          A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+        docs: install/manager
+      - type: feature
+        title: Ignore Volume Mounts
+        body: >-
+          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
+      - type: feature
+        title: telepresence pod-daemon
+        body: >-
+          The Docker image now contains a new program in addition to
+          the existing traffic-manager and traffic-agent: the pod-daemon. The
+          pod-daemon is a trimmed-down version of the user-daemon that is
+          designed to run as a sidecar in a Pod, enabling CI systems to create
+          preview deploys.
+      - type: feature
+        title: Prometheus support for traffic manager
+        body: >-
+          Added Prometheus support to the traffic manager.
+      - type: change
+        title: No install on telepresence connect
+        body: >-
+          The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+        docs: install/manager
+      - type: change
+        title: Helm Uninstall
+        body: >-
+          The command telepresence uninstall has been moved to telepresence helm uninstall.
+        docs: install/manager
+      - type: bugfix
+        title: readOnlyRootFileSystem mounts work
+        body: >-
+          Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`.
+        docs: https://github.com/telepresenceio/telepresence/pull/2666
+  - version: 2.6.8
+    date: "2022-06-23"
+    notes:
+      - type: feature
+        title: Specify Your DNS
+        body: >-
+          The name and namespace of the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
+      - type: feature
+        title: Specify a Fallback DNS
+        body: >-
+          Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
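+      # Illustrative Helm values sketch for the two 2.6.8 DNS notes above. The key names
+      # dnsServiceName/dnsServiceNamespace/dnsServiceIP are taken from the 2.8.0 notes
+      # (where they were later removed); the concrete values shown are hypothetical:
+      #
+      #   dnsServiceName: kube-dns
+      #   dnsServiceNamespace: kube-system
+      #   dnsServiceIP: 10.96.0.10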
+      - type: feature
+        title: Intercept UDP Ports
+        body: >-
+          It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost.
+      - type: change
+        title: Additional Helm Values
+        body: >-
+          The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+      - type: bugfix
+        title: Agent Injection Bugfix
+        body: >-
+          Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+  - version: 2.6.7
+    date: "2022-06-22"
+    notes:
+      - type: bugfix
+        title: Persistent Sessions
+        body: >-
+          The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
+      - type: bugfix
+        title: DNS Requests
+        body: >-
+          Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+      - type: bugfix
+        title: Graceful Shutdown
+        body: >-
+          The traffic-agent will properly shut down if one of its goroutines errors.
+  - version: 2.6.6
+    date: "2022-06-09"
+    notes:
+      - type: bugfix
+        title: Env Var `TELEPRESENCE_API_PORT`
+        body: >-
+          The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
+      - type: bugfix
+        title: Double Printing `--output json`
+        body: >-
+          The --output json global flag no longer outputs multiple objects.
+  - version: 2.6.5
+    date: "2022-06-03"
+    notes:
+      - type: feature
+        title: Helm Option -- `reinvocationPolicy`
+        body: >-
+          The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
+        docs: install/helm
+      - type: feature
+        title: Helm Option -- Proxy Certificate
+        body: >-
+          The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the helm chart.
+        docs: install/helm
+      - type: feature
+        title: Helm Option -- Agent Injection
+        body: >-
+          A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
+        docs: install/helm
+      - type: change
+        title: Windows Tunnel Version Upgrade
+        body: >-
+          Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+      - type: change
+        title: Helm Version Upgrade
+        body: >-
+          Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+      - type: change
+        title: Kubernetes API Version Upgrade
+        body: >-
+          Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+      - type: feature
+        title: Flag `--watch` Added to `list` Command
+        body: >-
+          Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
+      - type: change
+        title: Deprecated `images.webhookAgentImage`
+        body: >-
+          The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+      - type: bugfix
+        title: Default `reinvocationPolicy` Set to Never
+        body: >-
+          The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
+      - type: bugfix
+        title: UDP
+        body: >-
+          UDP based communication with services in the cluster now works as expected.
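+      # Usage sketch for the 2.6.5 `--watch` flag above; the namespace name is hypothetical:
+      #
+      #   telepresence list --watch --namespace dev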
+      - type: bugfix
+        title: Telepresence `--help`
+        body: >-
+          The command help will only show Kubernetes flags on the commands that support them.
+      - type: change
+        title: Error Count
+        body: >-
+          Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+  - version: 2.6.4
+    date: "2022-05-23"
+    notes:
+      - type: bugfix
+        title: Upgrade RBAC Permissions
+        body: >-
+          The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+  - version: 2.6.3
+    date: "2022-05-20"
+    notes:
+      - type: bugfix
+        title: Relative Mount Paths
+        body: >-
+          The --mount intercept flag now handles relative mount points correctly on non-windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+      - type: bugfix
+        title: Traffic Agent Config
+        body: >-
+          The traffic-agent's configuration now updates automatically when services are added, updated, or deleted.
+      - type: bugfix
+        title: Container Injection for Numeric Ports
+        body: >-
+          Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+      - type: bugfix
+        title: Matching Services
+        body: >-
+          Workloads that have several matching services pointing to the same target port are now handled correctly.
+      - type: bugfix
+        title: Unexpected Panic
+        body: >-
+          A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+      - type: bugfix
+        title: Mount Volume Cleanup
+        body: >-
+          A container start would sometimes fail because an old directory remained in a mounted temp volume.
+  - version: 2.6.2
+    date: "2022-05-17"
+    notes:
+      - type: bugfix
+        title: Argo Injection
+        body: >-
+          Workloads controlled by workloads like Argo Rollout are injected correctly.
+      - type: bugfix
+        title: Agent Port Mapping
+        body: >-
+          Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+      - type: bugfix
+        title: GRPC Max Message Size
+        body: >-
+          The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+  - version: 2.6.1
+    date: "2022-05-16"
+    notes:
+      - type: bugfix
+        title: KUBECONFIG environment variable
+        body: >-
+          Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+      - type: bugfix
+        title: Don't Panic
+        body: >-
+          Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+  - version: 2.6.0
+    date: "2022-05-13"
+    notes:
+      - type: feature
+        title: Intercept multiple containers in a pod, and multiple ports per container
+        body: >-
+          Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+        docs: new-in-2.6#intercept-multiple-containers-and-ports
+      - type: feature
+        title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+        body: >-
+          The client will no longer modify deployments, replicasets, or statefulsets in order to
+          inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+          the client now needs fewer permissions in the cluster.
+        docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+      - type: change
+        title: Automatic upgrade of Traffic Agents
+        body: >-
+          When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+          The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+        docs: new-in-2.6#no-more-workload-modifications
+      - type: change
+        title: No default image in the Helm chart
+        body: >-
+          The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+          traffic-manager will ask Ambassador Cloud for the preferred image.
+        docs: new-in-2.6#smarter-agent
+      - type: change
+        title: Upgrade to Helm version 3.8.1
+        body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+      - type: bugfix
+        title: Remote mounts will now function correctly with custom securityContext
+        body: >-
+          The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+      - type: bugfix
+        title: Improved presentation of flags in CLI help
+        body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+      - type: bugfix
+        title: Better termination of processes parented by intercept
+        body: >-
+          Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+          When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+          Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+  - version: 2.5.8
+    date: "2022-04-27"
+    notes:
+      - type: bugfix
+        title: Folder creation on `telepresence login`
+        body: >-
+          Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+  - version: 2.5.7
+    date: "2022-04-25"
+    notes:
+      - type: change
+        title: RBAC requirements
+        body: >-
+          A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+      - type: bugfix
+        title: Windows DNS
+        body: >-
+          The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+      - type: bugfix
+        title: Session TTL and Reconnect
+        body: >-
+          A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+  - version: 2.5.6
+    date: "2022-04-18"
+    notes:
+      - type: change
+        title: Fewer Watchers
+        body: >-
+          The Telepresence agents watcher will now only watch namespaces that the user has accessed since the last connect.
+      - type: bugfix
+        title: More Efficient `gather-logs`
+        body: >-
+          The gather-logs command will no longer send any logs through gRPC.
+  - version: 2.5.5
+    date: "2022-04-08"
+    notes:
+      - type: change
+        title: Traffic Manager Permissions
+        body: >-
+          The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions.
+      - type: bugfix
+        title: Linux DNS Cache
+        body: >-
+          The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+      - type: bugfix
+        title: Automatic Connect Sync
+        body: >-
+          The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+      - type: bugfix
+        title: Disconnect Reconnect Stability
+        body: >-
+          The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+      - type: bugfix
+        title: Limit Watched Namespaces
+        body: >-
+          The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+      - type: bugfix
+        title: Limit Namespaces used in `gather-logs`
+        body: >-
+          The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+  - version: 2.5.4
+    date: "2022-03-29"
+    notes:
+      - type: bugfix
+        title: Linux DNS Concurrency
+        body: >-
+          The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+      - type: bugfix
+        title: Non-Functional Flag
+        body: >-
+          The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+      - type: bugfix
+        title: Automatically Remove Failed Intercepts
+        body: >-
+          Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+      - type: bugfix
+        title: Agent UID
+        body: >-
+          The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+      - type: bugfix
+        title: Gather-Logs Output Filepath
+        body: >-
+          Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+      - type: change
+        title: Remove Unnecessary Error Advice
+        body: >-
+          An advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+      - type: bugfix
+        title: Garbage Collection
+        body: >-
+          Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
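+      # Usage sketch for the `--mapped-namespaces` flag referenced in the 2.5.5 notes above;
+      # the namespace names are hypothetical:
+      #
+      #   telepresence connect --mapped-namespaces dev,staging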
+      - type: bugfix
+        title: Limit Gathered Logs
+        body: >-
+          The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+      - type: change
+        title: In-Cluster Checks
+        body: >-
+          The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+      - type: change
+        title: Expanded Status Command
+        body: >-
+          The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+      - type: change
+        title: List Command Shows All Intercepts
+        body: >-
+          The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+  - version: 2.5.3
+    date: "2022-02-25"
+    notes:
+      - type: bugfix
+        title: TCP Connectivity
+        body: >-
+          Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+      - type: feature
+        title: Linux Binaries
+        body: >-
+          Client-side binaries for the arm64 architecture are now available for Linux.
+  - version: 2.5.2
+    date: "2022-02-23"
+    notes:
+      - type: bugfix
+        title: DNS server bugfix
+        body: >-
+          Fixed a bug where Telepresence would use the last server in resolv.conf.
+  - version: 2.5.1
+    date: "2022-02-19"
+    notes:
+      - type: bugfix
+        title: Fix GKE auth issue
+        body: >-
+          Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+  - version: 2.5.0
+    date: "2022-02-18"
+    notes:
+      - type: feature
+        title: Intercept specific endpoints
+        body: >-
+          The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+          addition to the --http-match flag to filter personal intercepts by the request URL path.
+        docs: concepts/intercepts#intercepting-a-specific-endpoint
+      - type: feature
+        title: Intercept metadata
+        body: >-
+          The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence rest
+          API endpoint /intercept-info.
+        docs: reference/restapi#intercept-info
+      - type: change
+        title: Client RBAC watch
+        body: >-
+          The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+          ClusterRole.
+        docs: reference/rbac
+      - type: change
+        title: Dropped backward compatibility with versions <=2.4.4
+        body: >-
+          Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+          functionality was removed.
+      - type: change
+        title: No global networking flags
+        body: >-
+          The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+          command. The subcommands that support networking flags are connect, current-cluster-id,
+          and genyaml.
+      - type: bugfix
+        title: Output of status command
+        body: >-
+          The also-proxy and never-proxy subnets are now displayed correctly when using the
+          telepresence status command.
+      - type: bugfix
+        title: SETENV sudo privilege no longer needed
+        body: >-
+          Telepresence no longer requires SETENV privileges when starting the root daemon.
+      - type: bugfix
+        title: Network device names containing dash
+        body: >-
+          Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+      - type: bugfix
+        title: Linux uses cluster.local as domain instead of search
+        body: >-
+          The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+          systemd-resolved.
+          Instead, it is added as a domain so that names ending with it are routed
+          to the DNS server.
+  - version: 2.4.11
+    date: "2022-02-10"
+    notes:
+      - type: change
+        title: Add additional logging to troubleshoot intermittent issues with intercepts
+        body: >-
+          We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+          with enhanced logging to help debug and fix the issue.
+  - version: 2.4.10
+    date: "2022-01-13"
+    notes:
+      - type: feature
+        title: Application Protocol Strategy
+        body: >-
+          The strategy used when selecting the application protocol for personal intercepts can now be configured using
+          the intercept.appProtocolStrategy in the config.yml file.
+        docs: reference/config/#intercept
+        image: telepresence-2.4.10-intercept-config.png
+      - type: feature
+        title: Helm value for the Application Protocol Strategy
+        body: >-
+          The strategy used when selecting the application protocol for personal intercepts in agents injected by the
+          mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+        docs: install/helm
+      - type: feature
+        title: New --http-plaintext option
+        body: >-
+          The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+          communicating with the workstation process.
+        docs: reference/intercepts/#tls
+      - type: feature
+        title: Configure the default intercept port
+        body: >-
+          The port used by default in the telepresence intercept command (8080) can now be changed by setting
+          the intercept.defaultPort in the config.yml file.
+        docs: reference/config/#intercept
+      - type: change
+        title: Telepresence CI now uses Github Actions
+        body: >-
+          Telepresence now uses Github Actions for doing unit and integration testing. It is
+          now easier for contributors to run tests on PRs since maintainers can add an
+          "ok to test" label to PRs (including from forks) to run integration tests.
+        docs: https://github.com/telepresenceio/telepresence/actions
+        image: telepresence-2.4.10-actions.png
+      - type: bugfix
+        title: Check conditions before asking questions
+        body: >-
+          Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+          made that the intercept is possible.
+        docs: reference/intercepts/
+      - type: bugfix
+        title: Fix invalid log statement
+        body: >-
+          Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+      - type: bugfix
+        title: Log errors from sshfs/sftp
+        body: >-
+          Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+          is properly logged as errors.
+      - type: bugfix
+        title: Don't use Windows path separators in workload pod template
+        body: >-
+          The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+          traffic-agent container spec when running on Windows.
+  - version: 2.4.9
+    date: "2021-12-09"
+    notes:
+      - type: bugfix
+        title: Helm upgrade nil pointer error
+        body: >-
+          A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+          telepresenceAPI value.
+        docs: install/helm#upgrading-the-traffic-manager
+  - version: 2.4.8
+    date: "2021-12-03"
+    notes:
+      - type: feature
+        title: VPN diagnostics tool
+        body: >-
+          There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+          See the VPN docs for more information on how to use it.
+        docs: reference/vpn
+        image: telepresence-2.4.8-vpn.png
+
+      - type: feature
+        title: RESTful API service
+        body: >-
+          A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+          help determine whether messages with a given set of headers should be consumed from a message queue where the
+          intercept headers are added to the messages.
+        docs: reference/restapi
+        image: telepresence-2.4.8-health-check.png
+
+      - type: change
+        title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+        body: >-
+          You could previously configure this value, but there was no reason to change it, so the value
+          was removed.
+
+      - type: bugfix
+        title: Tunneled network connections behave more like ordinary TCP connections.
+        body: >-
+          When using Telepresence with an external cloud provider for extensions, those tunneled
+          connections now behave more like TCP connections, especially when it comes to timeouts.
+          We've also added increased testing around these types of connections.
+  - version: 2.4.7
+    date: "2021-11-24"
+    notes:
+      - type: feature
+        title: Injector service-name annotation
+        body: >-
+          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+          This will help disambiguate which service to intercept when a workload is exposed by multiple services, such as can happen with Argo Rollouts.
+        docs: reference/cluster-config#service-name-annotation
+      - type: feature
+        title: Skip the Ingress Dialogue
+        body: >-
+          You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags.
+        docs: reference/intercepts#skipping-the-ingress-dialogue
+      - type: feature
+        title: Never proxy subnets
+        body: >-
+          The kubeconfig extensions now support a never-proxy argument,
+          analogous to also-proxy, that defines a set of subnets that
+          will never be proxied via telepresence.
+        docs: reference/config#neverproxy
+      - type: change
+        title: Daemon versions check
+        body: >-
+          Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+      - type: change
+        title: No explicit DNS flushes
+        body: >-
+          Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+        docs: reference/routing#dns-caching
+      - type: bugfix
+        title: Legacy flags now work with global flags
+        body: >-
+          Legacy flags such as --swap-deployment can now be used together with global flags.
+      - type: bugfix
+        title: Outbound connection closing
+        body: >-
+          Outbound connections are now properly closed when the peer closes.
+      - type: bugfix
+        title: Prevent DNS recursion
+        body: >-
+          The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client).
+        docs: reference/routing#dns-recursion
+      - type: bugfix
+        title: Prevent network recursion
+        body: >-
+          The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+          cluster runs in a docker-container on the client).
+        docs: reference/routing#connect-recursion
+      - type: bugfix
+        title: Traffic Manager deadlock fix
+        body: >-
+          The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
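+      # Illustrative kubeconfig sketch for the `never-proxy` extension above, mirroring
+      # the documented `also-proxy` layout; cluster name and subnet are hypothetical:
+      #
+      #   clusters:
+      #     - name: my-cluster
+      #       cluster:
+      #         extensions:
+      #           - name: telepresence.io
+      #             extension:
+      #               never-proxy:
+      #                 - 10.0.0.0/16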
+      - type: bugfix
+        title: webhookRegistry config propagation
+        body: >-
+          The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+        docs: reference/config#images
+      - type: bugfix
+        title: Login refreshes expired tokens
+        body: >-
+          When a user's token has expired, telepresence login
+          will prompt the user to log in again to get a new token. Previously,
+          the user had to telepresence quit and telepresence logout
+          to get a new token.
+        docs: https://github.com/telepresenceio/telepresence/issues/2062
+  - version: 2.4.6
+    date: "2021-11-02"
+    notes:
+      - type: feature
+        title: Manually injecting Traffic Agent
+        body: >-
+          Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+          Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+        docs: reference/intercepts/manual-agent
+
+      - type: feature
+        title: Telepresence CLI released for Apple silicon
+        body: >-
+          Telepresence is now built and released for Apple silicon.
+        docs: install/?os=macos
+
+      - type: change
+        title: Telepresence help text now links to telepresence.io
+        body: >-
+          We now include a link to our documentation when you run telepresence --help. This will make it easier
+          for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+        image: telepresence-2.4.6-help-text.png
+
+      - type: bugfix
+        title: Fixed bug when API server is inside CIDR range of pods/services
+        body: >-
+          If the API server for your kubernetes cluster had an IP that fell within the
+          subnet generated from pods/services in a kubernetes cluster, Telepresence would proxy traffic
+          to the API server, which would result in hanging or a failed connection. We now ensure
+          that the API server is explicitly not proxied.
+  - version: 2.4.5
+    date: "2021-10-15"
+    notes:
+      - type: feature
+        title: Get pod yaml with gather-logs command
+        body: >-
+          Adding the flag --get-pod-yaml to your request will get the
+          pod yaml manifest for all kubernetes components you are getting logs for
+          (traffic-manager and/or pods containing a
+          traffic-agent container). This flag is set to false
+          by default.
+        docs: reference/client
+        image: telepresence-2.4.5-pod-yaml.png
+
+      - type: feature
+        title: Anonymize pod name + namespace when using gather-logs command
+        body: >-
+          Adding the flag --anonymize to your command will
+          anonymize your pod names + namespaces in the output file. We replace the
+          sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+          relationships between the objects without exposing the real names of your
+          objects. This flag is set to false by default.
+        docs: reference/client
+        image: telepresence-2.4.5-logs-anonymize.png
+
+      - type: feature
+        title: Added context and defaults to ingress questions when creating a preview URL
+        body: >-
+          Previously, we referred to OSI model layers when asking these questions, but this
+          terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+        docs: howtos/preview-urls
+        image: telepresence-2.4.5-preview-url-questions.png
+
+      - type: feature
+        title: Support for intercepting headless services
+        body: >-
+          Intercepting headless services is now officially supported. You can request a
+          headless service on whatever port it exposes and get a response from the
+          intercept.
+          This leverages the same approach as intercepting numeric ports when
+          using the mutating webhook injector, and mainly requires the initContainer
+          to have NET_ADMIN capabilities.
+        docs: reference/intercepts/#intercepting-headless-services
+
+      - type: change
+        title: Use one tunnel per connection instead of multiplexing into one tunnel
+        body: >-
+          We have changed Telepresence so that it uses one tunnel per connection instead
+          of multiplexing all connections into one tunnel. This will provide substantial
+          performance improvements. Clients will still be backwards compatible with older
+          managers that only support multiplexing.
+
+      - type: bugfix
+        title: Added checks for Telepresence kubernetes compatibility
+        body: >-
+          Telepresence currently works with Kubernetes server versions 1.17.0
+          and higher. We have added logs in the connector and traffic-manager
+          to let users know when they are using Telepresence with a cluster it doesn't support.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: Traffic Agent security context is now only added when necessary
+        body: >-
+          When creating an intercept, Telepresence will now only set the traffic agent's GID
+          when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+          an issue on OpenShift clusters where the traffic agent can fail to be created due to
+          OpenShift's security policies banning arbitrary GIDs.
+
+  - version: 2.4.4
+    date: "2021-09-27"
+    notes:
+      - type: feature
+        title: Numeric ports in agent injector
+        body: >-
+          The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+        docs: reference/cluster-config/#note-on-numeric-ports
+
+      - type: feature
+        title: New subcommand to gather logs and export into zip file
+        body: >-
+          Telepresence has logs for various components (the
+          traffic-manager, traffic-agents, the root and
+          user daemons), which are integral for understanding and debugging
+          Telepresence behavior. We have added the telepresence
+          gather-logs command to make it simple to compile logs for
+          all Telepresence components and export them in a zip file that can
+          be shared with others and/or included in a github issue. For more
+          information on usage, run telepresence gather-logs --help.
+        docs: reference/client
+        image: telepresence-2.4.4-gather-logs.png
+
+      - type: feature
+        title: Pod CIDR strategy is configurable in Helm chart
+        body: >-
+          Telepresence now enables you to directly configure how to get
+          pod CIDRs when deploying Telepresence with the Helm chart.
+          The default behavior remains the same. We've also introduced
+          the ability to explicitly set what the pod CIDRs should be.
+        docs: install/helm
+
+      - type: bugfix
+        title: Compute pod CIDRs more efficiently
+        body: >-
+          When computing subnets using the pod CIDRs, the traffic-manager
+          now uses fewer CPU cycles.
+        docs: reference/routing/#subnets
+
+      - type: bugfix
+        title: Prevent busy loop in traffic-manager
+        body: >-
+          In some circumstances, the traffic-manager's CPU
+          would max out and get pinned at its limit. This required a
+          shutdown or pod restart to fix. We've added some fixes
+          to prevent the traffic-manager from getting into this state.
+
+      - type: bugfix
+        title: Added a fixed buffer size to TUN-device
+        body: >-
+          The TUN-device now has a max buffer size of 64K. This prevents the
+          buffer from growing limitlessly until it receives a PSH, which could
+          be a blocking operation when receiving lots of TCP-packets.
+        docs: reference/tun-device
+
+      - type: bugfix
+        title: Fix hanging user daemon
+        body: >-
+          When Telepresence encountered an issue connecting to the cluster or
+          the root daemon, it could hang indefinitely. It now errors out correctly
+          when it encounters that situation.
+
+      - type: bugfix
+        title: Improved proprietary agent connectivity
+        body: >-
+          To determine whether the environment cluster is air-gapped, the
+          proprietary agent attempts to connect to the cloud during startup.
+          To deal with a possible initial failure, the agent backs off
+          and retries the connection with an increasing backoff duration.
+
+      - type: bugfix
+        title: Telepresence correctly reports intercept port conflict
+        body: >-
+          When creating a second intercept targeting the same local port,
+          it now gives the user an informative error message. Additionally,
+          it tells them which intercept is currently using that port to make
+          it easier to remedy.
+
+  - version: 2.4.3
+    date: "2021-09-15"
+    notes:
+      - type: feature
+        title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+        body: >-
+          When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+          variable in the environment.
+        docs: reference/environment/#telepresence-environment-variables
+
+      - type: bugfix
+        title: Improved daemon stability
+        body: >-
+          Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+      - type: bugfix
+        title: Complete logs for Windows
+        body: >-
+          Crash stack traces and other errors were incorrectly not written to log files. This has
+          been fixed, so logs for Windows should be at parity with the ones on macOS and Linux.
+
+      - type: bugfix
+        title: Log rotation fix for Linux kernel 4.11+
+        body: >-
+          On Linux kernel 4.11 and above, the log file rotation now properly reads the
+          birth-time of the log file. Older kernels continue to use the old behavior
+          of using the change-time in place of the birth-time.
+
+      - type: bugfix
+        title: Improved error messaging
+        body: >-
+          When Telepresence encounters an error, it tells the user where they should look for
+          logs related to the error. We have refined this so that it only tells users to look
+          for errors in the daemon logs for issues that are logged there.
+
+      - type: bugfix
+        title: Stop resolving localhost
+        body: >-
+          When using the overriding DNS resolver, it will no longer apply search paths when
+          resolving localhost, since that should be resolved on the user's machine
+          instead of the cluster.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Variable cluster domain
+        body: >-
+          Previously, the cluster domain was hardcoded to cluster.local. While this
+          is true for many kubernetes clusters, it is not for all of them. Now this value is
+          retrieved from the traffic-manager.
+
+      - type: bugfix
+        title: Improved cleanup of traffic-agents
+        body: >-
+          Telepresence now uninstalls traffic-agents installed via mutating webhook
+          when using telepresence uninstall --everything.
+
+      - type: bugfix
+        title: More large file transfer fixes
+        body: >-
+          Downloading large files during an intercept will no longer cause timeouts and hanging
+          traffic-agents.
+
+      - type: bugfix
+        title: Setting --mount to false when intercepting works as expected
+        body: >-
+          When using --mount=false while performing an intercept, the file system
+          was still mounted. This has been remedied so the intercept behavior respects the
+          flag.
+        docs: reference/volume
+
+      - type: bugfix
+        title: Traffic-manager establishes outbound connections in parallel
+        body: >-
+          Previously, the traffic-manager established outbound connections
+          sequentially. This meant that slow (and failing) Dial calls would
+          block all outbound traffic from the workstation (for up to 30 seconds). We now
+          establish these connections in parallel so that won't occur.
+        docs: reference/routing/#outbound
+
+      - type: bugfix
+        title: Status command reports correct DNS settings
+        body: >-
+          Telepresence status now correctly reports DNS settings for all operating
+          systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+  - version: 2.4.2
+    date: "2021-09-01"
+    notes:
+      - type: feature
+        title: New subcommand to temporarily change log-level
+        body: >-
+          We have added a new telepresence loglevel subcommand that enables users
+          to temporarily change the log-level for the local daemons, the traffic-manager, and
+          the traffic-agents. While the logLevels settings from the config will
+          still be used by default, this can be helpful if you are currently experiencing an issue and
+          want to have higher fidelity logs, without doing a telepresence quit and
+          telepresence connect. You can use telepresence loglevel --help to get
+          more information on options for the command.
+        docs: reference/config
+
+      - type: change
+        title: All components have info as the default log-level
+        body: >-
+          We've now set the default for all components of Telepresence (traffic-agent,
+          traffic-manager, local daemons) to use info as the default log-level.
+
+      - type: bugfix
+        title: Updating RBAC in helm chart to fix cluster-id regression
+        body: >-
+          In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+          of the default namespace. The helm chart was not updated to give the traffic-manager
+          those permissions, which has since been fixed. This impacted users who use licensed features of
+          the Telepresence extension in an air-gapped environment.
+        docs: reference/cluster-config/#air-gapped-cluster
+
+      - type: bugfix
+        title: Timeouts for Helm actions are now respected
+        body: >-
+          The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+          indefinitely when failing to install the traffic-manager.
+        docs: reference/config#timeouts
+
+  - version: 2.4.1
+    date: "2021-08-30"
+    notes:
+      - type: feature
+        title: External cloud variables are now configurable
+        body: >-
+          We now support configuring the host and port for the cloud in your config.yml. These
+          are used when logging in to utilize features provided by an extension, and are also passed
+          along as environment variables when installing the traffic-manager. Additionally, we
+          now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+          is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+          environment variables are no longer used.
+        image: telepresence-2.4.1-systema-vars.png
+        docs: reference/config/#cloud
+
+      - type: feature
+        title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+        body: >-
+          You can now set agentInjector.certificate.regenerate when deploying Telepresence
+          with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+       docs: install/helm
+
+     - type: change
+       title: Traffic Manager installed via helm
+       body: >-
+         The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+         This change is transparent to the user.
+         A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+       docs: reference/config#timeouts
+
+     - type: change
+       title: traffic-manager gets cluster ID itself instead of via environment variable
+       body: >-
+         The traffic-manager used to get the cluster ID as an environment variable when running
+         telepresence connect, or via adding the value in the helm chart. This was
+         clunky, so now the traffic-manager gets the value itself as long as it has permissions
+         to "get" and "list" namespaces (this has been updated in the helm chart).
+       docs: install/helm
+
+     - type: bugfix
+       title: Telepresence now mounts all directories from /var/run/secrets
+       body: >-
+         In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+         We now mount *all* directories in /var/run/secrets, which, for example, includes
+         directories like eks.amazonaws.com used for IRSA tokens.
+       docs: reference/volume
+
+     - type: bugfix
+       title: Max gRPC receive size correctly propagates to all gRPC servers
+       body: >-
+         This fixes a bug where the max gRPC receive size was only propagated to some of the
+         gRPC servers, causing failures when the message size was over the default.
+       docs: reference/config/#grpc
+
+     - type: bugfix
+       title: Updated our Homebrew packaging to run manually
+       body: >-
+         We made some updates to our script that packages Telepresence for Homebrew so that it
+         can be run manually. This will enable maintainers of Telepresence to run the script manually
+         should we ever need to roll back a release and have latest point to an older version.
+       docs: install/
+
+     - type: bugfix
+       title: Telepresence uses namespace from kubeconfig context on each call
+       body: >-
+         In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+         for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+         when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+         Telepresence will now do that and use the namespace designated by the context on each call.
+
+     - type: bugfix
+       title: Idle outbound TCP connections timeout increased to 7200 seconds
+       body: >-
+         Some users were noticing that their intercepts would start failing after 60 seconds.
+         This was because the keepalive idle time for outbound TCP connections was set to 60 seconds,
+         which we have now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+     - type: bugfix
+       title: Telepresence will automatically remove a socket upon ungraceful termination
+       body: >-
+         When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+         that the process has terminated ungracefully", implying that they should remove the socket. We've
+         now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+     - type: bugfix
+       title: Fixed user daemon deadlock
+       body: >-
+         Remedied a situation where the user daemon could hang when a user was logged in.
+
+     - type: bugfix
+       title: Fixed agentImage config setting
+       body: >-
+         The config setting images.agentImage is no longer required to contain the repository, and it
+         will use the value at images.repository.
+       docs: reference/config/#images
+
+ - version: 2.4.0
+   date: "2021-08-04"
+   notes:
+     - type: feature
+       title: Windows Client Developer Preview
+       body: >-
+         There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+         All the same features supported by the macOS and Linux clients are available on Windows.
+       image: telepresence-2.4.0-windows.png
+       docs: install
+
+     - type: feature
+       title: CLI raises helpful messages from Ambassador Cloud
+       body: >-
+         Telepresence can now receive messages from Ambassador Cloud and raise
+         them to the user when they perform certain commands. This enables us
+         to send you messages that may enhance your Telepresence experience when
+         using certain commands. The frequency of messages can be configured in your
+         config.yml.
+       image: telepresence-2.4.0-cloud-messages.png
+       docs: reference/config#cloud
+
+     - type: bugfix
+       title: Improved stability of systemd-resolved-based DNS
+       body: >-
+         When initializing the systemd-resolved-based DNS, the routing domain
+         is set to improve stability in non-standard configurations. This also enables the
+         overriding resolver to do a proper takeover once the DNS service ends.
+       docs: reference/routing#linux-systemd-resolved-resolver
+
+     - type: bugfix
+       title: Fixed an edge case when intercepting a container with multiple ports
+       body: >-
+         When specifying a port of a container to intercept, if there was a container in the
+         pod without ports, it was automatically selected. This has been fixed so we'll only
+         choose the container with "no ports" if there's no container that explicitly matches
+         the port used in your intercept.
+       docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+     - type: bugfix
+       title: $(NAME) references in the agent's environment are now interpolated correctly.
+       body: >-
+         If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+         would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+     - type: bugfix
+       title: Telepresence no longer prints INFO message when there is no config.yml
+       body: >-
+         Fixed a regression that printed an INFO message to the terminal when there wasn't a
+         config.yml present. The config is optional, so this message has been
+         removed.
+       docs: reference/config
+
+     - type: bugfix
+       title: Telepresence no longer panics when using --http-match
+       body: >-
+         Fixed a bug where Telepresence would panic if the value passed to --http-match
+         didn't contain an equal sign. The correct syntax is shown in the --help
+         string and looks like --http-match=HTTP2_HEADER=REGEX.
+       docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+     - type: bugfix
+       title: Improved subnet updates
+       body: >-
+         The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+         the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+         client and the traffic-manager. This has been fixed so we only send updates when the subnets
+         themselves actually change.
+       docs: reference/routing/#subnets
+
+ - version: 2.3.7
+   date: "2021-07-23"
+   notes:
+     - type: feature
+       title: Also-proxy in telepresence status
+       body: >-
+         An also-proxy entry in the Kubernetes cluster config will
+         show up in the output of the telepresence status command.
+       docs: reference/config
+
+     - type: feature
+       title: Non-interactive telepresence login
+       body: >-
+         telepresence login now has an
+         --apikey=KEY flag that allows for
+         non-interactive logins. This is useful for headless
+         environments where launching a web-browser is impossible,
+         such as cloud shells, Docker containers, or CI.
+       image: telepresence-2.3.7-newkey.png
+       docs: reference/client/login/
+
+     - type: bugfix
+       title: Mutating webhook injector correctly hides named ports for probes.
+       body: >-
+         The mutating webhook injector has been fixed to correctly rename named
+         ports for liveness and readiness probes.
+       docs: reference/cluster-config
+
+     - type: bugfix
+       title: telepresence current-cluster-id crash fixed
+       body: >-
+         Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+         to crash.
+       docs: reference/cluster-config
+
+     - type: bugfix
+       title: Better UX around intercepts with no local process running
+       body: >-
+         Requests would hang indefinitely when initiating an intercept before you
+         had a local process running. This has been fixed and will result in an
+         Empty reply from server until you start a local process.
+       docs: reference/intercepts
+
+     - type: bugfix
+       title: API keys no longer show as "no description"
+       body: >-
+         New API keys generated internally for communication with
+         Ambassador Cloud no longer show up as "no description" in
+         the Ambassador Cloud web UI. Existing API keys generated by
+         older versions of Telepresence will still show up this way.
+       image: telepresence-2.3.7-keydesc.png
+
+     - type: bugfix
+       title: Fix corruption of user-info.json
+       body: >-
+         Fixed a race condition where logging in and logging out
+         rapidly could cause memory corruption or corruption of the
+         user-info.json cache file used when
+         authenticating with Ambassador Cloud.
+
+     - type: bugfix
+       title: Improved DNS resolver for systemd-resolved
+       body:
+         Telepresence's systemd-resolved-based DNS resolver is now more
+         stable, and if it fails to initialize, the overriding resolver
+         will no longer cause general DNS lookup failures when Telepresence
+         defaults to using it.
+       docs: reference/routing#linux-systemd-resolved-resolver
+
+     - type: bugfix
+       title: Faster telepresence list command
+       body:
+         The performance of telepresence list has been increased
+         significantly by reducing the number of calls the command makes to the cluster.
+       docs: reference/client
+
+ - version: 2.3.6
+   date: "2021-07-20"
+   notes:
+     - type: bugfix
+       title: Fix preview URLs
+       body: >-
+         Fixed a regression introduced in 2.3.5 that caused preview
+         URLs to not work.
+
+     - type: bugfix
+       title: Fix subnet discovery
+       body: >-
+         Fixed a regression introduced in 2.3.5 where the Traffic
+         Manager's RoleBinding did not correctly reference
+         the traffic-manager Role, preventing
+         subnet discovery from working correctly.
+       docs: reference/rbac/
+
+     - type: bugfix
+       title: Fix root-user configuration loading
+       body: >-
+         Fixed a regression introduced in 2.3.5 where the root daemon
+         did not correctly read the configuration file, ignoring the
+         user's configured log levels and timeouts.
+       docs: reference/config/
+
+     - type: bugfix
+       title: Fix a user daemon crash
+       body: >-
+         Fixed an issue that could cause the user daemon to crash
+         during shutdown, as during shutdown it unconditionally
+         attempted to close a channel even though the channel might
+         already be closed.
+
+ - version: 2.3.5
+   date: "2021-07-15"
+   notes:
+     - type: feature
+       title: traffic-manager in multiple namespaces
+       body: >-
+         We now support installing multiple traffic managers in the same cluster.
+         This will allow operators to install deployments of Telepresence that are
+         limited to certain namespaces.
+       image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+       docs: install/helm
+     - type: feature
+       title: No more dependence on kubectl
+       body: >-
+         Telepresence no longer depends on having an external
+         kubectl binary, which might not be present for
+         OpenShift users (who have oc instead of
+         kubectl).
+     - type: feature
+       title: Agent image now configurable
+       body: >-
+         We now support configuring which agent image and registry to use in the
+         config. This enables users whose laptop is in an air-gapped environment to
+         create personal intercepts without requiring a login. It also makes it easier
+         for those who are developing on Telepresence to specify which agent image should
+         be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+         used.
+       image: ./telepresence-2.3.5-agent-config.png
+       docs: reference/config/#images
+     - type: feature
+       title: Max gRPC receive size now configurable
+       body: >-
+         The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+       image: ./telepresence-2.3.5-grpc-max-receive-size.png
+       docs: reference/config/#grpc
+     - type: feature
+       title: CLI can be used in air-gapped environments
+       body: >-
+         While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+         we've added an option users can add to their config.yml to ensure the CLI acts like it
+         is in an air-gapped environment. Air-gapped environments require a manually installed
+         license.
+       docs: reference/cluster-config/#air-gapped-cluster
+       image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+   date: "2021-07-09"
+   notes:
+     - type: bugfix
+       title: Improved IP log statements
+       body: >-
+         Some log statements were printing incorrect characters when they should have been IP addresses.
+         This has been resolved to include more accurate and useful logging.
+       docs: reference/config/#log-levels
+       image: ./telepresence-2.3.4-ip-error.png
+     - type: bugfix
+       title: Improved messaging when multiple services match a workload
+       body: >-
+         If multiple services matched a workload when performing an intercept, Telepresence would crash.
+         It now gives the correct error message, instructing the user on how to specify which
+         service the intercept should use.
+       image: ./telepresence-2.3.4-improved-error.png
+       docs: reference/intercepts
+     - type: bugfix
+       title: Traffic-manager creates services in its own namespace to determine subnet
+       body: >-
+         Telepresence will now determine the service subnet by creating a dummy service in its own
+         namespace, instead of the default namespace, which was causing RBAC permissions issues in
+         some clusters.
+       docs: reference/routing/#subnets
+     - type: bugfix
+       title: Telepresence connect respects pre-existing clusterrole
+       body: >-
+         When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+         cluster, Telepresence will no longer try to update the clusterrole.
+       docs: reference/rbac
+     - type: bugfix
+       title: Helm Chart fixed for clientRbac.namespaced
+       body: >-
+         The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+       docs: install/helm
+ - version: 2.3.3
+   date: "2021-07-07"
+   notes:
+     - type: feature
+       title: Traffic Manager Helm Chart
+       body: >-
+         Telepresence now supports installing the Traffic Manager via Helm.
+         This will make it easy for operators to install and configure the
+         server-side components of Telepresence separately from the CLI (which
+         in turn allows for better separation of permissions).
+       image: ./telepresence-2.3.3-helm.png
+       docs: install/helm/
+     - type: feature
+       title: Traffic-manager in custom namespace
+       body: >-
+         As the traffic-manager can now be installed in any
+         namespace via Helm, Telepresence can now be configured to look for the
+         Traffic Manager in a namespace other than ambassador.
+         This can be configured on a per-cluster basis.
+       image: ./telepresence-2.3.3-namespace-config.png
+       docs: reference/config
+     - type: feature
+       title: Intercept --to-pod
+       body: >-
+         telepresence intercept now supports a
+         --to-pod flag that can be used to port-forward sidecars'
+         ports from an intercepted pod.
+       image: ./telepresence-2.3.3-to-pod.png
+       docs: reference/intercepts
+     - type: change
+       title: Change in migration from edgectl
+       body: >-
+         Telepresence no longer automatically shuts down the old
+         api_version=1 edgectl daemon. If migrating
+         from such an old version of edgectl, you must now manually
+         shut down the edgectl daemon before running Telepresence.
+         This was already the case when migrating from the newer
+         api_version=2 edgectl.
+     - type: bugfix
+       title: Fixed error during shutdown
+       body: >-
+         The root daemon no longer terminates when the user daemon disconnects
+         from its gRPC streams, and instead waits to be terminated by the CLI.
+         The old behavior could cause problems with things not being cleaned up correctly.
+     - type: bugfix
+       title: Intercepts will survive deletion of intercepted pod
+       body: >-
+         An intercept will survive deletion of the intercepted pod provided
+         that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+   date: "2021-06-18"
+   notes:
+     # Headliners
+     - type: feature
+       title: Service Port Annotation
+       body: >-
+         The mutator webhook for injecting traffic-agents now
+         recognizes a
+         telepresence.getambassador.io/inject-service-port
+         annotation to specify which port to intercept; bringing the
+         functionality of the --port flag to users who
+         use the mutator webhook in order to control Telepresence via
+         GitOps.
+       image: ./telepresence-2.3.2-svcport-annotation.png
+       docs: reference/cluster-config#service-port-annotation
+     - type: feature
+       title: Outbound Connections
+       body: >-
+         Outbound connections are now routed through the intercepted
+         Pods, which means that the connections originate from that
+         Pod from the cluster's perspective. This allows service
+         meshes to correctly identify the traffic.
+       docs: reference/routing/#outbound
+     - type: change
+       title: Inbound Connections
+       body: >-
+         Inbound connections from an intercepted agent are now
+         tunneled to the manager over the existing gRPC connection,
+         instead of establishing a new connection to the manager for
+         each inbound connection. This avoids interference from
+         certain service mesh configurations.
+       docs: reference/routing/#inbound
+
+     # RBAC changes
+     - type: change
+       title: Traffic Manager needs new RBAC permissions
+       body: >-
+         The Traffic Manager requires RBAC
+         permissions to list Nodes and Pods, and to create a dummy
+         Service in the manager's namespace.
+       docs: reference/routing/#subnets
+     - type: change
+       title: Reduced developer RBAC requirements
+       body: >-
+         The on-laptop client no longer requires RBAC permissions to list the Nodes
+         in the cluster or to create Services, as that functionality
+         has been moved to the Traffic Manager.
+
+     # Bugfixes
+     - type: bugfix
+       title: Able to detect subnets
+       body: >-
+         Telepresence will now detect the Pod CIDR ranges even if
+         they are not listed in the Nodes.
+       image: ./telepresence-2.3.2-subnets.png
+       docs: reference/routing/#subnets
+     - type: bugfix
+       title: Dynamic IP ranges
+       body: >-
+         The list of cluster subnets that the virtual network
+         interface will route is now configured dynamically and will
+         follow changes in the cluster.
+     - type: bugfix
+       title: No duplicate subnets
+       body: >-
+         Subnets fully covered by other subnets are now pruned
+         internally and thus never superfluously added to the
+         laptop's routing table.
+       docs: reference/routing/#subnets
+     - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+       title: Change in default timeout
+       body: >-
+         The trafficManagerAPI timeout default has
+         changed from 5 seconds to 15 seconds, in order to accommodate
+         the extended time it takes for the traffic-manager to do its
+         initial discovery of cluster info as a result of the above
+         bugfixes.
+     - type: bugfix
+       title: Removal of DNS config files on macOS
+       body: >-
+         On macOS, files generated under
+         /etc/resolver/ as the result of using
+         include-suffixes in the cluster config are now
+         properly removed on quit.
+       docs: reference/routing/#macos-resolver
+
+     - type: bugfix
+       title: Large file transfers
+       body: >-
+         Telepresence no longer erroneously terminates connections
+         early when sending a large HTTP response from an intercepted
+         service.
+     - type: bugfix
+       title: Race condition in shutdown
+       body: >-
+         When shutting down the user-daemon or root-daemon on the
+         laptop, telepresence quit and related commands
+         no longer return early before everything is fully shut down.
+         It can now be counted on that, by the time the command has
+         returned, all of the side effects on the laptop have been
+         cleaned up.
+ - version: 2.3.1
+   date: "2021-06-14"
+   notes:
+     - title: DNS Resolver Configuration
+       body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included. These can be configured on a per-cluster basis."
+       image: ./telepresence-2.3.1-dns.png
+       docs: reference/config
+       type: feature
+     - title: AlsoProxy Configuration
+       body: "Telepresence now supports also proxying user-specified subnets so that users can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+       image: ./telepresence-2.3.1-alsoProxy.png
+       docs: reference/config
+       type: feature
+     - title: Mutating Webhook for Injecting Traffic Agents
+       body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+       image: ./telepresence-2.3.1-inject.png
+       docs: reference/rbac
+       type: feature
+     - title: Traffic Manager Connect Timeout
+       body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook."
+       image: ./telepresence-2.3.1-trafficmanagerconnect.png
+       docs: reference/config
+       type: change
+     - title: Fix for large file transfers
+       body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+       image: ./telepresence-2.3.1-large-file-transfer.png
+       docs: reference/tun-device
+       type: bugfix
+     - title: Brew Formula Changed
+       body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+       image: ./telepresence-2.3.1-brew.png
+       docs: install/
+       type: change
+ - version: 2.3.0
+   date: "2021-06-01"
+   notes:
+     - title: Brew install Telepresence
+       body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+       image: ./telepresence-2.3.0-homebrew.png
+       docs: install/
+       type: feature
+     - title: TCP and UDP routing via Virtual Network Interface
+       body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+       image: ./tunnel.jpg
+       docs: reference/tun-device
+       type: feature
+     - title: SSH is no longer used
+       body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+       image: ./no-ssh.png
+       docs: reference/tun-device/#no-ssh-required
+       type: change
+     - title: Running in a Docker container
+       body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+       image: ./run-tp-in-docker.png
+       docs: reference/inside-container
+       type: feature
+     - title: Configurable Log Levels
+       body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+       image: ./telepresence-2.3.0-loglevels.png
+       docs: reference/config/#log-levels
+       type: feature
+ - version: 2.2.2
+   date: "2021-05-17"
+   notes:
+     - title: Legacy Telepresence subcommands
+       body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands, so if you want to get started quickly, you can just use the legacy Telepresence commands you are used to with the new Telepresence binary.
+       image: ./telepresence-2.2.png
+       docs: install/migrate-from-legacy/
+       type: feature
diff --git a/docs/telepresence/2.14/troubleshooting/index.md b/docs/telepresence/2.14/troubleshooting/index.md
new file mode 100644
index 000000000..5a477f20a
--- /dev/null
+++ b/docs/telepresence/2.14/troubleshooting/index.md
@@ -0,0 +1,331 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept, you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted VM](../howtos/cluster-in-vm) documentation for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
+If an organization isn't listed during login,
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button,
+which sends an email to the owner. If an access request has been
+denied in the past, the user will not see the **Request** button and
+will have to reach out to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button and will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+and back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation, because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to System Preferences).
+6. Approve the needed permission
+7. Reboot your computer.
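+
+For reference, steps 1-4 above can also be run as a single shell sequence (a sketch of the same commands; `brew uninstall` may report that some of the packages aren't installed, which is harmless):
+
+```console
+$ brew uninstall sshfs macfuse osxfuse   # step 1: remove any old installs
+$ brew install --cask macfuse            # step 2
+$ brew install gromgit/fuse/sshfs-mac    # step 3
+$ brew link --overwrite sshfs-mac        # step 4
+$ sshfs -V                               # verify before attempting a mount
+```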
+
+## Volume mounts are not working on Linux
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts.
+
+After you've installed `sshfs`, if mounts still aren't working:
+1. Uncomment `user_allow_other` in `/etc/fuse.conf`
+2. Add your user to the "fuse" group with: `sudo usermod -a -G fuse <your-user>`
+3. Restart your computer after uncommenting `user_allow_other`
+
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the `ModHeader` extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which Telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of Telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into Telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of, e.g.,
+Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+**Note:** Since traces are not automatically shipped to the backend by Telepresence, they are stored in memory. Hence, to avoid running Telepresence components out of memory, only the last 10MB of trace data are available for export.
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on how to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., containerPort: http becomes containerPort: tm-http).
+2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help, because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
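+
+To confirm the remapping described above on a pod after the traffic-agent has been injected, one option (a sketch; `<intercepted-pod>` is a placeholder for your pod's name) is to list each container's port names and check that the application container's port now carries the `tm-` prefix while the traffic-agent's port uses the original name:
+
+```console
+$ kubectl get pod <intercepted-pod> \
+    -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.ports[*].name}{"\n"}{end}'
+```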
+
+## Error connecting to GKE or EKS cluster
+
+GKE and EKS require a plugin that utilizes their respective IAM providers.
+You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins
+for Telepresence to connect to your cluster.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a `too many files open` message in the logs, check whether `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
+
+## Connected to cluster via VPN but IPs don't resolve
+
+If `telepresence connect` succeeds, but you find yourself unable to reach services on your cluster, a routing conflict may be to blame. This frequently happens when connecting to a VPN at the same time as Telepresence,
+since VPN clients often add routes that conflict with those added by Telepresence. To debug this, pick an IP address in the cluster and get its route information. In this case, we'll get the route for `100.124.150.45`, and discover
+that it's running through a `tailscale` device.
+
+```console
+$ route -n get 100.124.150.45
+   route to: 100.64.2.3
+destination: 100.64.0.0
+       mask: 255.192.0.0
+  interface: utun4
+      flags:
+ recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
+       0         0         0         0         0         0      1280         0
+```
+
+Note that on macOS it's difficult to determine which software a virtual interface's name corresponds to -- `utun4` doesn't indicate that it was created by tailscale.
+One option is to look at the output of `ifconfig` before and after connecting to your VPN to see if the interface in question is being added upon connection.
+
+```console
+$ ip route get 100.124.150.45
+100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0
+```
+
+```console
+$ Find-NetRoute -RemoteIPAddress 100.124.150.45
+
+IPAddress         : 100.102.111.26
+InterfaceIndex    : 29
+InterfaceAlias    : Tailscale
+AddressFamily     : IPv4
+Type              : Unicast
+PrefixLength      : 32
+PrefixOrigin      : Manual
+SuffixOrigin      : Manual
+AddressState      : Preferred
+ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
+PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
+SkipAsSource      : False
+PolicyStore       : ActiveStore
+
+
+Caption            :
+Description        :
+ElementName        :
+InstanceID         : ;::8;;;8
+```
+
+This will tell you which device the traffic is being routed through. As a rule, if the traffic is not being routed by the telepresence device,
+your VPN may need to be reconfigured, as its routing configuration is conflicting with telepresence. One way to determine if this is the case
+is to run `telepresence quit -s`, check the route for an IP in the cluster (see commands above), run `telepresence connect`, and re-run the commands to see if the output changes.
+If it doesn't change, that means telepresence is unable to override your VPN routes, and your VPN may need to be reconfigured. Talk to your network admins
+to configure it such that clients do not add routes that conflict with the pod and service CIDRs of the clusters. How this is done will
+vary depending on the VPN provider.
+Future versions of telepresence will be smarter about informing you of such conflicts upon connection.
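+
+As a concrete sketch of that quit-and-compare procedure on Linux (macOS users would substitute `route -n get` for `ip route get`), using the example address from this section:
+
+```console
+$ telepresence quit -s            # stop all local Telepresence daemons
+$ ip route get 100.124.150.45     # route with only the VPN connected
+$ telepresence connect
+$ ip route get 100.124.150.45     # the route should now use the telepresence device
+```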
diff --git a/docs/telepresence/2.14/versions.yml b/docs/telepresence/2.14/versions.yml
new file mode 100644
index 000000000..17feab19b
--- /dev/null
+++ b/docs/telepresence/2.14/versions.yml
@@ -0,0 +1,5 @@
+version: "2.14.4"
+dlVersion: "latest"
+docsVersion: "2.14"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.15 b/docs/telepresence/2.15
deleted file mode 120000
index 4171330a2..000000000
--- a/docs/telepresence/2.15
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.15
\ No newline at end of file
diff --git a/docs/telepresence/2.15/ci/github-actions.md b/docs/telepresence/2.15/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.15/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+  - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+  - **install**: Installs Telepresence in your CI server, either with the latest version or with one you specify.
+  - **login**: Logs in to Telepresence so you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+  - **connect**: Connects to the remote target environment.
+  - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself.
+You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+    - cluster:
+        server: https://127.0.0.1
+        extensions:
+          - name: telepresence.io
+            extension:
+              manager:
+                namespace: traffic-manager-namespace
+      name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+
+
+
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This configuration file should be placed in `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run GitHub Actions properly when there is a new push to the branch in your repository.
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify the placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # First you need to make your kubeconfig available for Telepresence
+         - name: Create kubeconfig file
+           run: |
+             cat <<EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+The preceding is an example of a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Includes a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.15/ci/pod-daemon.md b/docs/telepresence/2.15/ci/pod-daemon.md
new file mode 100644
index 000000000..9342a2d86
--- /dev/null
+++ b/docs/telepresence/2.15/ci/pod-daemon.md
@@ -0,0 +1,202 @@
+---
+title: Pod Daemon
+description: "Pod Daemon and how to integrate it in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+# Telepresence with Pod Daemon
+
+
+The Pod Daemon facilitates the execution of Telepresence by using a Pod as a sidecar to your application. This becomes particularly beneficial when you intend to incorporate Deployment Previews into your pipeline. Essentially, the pod-daemon is a Telepresence instance running in a pod, rather than operating on a developer's laptop.
+
+This presents a compelling solution for developers who wish to share a live iteration of their work within the organization. A preview URL can be produced, which links directly to the image created during the Continuous Integration (CI) process. This preview URL can then be appended to the pull request, streamlining the code review process and enabling real-time project sharing within the team.
+
+## Overview
+
+The Pod Daemon functions as an optimized version of Telepresence, undertaking all preliminary configuration tasks (such as login and daemon startup), and additionally executing the intercept.
+
+The initial setup phase involves deploying a service account with the minimal permissions necessary for running Telepresence, coupled with a secret that holds the API key essential for executing a Telepresence login.
+
+Following this setup, your main responsibility consists of deploying your operational application, which incorporates a pod daemon operating as a sidecar. The parameters for the pod daemon require the relevant details concerning your live application. As it starts, the pod daemon will intercept your live application and divert traffic towards your working application. This traffic redirection is based on your configured headers, which come into play each time the application is accessed.
+

+ +

+
+## Usage
+
+To commence the setup, it's necessary to deploy both a service account and a secret. Here's how to go about it:
+
+1. Establish a connection to your cluster and proceed to deploy this within the namespace of your live application (default in this case).
+
+   ```yaml
+   ---
+   apiVersion: v1
+   kind: ServiceAccount
+   metadata:
+     name: ambassador-deploy-previews
+     namespace: default
+     labels:
+       app.kubernetes.io/name: ambassador-deploy-previews
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRoleBinding
+   metadata:
+     name: ambassador-deploy-previews
+     labels:
+       app.kubernetes.io/name: ambassador-deploy-previews
+   roleRef:
+     apiGroup: rbac.authorization.k8s.io
+     kind: ClusterRole
+     name: ambassador-deploy-previews
+   subjects:
+     - name: ambassador-deploy-previews
+       namespace: default
+       kind: ServiceAccount
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRole
+   metadata:
+     labels:
+       rbac.getambassador.io/role-group: ambassador-deploy-previews
+     name: ambassador-deploy-previews
+   rules:
+     - apiGroups: [ "" ]
+       verbs: [ "get", "list", "watch", "create", "delete" ]
+       resources:
+         - namespaces
+         - pods
+         - pods/log
+         - pods/portforward
+         - services
+         - secrets
+         - configmaps
+         - endpoints
+         - nodes
+         - deployments
+         - serviceaccounts
+
+     - apiGroups: [ "apps", "rbac.authorization.k8s.io", "admissionregistration.k8s.io" ]
+       verbs: [ "get", "list", "create", "update", "watch" ]
+       resources:
+         - deployments
+         - statefulsets
+         - clusterrolebindings
+         - rolebindings
+         - clusterroles
+         - replicasets
+         - roles
+         - serviceaccounts
+         - mutatingwebhookconfigurations
+
+     - apiGroups: [ "getambassador.io" ]
+       verbs: [ "get", "list", "watch" ]
+       resources: [ "*" ]
+
+     - apiGroups: [ "getambassador.io" ]
+       verbs: [ "update" ]
+       resources: [ "mappings/status" ]
+
+     - apiGroups: [ "networking.x-k8s.io" ]
+       verbs: [ "get", "list", "watch" ]
+       resources: [ "*" ]
+
+     - apiGroups: [ "networking.internal.knative.dev" ]
+       verbs: [ "get", "list", "watch" ]
+       resources: [ "ingresses", "clusteringresses" ]
+
+     - apiGroups: [ "networking.internal.knative.dev" ]
+       verbs: [ "update" ]
+       resources: [ "ingresses/status", "clusteringresses/status" ]
+
+     - apiGroups: [ "extensions", "networking.k8s.io" ]
+       verbs: [ "get", "list", "watch" ]
+       resources: [ "ingresses", "ingressclasses" ]
+
+     - apiGroups: [ "extensions", "networking.k8s.io" ]
+       verbs: [ "update" ]
+       resources: [ "ingresses/status" ]
+   ---
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: deployment-preview-apikey
+     namespace: default
+   type: Opaque
+   stringData:
+     AMBASSADOR_CLOUD_APIKEY: "{YOUR_API_KEY}"
+
+   ```
+
+2. Following this, you will need to deploy the iteration image together with the pod daemon, serving as a sidecar. In order to utilize the pod-daemon command, the environment variable `IS_POD_DAEMON` must be set to `True`. This setting is a prerequisite for activating the pod-daemon functionality.
+
+   ```yaml
+   ---
+   apiVersion: apps/v1
+   kind: Deployment
+   metadata:
+     name: quote-ci
+   spec:
+     selector:
+       matchLabels:
+         run: quote-ci
+     replicas: 1
+     template:
+       metadata:
+         labels:
+           run: quote-ci
+       spec:
+         serviceAccountName: ambassador-deploy-previews
+         containers:
+         # Include your application container
+         # - name: your-original-application
+         #   image: image-built-from-pull-request
+         #   [...]
+           # Inject the pod-daemon container
+           # In the following example, we'll demonstrate how to integrate the pod-daemon container by intercepting the quote app
+           - name: pod-daemon
+             image: datawire/telepresence:$version$
+             ports:
+               - name: http
+                 containerPort: 80
+               - name: https
+                 containerPort: 443
+             resources:
+               limits:
+                 cpu: "0.1"
+                 memory: 100Mi
+             args:
+               - pod-daemon
+               - --workload-name=quote
+               - --workload-namespace=default
+               - --workload-kind=Deployment
+               - --port=8080
+               - --http-header=test-telepresence=1 # A custom header can be specified
+               - --ingress-tls=false
+               - --ingress-port=80
+               - --ingress-host=quote.default.svc.cluster.local
+               - --ingress-l5host=quote.default.svc.cluster.local
+             env:
+               - name: AMBASSADOR_CLOUD_APIKEY
+                 valueFrom:
+                   secretKeyRef:
+                     name: deployment-preview-apikey
+                     key: AMBASSADOR_CLOUD_APIKEY
+               - name: TELEPRESENCE_MANAGER_NAMESPACE
+                 value: ambassador
+               - name: IS_POD_DAEMON
+                 value: "True"
+   ```
+
+3. The preview URL can be found in the pod daemon's logs:
+
+   ```bash
+   kubectl logs -f quote-ci-6dcc864445-x98wt -c pod-daemon
+   ```
\ No newline at end of file
diff --git a/docs/telepresence/2.15/community.md b/docs/telepresence/2.15/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.15/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.15/concepts/context-prop.md b/docs/telepresence/2.15/concepts/context-prop.md
new file mode 100644
index 000000000..46993af06
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/context-prop.md
@@ -0,0 +1,39 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers, so context propagation is often referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Propagation means that services and other middleware along the request path leave these headers intact, so the injected context can travel with the request to downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications, and it is one of the most common applications of context propagation. It is becoming a key component of microservice debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request's entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which helps the headers survive every service hop without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to identify which traffic to intercept, both for plain personal intercepts and for personal intercepts with preview URLs. These techniques are more commonly associated with distributed tracing, so using them for intercepts is a little unorthodox, but the mechanisms involved are already widely deployed because of the prevalence of tracing. The headers enable the smart routing of requests either to live services in the cluster or to services running locally on a developer's machine. The intercepted traffic can be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) along with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+If you need to implement context propagation in your environment to support personal intercepts in services deeper in the stack, we [offer a guide to doing so](https://github.com/ambassadorlabs/telepresence-header-propagation) with the lowest complexity and effort possible.
diff --git a/docs/telepresence/2.15/concepts/devloop.md b/docs/telepresence/2.15/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer experience and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results.
The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, all aimed at making the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid "live-reload" inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but each of these steps adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only "developer tax" being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build step grows to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the loop takes roughly 9 minutes and the number of possible development iterations per day drops to ~40. At the extreme that's roughly a 40% decrease in potential new features being released.
This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/2.15/concepts/devworkflow.md b/docs/telepresence/2.15/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/2.15/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. 
diff --git a/docs/telepresence/2.15/concepts/faster.md b/docs/telepresence/2.15/concepts/faster.md
new file mode 100644
index 000000000..3950dce38
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster.
This pod proxies data from the Kubernetes environment (e.g., TCP/UDP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
diff --git a/docs/telepresence/2.15/concepts/goldenpaths.md b/docs/telepresence/2.15/concepts/goldenpaths.md
new file mode 100644
index 000000000..15f084224
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/goldenpaths.md
@@ -0,0 +1,9 @@
+# Golden Paths
+
+A golden path is a best practice or standardized process for using Telepresence, typically aimed at optimizing productivity or quality control. It can serve as a benchmark or reference point for measuring success and progress toward a particular goal or outcome.
+
+We provide Golden Paths for the use cases listed below.
+
+1. [Intercept Specifications](../goldenpaths/specs)
+2. [Using Telepresence with Docker](../goldenpaths/docker)
+3. [Docker Compose](../goldenpaths/compose)
\ No newline at end of file
diff --git a/docs/telepresence/2.15/concepts/goldenpaths/compose.md b/docs/telepresence/2.15/concepts/goldenpaths/compose.md
new file mode 100644
index 000000000..e3a6db407
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/goldenpaths/compose.md
@@ -0,0 +1,63 @@
+# Telepresence with Docker Compose Golden Path
+
+## Why?
+
+When adopting Telepresence, you may be hesitant to throw away all the investment you made replicating your infrastructure with
+[Docker Compose](https://docs.docker.com/compose/).
+
+Thankfully, it doesn't have to be this way: you can combine an [Intercept Specification](../specs) with [Docker mode](../docker) to integrate your Docker Compose file.
+
+## How?
+Telepresence Intercept Specifications are integrated with Docker Compose! Let's look at an example to see how it works.
+
+Below are an Intercept Spec and a Docker Compose file that intercept an echo service using a custom header, with the intercepted traffic handled by a service created through Docker Compose.
+
+Intercept Spec:
+```yaml
+workloads:
+  - name: echo
+    intercepts:
+      - handler: echo
+        localPort: 8080
+        port: 80
+        headers:
+          - name: "{{ .Telepresence.Username }}"
+            value: "1"
+handlers:
+  - name: echo
+    docker:
+      compose:
+        services:
+          - name: echo
+            behavior: interceptHandler
+```
+
+The Docker Compose file creates two services: a Postgres database and your local echo service. The local echo service uses Docker's [watch](https://docs.docker.com/compose/file-watch/) feature to take advantage of hot reloads.
+
+Docker Compose file:
+```yaml
+services:
+  postgres:
+    image: "postgres:14.1"
+    ports:
+      - "5432"
+  echo:
+    build: .
+    ports:
+      - "8080"
+    x-develop:
+      watch:
+        - action: rebuild
+          path: main.go
+    environment:
+      DATABASE_HOST: "localhost:5432"
+      DATABASE_PASSWORD: postgres
+      DEV_MODE: "true"
+```
+
+By combining Intercept Specifications and Docker Compose, you can intercept the traffic going to your cluster while developing on multiple local services and utilizing hot reloads.
+
+## Key learnings
+
+* Using **Docker Compose** with **Telepresence** allows you to have a **hybrid** development setup between local & remote.
+* You can **reuse your existing setup** with minimum effort.
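+
+To try the combination above, you run the Intercept Specification the same way as any other spec, and Telepresence should bring up the Docker Compose services for you. A minimal sketch, assuming the spec above is saved as `echo-spec.yaml` (the file name is illustrative) next to your compose file:
+
+```bash
+# Starts the intercept and the compose services referenced by the spec.
+# "echo-spec.yaml" is a hypothetical name; use whatever you saved the spec as.
+telepresence intercept run echo-spec.yaml
+```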
diff --git a/docs/telepresence/2.15/concepts/goldenpaths/docker.md b/docs/telepresence/2.15/concepts/goldenpaths/docker.md
new file mode 100644
index 000000000..863aa497a
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/goldenpaths/docker.md
@@ -0,0 +1,70 @@
+# Telepresence with Docker Golden Path
+
+## Why?
+
+Adopting Telepresence across an organization can be tedious: in its handiest form it requires admin access and has to get along with whatever exotic
+networking setup your company may have.
+
+If Docker is already approved in your organization, this Golden Path is worth considering.
+
+## How?
+
+When using Telepresence in Docker mode, users can eliminate the need for admin access on their machines, address several networking challenges, and forego the need for third-party applications to enable volume mounts.
+
+You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container, removing the need for root access and making Telepresence easier to adopt across an organization.
+
+Let's illustrate with a quick demo, assuming a Kubernetes context named `default` and a simple HTTP service:
+
+```cli
+$ telepresence connect --docker
+Connected to context default (https://default.cluster.bakerstreet.io)
+
+$ docker ps
+CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS          PORTS                        NAMES
+7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"   18 seconds ago   Up 16 seconds   127.0.0.1:58802->58802/tcp   tp-default
+```
+
+This method limits the scope of potential networking issues, since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-<context>` when listing your containers.
+
+Start an intercept:
+
+```cli
+$ telepresence intercept echo-easy --port 8080:80 -n default
+Using Deployment echo-easy
+   Intercept name         : echo-easy-default
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 127.0.0.1:8080
+   Service Port Identifier: proxied
+   Volume Mount Point     : /var/folders/x_/4x_4pfvx2j3_94f36x551g140000gp/T/telfs-505935483
+   Intercepting           : HTTP requests with headers
+         'x-telepresence-intercept-id: e20f0764-7fd8-45c1-b911-b2adeee1af45:echo-easy-default'
+   Preview URL            : https://gracious-ishizaka-5365.preview.edgestack.me
+   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
+```
+
+Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context>`, and open the preview URL to see the traffic routed to your machine.
+
+```cli
+$ docker run \
+  --network=container:tp-default \
+  -e PORT=8080 jmalloc/echo-server
+Echo server listening on port 8080.
+127.0.0.1:41500 | GET /
+127.0.0.1:41512 | GET /favicon.ico
+127.0.0.1:41500 | GET /
+127.0.0.1:41512 | GET /favicon.ico
+```
+
+For users utilizing Docker mode in Telepresence, we strongly recommend using [Intercept Specifications](../specs) to seamlessly configure the Intercept Handler as a Docker container.
+
+It's essential that users also open the debugging port on their container so they can attach their local debugger from their IDE.
+By leveraging Intercept Specifications and Docker mode together, users can optimize their Telepresence experience and streamline their debugging workflow.
+
+## Key learnings
+
+* Telepresence's Docker mode **does not require root access**, making it **easier** to adopt across your organization.
+* It **limits the potential networking issues** you can encounter.
+* It leverages **Docker** for your interceptor.
+* You can use it with the [Intercept Specifications](../specs).
diff --git a/docs/telepresence/2.15/concepts/goldenpaths/specs.md b/docs/telepresence/2.15/concepts/goldenpaths/specs.md
new file mode 100644
index 000000000..0d8e5dc30
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/goldenpaths/specs.md
@@ -0,0 +1,80 @@
+# Intercept Specification Golden Path
+
+## Why?
+
+Telepresence can be difficult to adopt organization-wide. Each developer has their own local setup and their own way of running Telepresence, which adds variables and duplicates work across developers.
+
+For these reasons, and many others, we recommend using [Intercept Specifications](../../../reference/intercepts/specs).
+
+## How?
+
+With an Intercept Specification you write a YAML file, similar to a CI workflow or a Docker Compose file, that enables standardization across your developers.
+
+With a spec you can define the Kubernetes context to work in, the workload you want to intercept, the local intercept handler your traffic will flow to, and any pre- or post-requisites required to run your applications.
+
+Let's look at an example:
+
+I have a service `quote` running in the `default` namespace that I want to intercept to test changes I've made before opening a Pull Request.
+
+I can use the Intercept Specification below to tell Telepresence to intercept the quote service with a [Personal Intercept](../../../reference/intercepts#personal-intercept), in the default namespace of my cluster `test-cluster`. I also want to start the Intercept Handler, as a Docker container, with the provided image.
+
+```yaml
+---
+connection:
+  context: test-cluster
+workloads:
+  - name: quote
+    namespace: default
+    intercepts:
+      - headers:
+          - name: test-{{ .Telepresence.Username }}
+            value: "{{ .Telepresence.Username }}"
+        localPort: 8080
+        mountPoint: "false"
+        port: 80
+        handler: quote
+        service: quote
+        previewURL:
+          enable: true
+handlers:
+  - name: quote
+    environment:
+      - name: PORT
+        value: "8080"
+    docker:
+      image: docker.io/datawire/quote:0.5.0
+```
+
+You can then run this Intercept Specification with:
+
+```cli
+telepresence intercept run quote-spec.yaml
+   Intercept name         : quote-default
+   State                  : ACTIVE
+   Workload kind          : Deployment
+   Destination            : 127.0.0.1:8080
+   Service Port Identifier: http
+   Intercepting           : HTTP requests with headers
+         'test-user =~ user'
+   Preview URL            : https://charming-newton-3109.preview.edgestack.me
+   Layer 5 Hostname       : quote.default.svc.cluster.local
+Intercept spec "quote-spec" started successfully, use ctrl-c to cancel.
+2023/04/12 16:05:00 CONSUL_IP environment variable not found, continuing without Consul registration
+2023/04/12 16:05:00 listening on :8080
+```
+
+You can see that the Intercept was started, and if I check the local Docker containers, I can see that the Telepresence daemon is running in a container and my Intercept Handler was successfully started.
+
+```cli
+docker ps
+
+CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS                        NAMES
+bdd99d244fbb   datawire/quote:0.5.0           "/bin/qotm"              2 minutes ago   Up 2 minutes                                tp-quote
+5966d7099adf   datawire/telepresence:2.12.1   "telepresence connec…"   2 minutes ago   Up 2 minutes   127.0.0.1:58443->58443/tcp   tp-test-cluster
+```
+
+## Key Learnings
+
+* Using an Intercept Specification enables you to create a standardized, easy-to-share approach to Intercepts across your organization.
+* You can easily leverage Docker to remove other potential hiccups associated with networking.
+* There are many more great things you can do with an Intercept Specification; check them out [here](../../../reference/intercepts/specs)
\ No newline at end of file
diff --git a/docs/telepresence/2.15/concepts/intercepts.md b/docs/telepresence/2.15/concepts/intercepts.md
new file mode 100644
index 000000000..bf0bfd5b3
--- /dev/null
+++ b/docs/telepresence/2.15/concepts/intercepts.md
@@ -0,0 +1,208 @@
+---
+title: "Types of intercepts"
+description: "Short demonstration of personal vs global intercepts"
+---
+
+import React from 'react';
+
+import Alert from '@material-ui/lab/Alert';
+import AppBar from '@material-ui/core/AppBar';
+import Paper from '@material-ui/core/Paper';
+import Tab from '@material-ui/core/Tab';
+import TabContext from '@material-ui/lab/TabContext';
+import TabList from '@material-ui/lab/TabList';
+import TabPanel from '@material-ui/lab/TabPanel';
+import Animation from '@src/components/InterceptAnimation';
+
+export function TabsContainer({ children, ...props }) {
+    const [state, setState] = React.useState({curTab: "personal"});
+    React.useEffect(() => {
+        const query = new URLSearchParams(window.location.search);
+        var interceptType = query.get('intercept') || "personal";
+        if (state.curTab != interceptType) {
+            setState({curTab: interceptType});
+        }
+    }, [state, setState])
+    var setURL = function(newTab) {
+        history.replaceState(null,null,
+            `?intercept=${newTab}${window.location.hash}`,
+        );
+    };
+    return (
+ + + {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types"> + + + + + + {children} + +
+ ); +}; + +# Types of intercepts + + + + +# No intercept + + + + +This is the normal operation of your cluster without Telepresence. + + + + + +# Global intercept + + + + +**Global intercepts** replace the Kubernetes "Orders" service with the +Orders service running on your laptop. The users see no change, but +with all the traffic coming to your laptop, you can observe and debug +with all your dev tools. + + + +### Creating and using global intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=all + ``` + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service: + + All requests will be sent to the version of your service that is + running in the local development environment. + + + + +# Personal intercept + +**Personal intercepts** allow you to be selective and intercept only +some of the traffic to a service while not interfering with the rest +of the traffic. This allows you to share a cluster with others on your +team without interfering with their work. + +Personal intercepts are subject to different plans. +To read more about their capabilities & limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions). + + + + +In the illustration above, **Orange** +requests are being made by Developer 2 on their laptop and the +**green** are made by a teammate, +Developer 1, on a different laptop. + +Each developer can intercept the Orders service for their requests only, +while sharing the rest of the development environment. + + + +### Creating and using personal intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b + ``` + + We're using + `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the + header for the sake of the example, but you can use any + `key=value` pair you want, or `--http-header=auto` to have it + choose something automatically. + + + + Make sure your current kubect context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service by passing the + HTTP header: + + ```http + Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b + ``` + + + + Need a browser extension to modify or remove an HTTP-request-headers? + + Chrome + {' '} + Firefox + + + + 3. Using the intercept: Send requests to your service without the + HTTP header: + + Requests without the header will be sent to the version of your + service that is running in the cluster. This enables you to share + the cluster with a team! + +### Intercepting a specific endpoint + +It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an +intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx` +flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept +and, contrary to the `--http-header` flag, it cannot be repeated. 
+
+The following flags are available:
+
+| Flag                          | Meaning                                                           |
+|-------------------------------|-------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                   |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix              |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression  |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implicit when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
+
+
diff --git a/docs/telepresence/2.15/doc-links.yml b/docs/telepresence/2.15/doc-links.yml
new file mode 100644
index 000000000..2ae653691
--- /dev/null
+++ b/docs/telepresence/2.15/doc-links.yml
@@ -0,0 +1,121 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: "Making the remote local: Faster feedback, collaboration and debugging"
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+    - title: Golden Paths
+      link: concepts/goldenpaths
+      items:
+        - title: Intercept Specifications
+          link: concepts/goldenpaths/specs
+        - title: Docker Mode
+          link: concepts/goldenpaths/docker
+        - title: Docker Compose integration
+          link: concepts/goldenpaths/compose
+- title: How do I...
+ items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with Personal Intercepts + link: howtos/personal-intercepts + - title: Share public previews with Preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Host a cluster in a local VM + link: howtos/cluster-in-vm + - title: Send requests to an intercepted service + link: howtos/request + - title: Package and share my intercepts + link: howtos/package +- title: Telepresence with Docker + items: + - title: Telepresence for Docker Compose + link: docker/compose + - title: Telepresence for Docker Extension + link: docker/extension + - title: Telepresence in Docker Mode + link: docker/cli +- title: Telepresence for CI + items: + - title: Github Actions + link: ci/github-actions + - title: Pod Daemons + link: ci/pod-daemon +- title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Configure intercept using CLI + link: reference/intercepts/cli + - title: Configure intercept using specifications + link: reference/intercepts/specs + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd +- title: FAQs + link: faqs +- title: Troubleshooting + link: troubleshooting +- title: Community + link: community +- title: Release Notes + link: release-notes +- title: Licenses + link: licenses diff --git a/docs/telepresence/2.15/docker/cli.md b/docs/telepresence/2.15/docker/cli.md new file mode 100644 index 000000000..7b37ba2a8 --- /dev/null +++ b/docs/telepresence/2.15/docker/cli.md @@ -0,0 +1,281 @@ +--- +title: "Telepresence in Docker Mode" +description: "Claim a remote demo cluster and learn about running Telepresence in Docker Mode, speeding up local development and debugging." +indexable: true +--- + +import { EmojivotoServicesList, DCPLink, Login, DemoClusterWarning } from "../../../../../src/components/Docs/Telepresence"; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; + +# Telepresence in Docker Mode + +
+

Contents

+
+* [What is Telepresence Docker Mode?](#what-is-telepresence-docker-mode)
+* [Key Benefits](#key-benefits)
+* [Prerequisites](#prerequisites)
+* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
+* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
+* [3. Testing the fix in your local environment](#3-testing-the-fix-in-your-local-environment)
+* [4. Download the demo cluster config file](#4-download-the-demo-cluster-config-file)
+* [5. Enable Telepresence Docker mode](#5-enable-telepresence-docker-mode)
+* [6. Set up your local development environment and make a global intercept](#6-set-up-your-local-development-environment-and-make-a-global-intercept)
+* [7. Make a personal intercept](#7-make-a-personal-intercept)
+
+ +Welcome to the quickstart guide for Telepresence Docker mode! In this hands-on tutorial, we will explore the powerful features of Telepresence and learn how to leverage Telepresence Docker mode to enhance local development and debugging workflows. + +## What is Telepresence Docker Mode? + +Telepresence Docker Mode enables you to run a single service locally while seamlessly connecting it to a remote Kubernetes cluster. This mode enables developers to accelerate their development cycles by providing a fast and efficient way to iterate on code changes without requiring admin access on their machines. + +## Key Benefits + +When using Telepresence in Docker mode, you can enjoy the following benefits: + +1. **Simplified Development Setup**: Eliminate the need for admin access on your local machine, making it easier to set up and configure your development environment. + +2. **Efficient Networking**: Address common networking challenges by seamlessly connecting your locally running service to a remote Kubernetes cluster. This enables you to leverage the cluster's resources and dependencies while maintaining a productive local development experience. + +3. **Enhanced Debugging**: Gain the ability to debug your service in its natural environment, directly from your local development environment. This eliminates the need for complex workarounds or third-party applications to enable volume mounts or access remote resources. + +## Prerequisites + +1. [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). Kubectl is the official Kubernetes command-line tool. You will use it regularly to interact with your cluster, whether deploying applications, inspecting resources, or debugging issues. + +2. [Telepresence 2.13 or latest](../../install). Telepresence is a command-line tool that lets you run a single service locally, while connecting that service to a remote Kubernetes cluster. You can use Telepresence to speed up local development and debugging. + +3. [Docker Desktop](https://www.docker.com/get-started). Docker Desktop is a tool for building and sharing containerized applications and microservices. You'll use Docker Desktop to run a local development environment. + +Now that we have a clear understanding of Telepresence Docker mode and its benefits, let's dive into the hands-on tutorial! + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Testing the fix in your local environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Download and run the image for the service locally: + + ```bash + docker run -d --name ambassador-demo --pull always -p 8083:8083 -p 8080:8080 --rm -it datawire/demoemojivoto + ``` + + + If you're using Docker Desktop on Windows, you may need to enable virtualization to run the container.
> + Make sure that ports 8080 and 8083 are free. If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+ + The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster. + +2. Now, stop the container by running the following command in your terminal: + + ```bash + docker stop ambassador-demo + ``` + +In this section of the quickstart, you ran the Emojivoto application locally. In the next section, you'll use Telepresence to connect your local development environment to the remote Kubernetes cluster. + +## 4. Download the demo cluster config file + +1. {window.open('https://app.getambassador.io/cloud/demo-cluster-download-popup/config', 'ambassador-cloud-demo-cluster', 'menubar=no,location=no,resizable=yes,scrollbars=yes,status=no,width=550,height=750'); e.preventDefault(); }} target="_blank">Download your demo cluster config file. This file contains the credentials you need to access your demo cluster. + +2. Export the file's location to KUBECONFIG by running this command in your terminal: + + + + + ```bash + export KUBECONFIG=/path/to/kubeconfig.yaml + ``` + + + + + ```bash + export KUBECONFIG=/path/to/kubeconfig.yaml + ``` + + + + + ```bash + SET KUBECONFIG=/path/to/kubeconfig.yaml + ``` + + + + + + You should now be able to run `kubectl` commands against your demo cluster. + +3. Verify that you can access the cluster by listing the app's services: + + ``` + $ kubectl get services -n emojivoto + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.131.84 8080/TCP,8801/TCP 3h12m + voting-svc ClusterIP 10.43.32.184 8080/TCP,8801/TCP 3h12m + web-svc ClusterIP 10.43.105.110 8080/TCP 3h12m + web-app ClusterIP 10.43.53.247 80/TCP 3h12m + web-app-canary ClusterIP 10.43.8.90 80/TCP 3h12m + ``` + +## 5. Enable Telepresence Docker mode + +You can simply add the docker flag to any Telepresence command, and it will start your daemon in a container. Thus removing the need for root access, making it easier to adopt as an organization. + +1. Confirm that the Telepresence CLI is now installed, we expect to see that the daemons are not yet running: +`telepresence status` + + ``` + $ telepresence status + User Daemon: Not running + Root Daemon: Not running + Ambassador Cloud: + Status : Logged out + Traffic Manager: Not connected + Intercept Spec: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + +2. Log in to Ambassador Cloud: + + ``` + $ telepresence login + ``` + +3. Then, install the Helm chart and quit Telepresence: + + ```bash + telepresence helm install + telepresence quit -s + ``` + +4. Finally, connect to the remote cluster using Docker mode: + + ``` + $ telepresence connect --docker + Connected to context default (https://default.cluster.bakerstreet.io) + ``` + +5. Verify that you are connected to the remote cluster by listing your Docker containers: + + ``` + $ docker ps + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + 7a0e01cab325 datawire/telepresence:2.12.1 "telepresence connec…" 18 seconds ago Up 16 seconds + ``` + + +This method limits the scope of the potential networking issues since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-` when listing your containers. + + +## 6. 
Set up your local development environment and make a global intercept
+
+Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context>`, and open the preview URL to see the traffic routed to your machine.
+
+1. Run the Docker container locally, by running this command inside your local terminal. The image is the same as the one you ran in the previous step (step 1) but this time, you will run it with the `--network=container:tp-<context>` flag:
+
+   ```bash
+   docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
+   ```
+
+2. With Telepresence, you can create global intercepts that intercept all traffic going to a service in your cluster and route it to your local environment instead. Start a global intercept by running this command in your terminal:
+
+   ```
+   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
+   Using Deployment web
+      Intercept name         : web-emojivoto
+      State                  : ACTIVE
+      Workload kind          : Deployment
+      Destination            : 127.0.0.1:8080
+      Service Port Identifier: http
+      Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
+      Intercepting           : HTTP requests with headers
+            'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
+      Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
+      Layer 5 Hostname       : edge-stack.ambassador
+   ```
+
+
+   Learn more about intercepts and how to use them.
+
+
+## 7. Make a personal intercept
+
+Personal intercepts allow you to be selective and intercept only some of the traffic to a service while not interfering with the rest of the traffic. This allows you to share a cluster with others on your team without interfering with their work.
+
+1. First, connect to Telepresence Docker mode again:
+
+   ```
+   $ telepresence connect --docker
+   ```
+
+2. Run the Docker container again:
+
+   ```
+   $ docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
+   ```
+
+3. Create a personal intercept by running this command in your terminal:
+
+   ```
+   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
+   Using Deployment web
+      Intercept name         : web-emojivoto
+      State                  : ACTIVE
+      Workload kind          : Deployment
+      Destination            : 127.0.0.1:8080
+      Service Port Identifier: http
+      Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
+      Intercepting           : HTTP requests with headers
+            'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
+      Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
+      Layer 5 Hostname       : edge-stack.ambassador
+   ```
+
+4. Open the preview URL to see the traffic routed to your machine.
+
+5. To stop the intercept, run this command in your terminal:
+
+   ```
+   $ telepresence leave web-emojivoto
+   ```
+## What's Next?
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
+
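+When you're finished experimenting, you can clean up the demo setup by stopping the local container and shutting down the containerized daemon, using the same commands that appeared earlier in this guide:
+
+```bash
+# Stop the local Emojivoto container and quit the Telepresence daemon.
+docker stop ambassador-demo
+telepresence quit -s
+```
+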
\ No newline at end of file diff --git a/docs/telepresence/2.15/docker/compose.md b/docs/telepresence/2.15/docker/compose.md new file mode 100644 index 000000000..45de52d2e --- /dev/null +++ b/docs/telepresence/2.15/docker/compose.md @@ -0,0 +1,117 @@ +--- +title: "Telepresence for Docker Compose" +description: "Learn about how to use Docker Compose with Telepresence" +indexable: true +--- + +# Telepresence for Docker Compose + +The [Intercept Specification](../../reference/intercepts/specs) can contain an intercept handler that in turn references (or embeds) a [docker compose](../../reference/intercepts/specs#compose) specification. The docker compose services will then be used when handling the intercepted traffic. + +The intended user for the docker compose integration runs an existing compose specification locally on a workstation but wants to try it out "in the cluster" by intercepting cluster services. This is challenging, because the cluster's network, the intercepted pod's environment and volume mounts, and which of the services in the compose file to actually redirect traffic to, are not known to docker compose. In fact, the environment and volume mounts are not known until the actual intercept is activated. Telepresence helps with all of this by using an ephemeral and modified copy of the compose file that it creates when the intercept starts. The modification steps are described below. + +## Intended service behavior + +The user starts by declaring how each service in the docker compose spec. are intended to behave. These intentions can be declared directly in the Intercept spec. so that the docker compose spec. is left untouched, or they can be added to the docker compose spec. in the form of `x-telepresence` extensions. This is explained ([in detail](../../reference/intercepts/specs#service)) in the reference. + +The intended behavior can be one of `interceptHandler`, `remote`, or `local`, where `local` is the default that applies to all services that have no intended behavior specified. + +### The interceptHandler behavior + +A compose service intended to have the `interceptHandler` behavior will: + +- handle traffic from the intercepted pod +- remotely mount the volumes of the intercepted pod +- have access to the environment variables of the intercepted pod + +This means that Telepresence will: + +- modify the `network-mode` of the compose service so that it shares the network of the containerized Telepresence daemon. +- modify the `environment` of the service to include the environment variables exposed by the intercepted pod. +- create volumes that correspond to the volumes of the intercepted pod and replace volumes on the compose service that have overlapping targets. +- delete any networks from the service and instead attach those networks to the daemon. +- delete any exposed ports and instead expose them using the `telepresence` network. + +A docker compose service that originally looked like this: + +```yaml +services: + echo: + environment: + - PORT=8088 + - MODE=local + build: . + ports: + - "8088:8088" + volumes: + - local-secrets:/var/run/secrets/kubernetes.io/serviceaccount:ro + networks: + - green +``` + +when acting as an `interceptHandler` for the `echo-server` service, will instead look something like this: + +```yaml +services: + echo: + build: . + environment: + - A_TELEPRESENCE_MOUNTS=/var/run/secrets/kubernetes.io/serviceaccount + # ... other environment variables from the pod left out for brevity. 
+ - PORT=8088 + - MODE=local + network_mode: container:tp-minikube + volumes: + - echo-server-0:/var/run/secrets/kubernetes.io/serviceaccount +``` + +and Telepresence will also have added this to the compose file: + +```yaml +volumes: + echo-server-0: + name: echo-server-0 + driver: datawire/telemount:amd64 + driver_opts: + container: echo-server + dir: /var/run/secrets/kubernetes.io/serviceaccount + host: 192.168.208.2 + port: "34439" +``` + +### The remote behavior + +A compose service intended to have the `remote` behavior will no longer run locally. Telepresence +will instead: + +- Remove the service from the docker compose spec. +- Reassign any `depends_on` for that service to what the service in turn `depends_on`. +- Inform the containerized Telepresence daemon about the `mapping` that was declared in the service intent (if any). + +### The local behavior + +A compose service intended to have the `local` behavior is more or less left untouched. If it has `depends_on` to a +service intended to have `remote` behavior, then those are swapped out for the `depends_on` in that service. + +## Other modifications + +### The telepresence network + +The default network of the docker compose file will be replaced with the `telepresence` network. This network enables +port access on the local host. + +```yaml +networks: + default: + name: telepresence + external: true + green: + name: echo_green +``` + +### Auto-detection of watcher + +Telepresence will check if the docker compose file contains a [watch](https://docs.docker.com/compose/file-watch/) +declaration for hot-deploy and start a `docker compose alpha watch` automatically when that is the case. This means that +an intercept handler that is modified will be deployed instantly even though the code runs in a container and the +changes will be visible using a preview URL. diff --git a/docs/telepresence/2.15/docker/extension.md b/docs/telepresence/2.15/docker/extension.md new file mode 100644 index 000000000..37da65b34 --- /dev/null +++ b/docs/telepresence/2.15/docker/extension.md @@ -0,0 +1,68 @@ +--- +title: "Telepresence for Docker Extension" +description: "Learn about the Telepresence Docker Extension." +indexable: true +--- +# Telepresence for Docker Extension + +The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to containers. + +## Quick Start + +This Quick Start guide will walk you through creating your first intercept in the Telepresence extension in Docker Desktop. + +## Connect to Ambassador Cloud through the Telepresence Docker extension. + + 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**. + + 2. You'll be redirected to Ambassador Cloud for login, you can authenticate with **Docker**, Google, GitHub or GitLab account. +

+ +

+ +## Create an Intercept from a Kubernetes service + + 1. Select the Kubernetes context you would like to connect to. +

+ +

+ + 2. Once Telepresence is connected to your cluster you will see a list of services you can connect to. If you don't see the service you want to intercept, you may need to change namespaces in the dropdown menu. +


 3. Click the **Intercept** button on the service you want to intercept. You will see a popup to help you configure your intercept and its intercept handlers.


 4. Telepresence will start an intercept on the service and forward traffic to your local container on the designated port. You will then be redirected to a management page where you can view your active intercepts.


## Create an Intercept from an Intercept Specification

 1. Click the dropdown on the **Connect** button to activate the option to upload an Intercept Specification (a minimal example spec is sketched after these steps).


+ + 2. Once your specification has been uploaded, the extension will process it and redirect you to the running intercepts page after it has been started. + + 3. The intercept information now shows up in the Docker Telepresence extension. You can now [test your code](#test-your-code). +


For more information on Intercept Specifications, see the [reference documentation](../../reference/intercepts/specs).

## Test your code

Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and restart your intercept.

Click `view` next to your preview URL to open a browser tab and see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.15/faq-215-login.md b/docs/telepresence/2.15/faq-215-login.md new file mode 100644 index 000000000..1a1fcab29 --- /dev/null +++ b/docs/telepresence/2.15/faq-215-login.md @@ -0,0 +1,37 @@
---
description: "Learn about the account requirement in Telepresence 2.15."
---

# Telepresence v2.15 Account Requirement FAQ

There are some big changes in Telepresence 2.15, including the need for an Ambassador Cloud account. We address that change here and have a [more comprehensive FAQ](../faq-215) on the v2.15 release.

** Why do I need an Ambassador Cloud account to use Telepresence now?**

The new pricing model for Telepresence accompanies the new requirement to create an account. Previously, we only required an account for team features.
We’re now focused on making Telepresence an extremely capable tool for individual developers, from your first connect. We have added new capabilities to the product, which are listed in the question below comparing the two versions.
Because of that, we now require an account to use any Telepresence feature. Creating an account on Ambassador Cloud is completely free, takes a few seconds, and can be done through accounts you already have, like GitHub and Google.

** Can I get the old experience of connecting and doing global intercepts without an Ambassador Cloud account?**

Yes! The [open source version of Telepresence](https://telepresence.io) can do Connect and Global Intercepts and is independent of Ambassador Labs’ account infrastructure.
You can install the open-source binaries from the [GitHub repository](https://github.com/telepresenceio/telepresence/releases).

** What do I get from Telepresence that I can't get from the open-source version?**

We distribute up-to-date Telepresence binaries through Homebrew and a Windows installer; open-source binaries must be downloaded from the GitHub repository.
Our Lite plan offers the same capabilities as open-source but with that added convenience, and is completely free. The Lite plan also includes the Docker Extension.
Our Developer plan adds features like Personal Intercepts, Intercept Specs, Docker Compose integration, and 8x5 support to help you use Telepresence effectively.

We believe the Lite plan offers the best experience for hobbyists, the Developer plan for individual developers using Telepresence professionally, and the open-source version for users who prefer a fully open-source solution, or who require one for compliance.

** What if I'm in an air-gapped environment and can't log in?**

Air-gapped environments are supported in the [Enterprise edition](https://www.getambassador.io/editions) of Telepresence. Please [contact our sales team](https://www.getambassador.io/contact-us).
+ +export const metaData = [ + {name: "Telepresence OSS", path: "https://telepresence.io"}, + {name: "Telepresence Releases", path: "https://github.com/telepresenceio/telepresence/releases"}, + {name: "Telepresence Pricing", path: "https://www.getambassador.io/editions"}, + {name: "Contact Us", path: "https://www.getambassador.io/contact-us"}, +] diff --git a/docs/telepresence/2.15/faq-215.md b/docs/telepresence/2.15/faq-215.md new file mode 100644 index 000000000..a5f83d044 --- /dev/null +++ b/docs/telepresence/2.15/faq-215.md @@ -0,0 +1,50 @@ +--- +description: "Learn about the major changes in Telepresence v2.15." +--- + +# FAQ for v2.15 + +There are some big changes in Telepresence v2.15, read on to learn more about them. + +** What are the main differences between v2.15 and v2.14?** + +* In v2.15 we now require an Ambassador Cloud account to use most features of Telepresence. We offer [three plans](https://www.getambassador.io/editions) including a completely free tier. +* We have removed [Team Mode](../../2.14/concepts/modes#team-mode), and the default type of intercept is now a [Global Intercept](../concepts/intercepts), even when you are logged in. + +** Why do I need an Ambassador Cloud account to use Telepresence now?** + +The new pricing model for Telepresence accompanies the new requirement to create an account. Previously we only required an account for team features. +We’re now focused on making Telepresence an extremely capable tool for individual developers, from your first connect. We have added new capabilities to the product, which are listed in the question below comparing the two versions. +Because of that, we now require an account to use any Telepresence feature. Creating an account on Ambassador Cloud is completely free, takes a few seconds and can be done through accounts you already have, like GitHub and Google. + +** What do I get from Telepresence that I can't get from the open-source version?** + +We distribute up-to-date Telepresence binaries through Homebrew and a Windows installer; open-source binaries must be downloaded from the GitHub repository. +Our Lite plan offers the same capabilities as open-source but with that added convenience, and is completely free. The Lite plan also includes the Docker Extension. +Our Developer plan adds features like Personal Intercepts, Intercept Specs, Docker Compose integration, and 8x5 support to help you use Telepresence effectively. + +We believe the Lite plan offers the best experience for hobbyists, the Developer plan for individual developers using Telepresence professionally, and the [open-source version](https://telepresence.io) for users who require for compliance, or prefer, a fully open-source solution. + +** This feels like a push by Ambassador Labs to force people to the commercial version of Telepresence, what's up with that?** + +One of the most common pieces of feedback we've received is how hard it’s been for users to tell what features of Telepresence were proprietary vs open-source, and which version they were using. +We've always made it more convenient to use the commercial version of Telepresence but we want to make it clearer now what the benefits are and when you're using it. + +** What is the future of the open-source version Telepresence?** + +Development on the open-source version remains active as it is the basis for the commercial version. We're regularly improving the client, the Traffic Manager, and other pieces of Telepresence open-source. 
In addition, we recently started the process to move Telepresence within the CNCF from Sandbox status to Incubating status.

** Why are there limits on the Lite and Developer plans?**

The limits on the Developer plan exist to prevent abuse of individual licenses. We believe they are above what an individual developer would use in a given month, but reach out to support, included in your Developer plan, if they are causing an issue for you.
The limits on the Lite plan exist because it is a free plan.

** What if I'm in an air-gapped environment and can't log in?**

Air-gapped environments are supported in the [Enterprise edition](https://www.getambassador.io/editions) of Telepresence. Please [contact our sales team](https://www.getambassador.io/contact-us).

export const metaData = [
  {name: "Telepresence Pricing", path: "https://www.getambassador.io/editions"},
  {name: "Contact Us", path: "https://www.getambassador.io/contact-us"},
]
diff --git a/docs/telepresence/2.15/faqs.md b/docs/telepresence/2.15/faqs.md new file mode 100644 index 000000000..018658c5b --- /dev/null +++ b/docs/telepresence/2.15/faqs.md @@ -0,0 +1,133 @@
---
description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
---

# FAQs

For questions about the new account changes introduced in v2.15, please see our FAQ [specific to that topic](../faq-215).

** Why Telepresence?**

Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult, if not impossible, to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.

Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.

Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.

You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.

By using the preview URL functionality, you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.

** What operating systems does Telepresence work on?**

Telepresence currently works natively on macOS (Intel and Apple Silicon), Linux, and Windows.

** What protocols can be intercepted by Telepresence?**

Both TCP and UDP are supported for global intercepts.

Personal intercepts require HTTP. All HTTP/1.1 and HTTP/2 protocols can be intercepted.
This includes:

- REST
- JSON/XML over HTTP
- gRPC
- GraphQL

If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.

** When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**

Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.

** When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**

Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.

** When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**

Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.

This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.

If you create an intercept for a service in a namespace, you will be able to use the service name directly.

This means if you `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.

You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.

** When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**

You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database.

** What types of ingress does Telepresence support for the preview URL functionality?**

The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.

During first use, Telepresence discovers this information, makes its best guess, and asks you to confirm or update it.

** Why are my intercepts still reporting as active when they've been disconnected?**

In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.

** Why is my intercept associated with an "Unreported" cluster?**

Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.

** Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**

Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn't need to have a publicly accessible IP address.

The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+ +** Why does running Telepresence require sudo access for the local daemon unless it runs in a Docker container?** + +The local daemon needs sudo to create a VIF (Virtual Network Interface) for outbound routing and DNS. Root access is needed to do that unless the daemon runs in a Docker container. + +** What components get installed in the cluster when running Telepresence?** + +A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +When running in `team` mode, a single `ambassador-agent` service is deployed in the `ambassador` namespace. It communicates with the cloud to keep your list of services up to date. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted all pods associated with this workload will be restarted with the Traffic Agent automatically injected. + +** How can I remove all the Telepresence components installed within my cluster?** + +You can run the command `telepresence helm uninstall` to remove everything from the cluster, including the `traffic-manager` and the `ambassador-agent` services, and all the `traffic-agent` containers injected into each pod being intercepted. + +Also run `telepresence quit -s` to stop the local daemon running. + +** What language is Telepresence written in?** + +All components of the Telepresence application and cluster components are written using Go. + +** How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established by using +the `kubectl port-forward` machinery (though without actually spawning +a separate program) to establish a TLS encrypted connection to Telepresence +Traffic Manager in the cluster, and running Telepresence's custom VPN +protocol over that connection. + + + +** What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those. + +** How do I disable desktop notifications from the Telepresence CLI? ** + +Desktop notifications for the Telepresence CLI tool can be activated/deactivated from Ambassador Cloud. +Users can head over to their [Notifications](https://app.getambassador.io/cloud/settings/notifications) page to configure this feature. + +** Is Telepresence open source?** + +A large part of it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence). + +** How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.15/howtos/cluster-in-vm.md b/docs/telepresence/2.15/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.15/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine." 
---

# Network considerations for locally hosted clusters

## The problem
Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.

### Example:
A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.

Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large amount of very rapid connection requests), it will likely also affect other connections in a bad way.

## Solution

### Create a bridge network
A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.

To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.

### Vagrant + VirtualBox + k3s example
Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.

#### Vagrantfile
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# bridge is the name of the host's default network device
$bridge = 'wlp5s0'

# default_route should be the IP of the host's default route.
$default_route = '192.168.1.1'

# nameserver must be the IP of an external DNS, such as 8.8.8.8
$nameserver = '8.8.8.8'

# server_name should also be added to the host's /etc/hosts file and point to the server_ip
# for easy access when pushing docker images
server_name = 'multi'

# static IPs for the server and agents.
Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. + network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: 
"KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.15/howtos/intercepts.md b/docs/telepresence/2.15/howtos/intercepts.md new file mode 100644 index 000000000..f853b134d --- /dev/null +++ b/docs/telepresence/2.15/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created and intercept, you can code and debug your associated service locally. + +For a detailed walk-though on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept --port [:] --env-file `. + * For `--port`: specify the port the local instance of your service is running on. 
     If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.

   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.

   ```console
   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
   Using Deployment example-service
   intercepted
       Intercept name: example-service
       State         : ACTIVE
       Workload kind : Deployment
       Destination   : 127.0.0.1:8080
       Intercepting  : all TCP connections
   ```

5. Start your local environment using the environment variables retrieved in the previous step.

   The following are some examples of how to pass the environment variables to your local process:
   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).

6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.

You can now:
- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
- Query services only exposed in your cluster's network.
- Set breakpoints in your IDE to investigate bugs.

**Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.

diff --git a/docs/telepresence/2.15/howtos/outbound.md b/docs/telepresence/2.15/howtos/outbound.md new file mode 100644 index 000000000..9afcb75df --- /dev/null +++ b/docs/telepresence/2.15/howtos/outbound.md @@ -0,0 +1,89 @@
---
description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
---

import Alert from '@material-ui/lab/Alert';

# Proxy outbound traffic to my cluster

While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.

This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.

## Proxying outbound traffic

Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+ +When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect` and enter your password to run the daemon. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.3.7 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home//.cache/telepresence/logs '' ''" + [sudo] password: + Connecting to traffic manager... + Connected to context default (https://) + ``` + +2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic. + + ``` + $ telepresence status + Root Daemon: Running + Version : v2.3.7 (api 3) + Primary DNS : "" + Fallback DNS: "" + User Daemon: Running + Version : v2.3.7 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https:// + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 0 total + ``` + +3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster. + + ``` + $ curl web-app.emojivoto:80 + + + + + Emoji Vote + ... + ``` + +If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop. + + ``` + $ telepresence quit + Telepresence Daemon quitting...done + ``` + +When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide. + +## Controlling outbound connectivity + +By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces ` flag to control which namespaces are accessible. + +When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept. + +### Using local-only intercepts + +When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed. This is because services that aren't exposed through ingress controllers require connectivity to those services. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin; the connection must be routed to a `traffic-agent`of an intercepted pod. For local-only intercepts, the outbound connections originates from the `traffic-manager`. 
To control outbound connectivity to specific namespaces, add the `--local-only` flag:

   ```
   $ telepresence intercept <intercept-name> --namespace <namespace> --local-only
   ```

The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
You can deactivate the intercept with `telepresence leave <intercept-name>`. This removes unqualified name access.

### Proxy outbound connectivity for laptops

To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxysubnets) for more details.

diff --git a/docs/telepresence/2.15/howtos/package.md b/docs/telepresence/2.15/howtos/package.md new file mode 100644 index 000000000..2baa7a66c --- /dev/null +++ b/docs/telepresence/2.15/howtos/package.md @@ -0,0 +1,178 @@
---
title: "How to package and share my intercepts"
description: "Use telepresence intercept specs to enable your teammates faster"
---
# Introduction

While telepresence takes care of the interception part of your setup, you usually still need to script some boilerplate code to run the local part (the handler) of your code.

Classic solutions rely on a Makefile or bash scripts, but this becomes cumbersome to maintain.

Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): They allow you to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic, and the actual intercept. Telepresence can then run the specification.

# Getting started

You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification.

Once you have a Kubernetes cluster, you can apply this configuration to start an echo-easy deployment that we can then use for our Intercept Specification:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "echo-easy"
spec:
  type: ClusterIP
  selector:
    service: echo-easy
  ports:
    - name: proxied
      port: 80
      targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "echo-easy"
  labels:
    service: echo-easy
spec:
  replicas: 1
  selector:
    matchLabels:
      service: echo-easy
  template:
    metadata:
      labels:
        service: echo-easy
    spec:
      containers:
        - name: echo-easy
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
              name: http
          resources:
            limits:
              cpu: 50m
              memory: 128Mi
```

You can create the local yaml file by using + +```console +$ cat > echo-server.yaml < my-intercept.yaml < + Telepresence global intercept architecture +
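In CLI terms, a global intercept is what a plain intercept command creates; for example, assuming the sample `example-service` workload used elsewhere in these docs:

```console
$ telepresence intercept example-service --port 8080
```

Every request reaching `example-service` in the cluster is then routed to port 8080 on your workstation.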

## Personal intercepts

For these cases, Telepresence has a feature in the Developer and Enterprise plans called the Personal Intercept.
When using a Personal Intercept, Telepresence can selectively route requests to a developer's computer based on an HTTP header value.
By default, Telepresence looks for the header `x-telepresence-id`, and a logged-in Telepresence user is assigned a unique value for that header on any intercept they create. You can also specify your own custom header. You get your test requests, your coworker gets their test requests, and the rest of the traffic to the application goes to the original pod in the cluster.

+ Telepresence personal intercept architecture +
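As a sketch of what that looks like on the command line (the header name and value here are hypothetical; the `--http-header` flag is covered in the solutions below):

```console
$ telepresence intercept example-service --port 8080 --http-header x-dev-user=alice
```

Only requests carrying `x-dev-user: alice` would reach your workstation; all other traffic continues to the pod in the cluster.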

## Requirements

Because Personal Intercepts rely on an HTTP header value, that header must be present in any request you want to intercept. This is very easy in the first service behind an API gateway, as the header can be added using Telepresence's [Preview URL feature](../preview-urls), browser plugins, or tools like Postman, and the entire request, with headers intact, will be forwarded to the first upstream service.

+ Diagram of request with intercept header being sent through API gateway to Payments Service +

+ +However, the original request terminates at the first service that receives it. For the intercept header +to reach any services further upstream, the first service must _propagate_ it, by retrieving the header value +from the request and storing it somewhere or passing it down the function call chain to be retrieved +by any functions that make a network call to the upstream service. +

+ Diagram of a request losing the header when sent to the next upstream service unless propagated +

+ +## Solutions + +If the application you develop is directly the first service to receive incoming requests, you can use [Preview URLs](../preview-urls) +to generate a custom URL that automatically passes an `x-telepresence-id` header that your intercept is configured to look for. + +If your applications already propagate a header that can be used to differentiate requests between developers, you can pass the +`--http-header` [flag](../../concepts/intercepts?intercept=personal#creating-and-using-personal-intercepts) to `telepresence intercept`. + +If your applications do _not_ already propagate a header that can be used to differentiate requests, we have a +[comprehensive guide](https://github.com/ambassadorlabs/telepresence-header-propagation) +on doing so quickly and easily using OpenTelemetry auto-instrumentation. diff --git a/docs/telepresence/2.15/howtos/preview-urls.md b/docs/telepresence/2.15/howtos/preview-urls.md new file mode 100644 index 000000000..8923492c4 --- /dev/null +++ b/docs/telepresence/2.15/howtos/preview-urls.md @@ -0,0 +1,100 @@ +--- +title: "Share dev environments with preview URLs | Ambassador" +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Share development environments with preview URLs + +Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual. + +Preview URLs are protected behind authentication through Ambassador Cloud, and, access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators. + +## Creating a preview URL + +1. Enter `telepresence login` to launch Ambassador Cloud in your browser. + +If you are in an environment you can't launch Telepresence in your local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/). + +2. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed. +Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service. + +3. Start the intercept with `telepresence intercept --preview-url --port --env-file --mechanism http`and adjust the flags as follows: + Start the intercept: + * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod. + +4. Answer the question prompts. + The example below shows a preview URL for `example-service` which listens on port 8080. 
The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:

   ```console
   $ telepresence intercept example-service --preview-url --mechanism http --ingress-host ambassador.ambassador --ingress-port 443 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env

   Using deployment example-service
   intercepted
       Intercept name         : example-service
       State                  : ACTIVE
       Destination            : 127.0.0.1:8080
       Service Port Identifier: http
       Intercepting           : HTTP requests that match all of:
         header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service")
       Preview URL            : https://<random-subdomain>.preview.edgestack.me
       Layer 5 Hostname       : dev-environment.edgestack.me
   ```

5. Start your local environment using the environment variables retrieved in the previous step.

   Here are some examples of how to pass the environment variables to your local process:
   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env).
   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).

6. Go to the Preview URL generated from the intercept.
   Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.

   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.

7. Make a request on the URL you would usually query for that environment. Don't route a request to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.

8. Share with a teammate.

   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.

## Change access restrictions

To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.

1.
Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Select the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Make Publicly Accessible**. + +Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. + +## Remove a preview URL from an Intercept + +To delete a preview URL and remove all access to the intercepted service, + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/) +2. Click on the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Remove Preview**. + +Alternatively, you can remove a preview URL with the following command: +`telepresence preview remove ` diff --git a/docs/telepresence/2.15/howtos/request.md b/docs/telepresence/2.15/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.15/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/) + 2. Navigate to the desired service Intercepts page + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started to query the desired intercepted service and manage header propagation. 
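As a rough illustration, a CURL query assembled from that page might look like the sketch below; the hostname and intercept id are placeholders, and the exact header name and value come from the pop-up itself:

```console
$ curl -H 'x-telepresence-intercept-id: <intercept-id>:example-service' https://dev-environment.edgestack.me/
```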
diff --git a/docs/telepresence/2.15/images/container-inner-dev-loop.png b/docs/telepresence/2.15/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.15/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.15/images/daemon-in-container.png b/docs/telepresence/2.15/images/daemon-in-container.png new file mode 100644 index 000000000..ed02e8386 Binary files /dev/null and b/docs/telepresence/2.15/images/daemon-in-container.png differ diff --git a/docs/telepresence/2.15/images/docker-extension-intercept.png b/docs/telepresence/2.15/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence/2.15/images/docker-extension-intercept.png differ diff --git a/docs/telepresence/2.15/images/docker-header-containers.png b/docs/telepresence/2.15/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.15/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_button_drop_down.png b/docs/telepresence/2.15/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..b65c53091 Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_connect_to_cluster.png b/docs/telepresence/2.15/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..4ec182581 Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_login.png b/docs/telepresence/2.15/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_login.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_running_intercepts_page.png b/docs/telepresence/2.15/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..68a2f22fc Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_start_intercept_page.png b/docs/telepresence/2.15/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..df2cffdd3 Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_start_intercept_popup.png b/docs/telepresence/2.15/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..07af9e7bb Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence/2.15/images/docker_extension_upload_spec_button.png b/docs/telepresence/2.15/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files /dev/null and b/docs/telepresence/2.15/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence/2.15/images/edgey-corp.png b/docs/telepresence/2.15/images/edgey-corp.png new file mode 100644 index 000000000..d5f724c55 Binary files /dev/null and b/docs/telepresence/2.15/images/edgey-corp.png differ diff --git a/docs/telepresence/2.15/images/github-login.png 
b/docs/telepresence/2.15/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.15/images/github-login.png differ diff --git a/docs/telepresence/2.15/images/header_arrives.png b/docs/telepresence/2.15/images/header_arrives.png new file mode 100644 index 000000000..6abc71266 Binary files /dev/null and b/docs/telepresence/2.15/images/header_arrives.png differ diff --git a/docs/telepresence/2.15/images/header_requires_propagating.png b/docs/telepresence/2.15/images/header_requires_propagating.png new file mode 100644 index 000000000..219980292 Binary files /dev/null and b/docs/telepresence/2.15/images/header_requires_propagating.png differ diff --git a/docs/telepresence/2.15/images/logo.png b/docs/telepresence/2.15/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.15/images/logo.png differ diff --git a/docs/telepresence/2.15/images/mode-defaults.png b/docs/telepresence/2.15/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/2.15/images/mode-defaults.png differ diff --git a/docs/telepresence/2.15/images/pod-daemon-overview.png b/docs/telepresence/2.15/images/pod-daemon-overview.png new file mode 100644 index 000000000..effb05314 Binary files /dev/null and b/docs/telepresence/2.15/images/pod-daemon-overview.png differ diff --git a/docs/telepresence/2.15/images/split-tunnel.png b/docs/telepresence/2.15/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.15/images/split-tunnel.png differ diff --git a/docs/telepresence/2.15/images/tp_global_intercept.png b/docs/telepresence/2.15/images/tp_global_intercept.png new file mode 100644 index 000000000..e6c8bfbe7 Binary files /dev/null and b/docs/telepresence/2.15/images/tp_global_intercept.png differ diff --git a/docs/telepresence/2.15/images/tp_personal_intercept.png b/docs/telepresence/2.15/images/tp_personal_intercept.png new file mode 100644 index 000000000..2cfeb005a Binary files /dev/null and b/docs/telepresence/2.15/images/tp_personal_intercept.png differ diff --git a/docs/telepresence/2.15/images/tracing.png b/docs/telepresence/2.15/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.15/images/tracing.png differ diff --git a/docs/telepresence/2.15/images/trad-inner-dev-loop.png b/docs/telepresence/2.15/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.15/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.15/images/tunnelblick.png b/docs/telepresence/2.15/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.15/images/tunnelblick.png differ diff --git a/docs/telepresence/2.15/images/vpn-dns.png b/docs/telepresence/2.15/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.15/images/vpn-dns.png differ diff --git a/docs/telepresence/2.15/images/vpn-k8s-config.jpg b/docs/telepresence/2.15/images/vpn-k8s-config.jpg new file mode 100644 index 000000000..66116e41d Binary files /dev/null and b/docs/telepresence/2.15/images/vpn-k8s-config.jpg differ diff --git a/docs/telepresence/2.15/images/vpn-routing.jpg b/docs/telepresence/2.15/images/vpn-routing.jpg new file mode 100644 index 000000000..18410dd48 Binary 
files /dev/null and b/docs/telepresence/2.15/images/vpn-routing.jpg differ diff --git a/docs/telepresence/2.15/images/vpn-with-tele.jpg b/docs/telepresence/2.15/images/vpn-with-tele.jpg new file mode 100644 index 000000000..843b253e9 Binary files /dev/null and b/docs/telepresence/2.15/images/vpn-with-tele.jpg differ diff --git a/docs/telepresence/2.15/install/cloud.md b/docs/telepresence/2.15/install/cloud.md new file mode 100644 index 000000000..bf8c80669 --- /dev/null +++ b/docs/telepresence/2.15/install/cloud.md @@ -0,0 +1,63 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. +For example, for a cluster named `tele-webhook-gke` in region `us-central1-c1`: + +```bash +$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock + masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28 + +$ gcloud compute firewall-rules list \ + --filter 'name~^gke-tele-webhook-gke' \ + --format 'table( + name, + network, + direction, + sourceRanges.list():label=SRC_RANGES, + allowed[].map().firewall_rule().list():label=ALLOW, + targetTags.list():label=TARGET_TAGS + )' + +NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS +gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node +# Take note fo the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node + +$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \ + --action ALLOW \ + --direction INGRESS \ + --source-ranges 172.16.0.0/28 \ + --rules tcp:8443 \ + --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False +``` + +### GKE Authentication Plugin + +Starting with Kubernetes version 1.26 GKE will require the use of the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke). +You will need to install this plugin to use Telepresence with Docker while using GKE. + +If you are using the [Telepresence Docker Extension](../../docker/extension) you will need to ensure that your `command` is set to an absolute path in your kubeconfig file. If you've installed not using homebrew you may see in your file `command: gke-gcloud-auth-plugin`. This would need to be replaced with the path to the binary. 
+You can verify the current setting by opening your kubeconfig file: under the `users` entry for your GKE cluster there is a `command` field. If you installed with Homebrew, it will look like this:
+`command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud`.
+
+## EKS
+
+### EKS Authentication Plugin
+
+If you are using an AWS CLI version earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html).
+You will need this plugin to use Telepresence with Docker while using EKS.
+
+If you are using the [Telepresence Docker Extension](../../docker/extension), you will need to ensure that the `command` in your kubeconfig file is set to an absolute path instead of a relative one.
+You can verify this by opening your kubeconfig file: under the `users` entry for your EKS cluster there is a `command` field. If you installed with Homebrew, it will look like this:
+`command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator`.
\ No newline at end of file
diff --git a/docs/telepresence/2.15/install/helm.md b/docs/telepresence/2.15/install/helm.md
new file mode 100644
index 000000000..8aefb1d59
--- /dev/null
+++ b/docs/telepresence/2.15/install/helm.md
@@ -0,0 +1,181 @@
+# Install the Traffic Manager with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence).
+
+## Before you begin
+
+Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/) and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands; OpenShift users can substitute [oc commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead.
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+   ```shell
+   kubectl create namespace ambassador
+   ```
+
+2. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. +It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. + subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence/2.15/install/index.md b/docs/telepresence/2.15/install/index.md new file mode 100644 index 000000000..d7a5642ed --- /dev/null +++ b/docs/telepresence/2.15/install/index.md @@ -0,0 +1,157 @@ +import Platform from '@src/components/Platform'; + +# Install + +Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. 
Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
+
+
diff --git a/docs/telepresence/2.15/install/manager.md b/docs/telepresence/2.15/install/manager.md
new file mode 100644
index 000000000..c192f45c1
--- /dev/null
+++ b/docs/telepresence/2.15/install/manager.md
@@ -0,0 +1,85 @@
+# Install/Uninstall the Traffic Manager
+
+Telepresence uses a Traffic Manager to send/receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The Telepresence CLI can install the Traffic Manager for you. The basic install installs the same version as the client being used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence).
+
+1. Create a values.yaml file with your config values.
+
+2. Run the install command with the values flag set to the path to your values file:
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag:
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+## Uninstall
+
+The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent; both already include this agent within their deployments. When installing the `traffic-manager`, you can add the flag `--set ambassador-agent.enabled=false` to leave out the ambassador-agent.
+
+If your namespace runs with tight security parameters, you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=<token>`.
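+For example (a sketch; `<token>` is a placeholder for your own API key):
+
+```shell
+telepresence helm install --set ambassador-agent.cloudConnectToken=<token>
+```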
+The [API Key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override an API key given via Helm.
+
+### Creating a secret manually
+The Ambassador Agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself; an API key provided this way will always be used.
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <name>-agent-cloud-token
+  namespace: <namespace>
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <token>
+EOF
+```
\ No newline at end of file
diff --git a/docs/telepresence/2.15/install/migrate-from-legacy.md b/docs/telepresence/2.15/install/migrate-from-legacy.md
new file mode 100644
index 000000000..94307dfa1
--- /dev/null
+++ b/docs/telepresence/2.15/install/migrate-from-legacy.md
@@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected into the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in Telepresence) with a local Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context <context>
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will tell you what the legacy Telepresence command mapped to and automatically
+run it. This lets you get started with Telepresence today, using the commands you are already
+used to, while it helps you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                       |
+|------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                        |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                    | connect -- bash                            |
+| --run $cmd                                     | connect -- $cmd                            |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                          | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
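+For example, to remove just the agent left behind on the `myserver` deployment from the example above (a sketch using the flags shown in the help text):
+
+```
+$ telepresence uninstall --agent myserver
+```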
\ No newline at end of file
diff --git a/docs/telepresence/2.15/install/upgrade.md b/docs/telepresence/2.15/install/upgrade.md
new file mode 100644
index 000000000..34385935c
--- /dev/null
+++ b/docs/telepresence/2.15/install/upgrade.md
@@ -0,0 +1,83 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
+
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+The [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) can upgrade Telepresence, or, if you installed it with PowerShell:
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster.
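+Once the CLI is upgraded, a typical follow-up (a sketch, assuming a default Traffic Manager install) looks like:
+
+```shell
+# Upgrade the in-cluster Traffic Manager to match the new CLI:
+telepresence helm install --upgrade
+
+# Confirm that the client and Traffic Manager versions now agree:
+telepresence version
+```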
diff --git a/docs/telepresence/2.15/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.15/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.15/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+    <div className="telepresence-quickstart-landing">
+      <h1>Telepresence</h1>
+      <div className="main-title-container">
+        <p>
+          Set up your ideal development environment for Kubernetes in seconds.
+          Accelerate your inner development loop with hot reload using your
+          existing IDE, and workflow.
+        </p>
+      </div>
+      <div className="demo-cluster-container">
+        <div className="box-container border">
+          <h2 className="subtitle">Set Up Telepresence with Ambassador Cloud</h2>
+          <p>
+            Seamlessly integrate Telepresence into your existing Kubernetes
+            environment by following our 3-step setup guide.
+          </p>
+          <Link to={getStartedUrl} className="get-started blue">
+            Get Started
+            <RightArrow />
+          </Link>
+          <p>
+            <strong>Do it Yourself:</strong>{' '}
+            <Link to="../install" className="get-started inline">
+              install Telepresence
+            </Link>{' '}
+            and manually connect to your Kubernetes workloads.
+          </p>
+        </div>
+        <div className="box-container border">
+          <h2 className="subtitle">What Can Telepresence Do for You?</h2>
+          <p>Telepresence gives Kubernetes application developers:</p>
+          <ul>
+            <li>Instant feedback loops</li>
+            <li>Remote development environments</li>
+            <li>Access to your favorite local tools</li>
+            <li>Easy collaborative development with teammates</li>
+          </ul>
+          <Link
+            to="https://www.getambassador.io/products/telepresence"
+            className="learn-more blue"
+          >
+            LEARN MORE{' '}
+            <RightArrow />
+          </Link>
+        </div>
+      </div>
+    </div>
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.15/quick-start/index.md b/docs/telepresence/2.15/quick-start/index.md new file mode 100644 index 000000000..02dc29c47 --- /dev/null +++ b/docs/telepresence/2.15/quick-start/index.md @@ -0,0 +1,422 @@ +--- +title: Quick Start | Telepresence +description: "Telepresence Quick Start by Ambassador Labs: Dive into Kubernetes development with ease. Get set up swiftly and unlock efficient microservice debugging" +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + +# Telepresence Quick Start + +
+ +

Contents

+
+ * [Overview](#overview)
+ * [Prerequisites](#prerequisites)
+ * [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+ * [2. Set up a local cluster with sample app](#2-set-up-a-local-cluster-with-sample-app)
+ * [3. Use Telepresence to connect your laptop to the cluster](#3-use-telepresence-to-connect-your-laptop-to-the-cluster)
+ * [4. Run the sample application locally](#4-run-the-sample-application-locally)
+ * [5. Route traffic from the cluster to your local application](#5-route-traffic-from-the-cluster-to-your-local-application)
+ * [6. Make a code change (and see it reflected live)](#6-make-a-code-change)
+ * [What's next?](#whats-next)
+
+
+## Overview
+This quickstart provides the fastest way to get an understanding of how [Telepresence](https://www.getambassador.io/products/telepresence)
+can speed up your development in Kubernetes. It should take you about 5-10 minutes. You'll create a local cluster using Kind with a sample app installed, and use Telepresence to
+* access services in the cluster directly from your laptop
+* make changes locally and see those changes immediately in the cluster
+
+Then we'll point you to some next steps you can take, including trying out collaboration features and trying it in your own infrastructure.
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+[macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+[Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster.
+
+You will also need [Docker installed](https://docs.docker.com/get-docker/).
+
+The sample application instructions default to Python, which is pre-installed on macOS and Linux. If you are on Windows and don't already have
+Python installed, you can install it from the [official Python site](https://www.python.org/downloads/).
+
+There are also instructions for NodeJS, Java and Go if you already have those installed and prefer to work in them.
+
+## 1. Install the Telepresence CLI
+
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+## 2. Set up a local cluster with sample app
+
+We provide [a repo](https://github.com/ambassadorlabs/telepresence-local-quickstart) that sets up a local cluster for you
+with the in-cluster Telepresence components and a sample app already installed. It does not need `sudo` or `Run as Administrator` privileges.
+
+
+
+
+```shell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd telepresence-local-quickstart
+
+# Run the macOS setup script
+./macos-setup.sh
+```
+
+
+
+
+```shell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd telepresence-local-quickstart
+
+# Run the Linux setup script
+./linux-setup.sh
+```
+
+
+
+
+```powershell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd .\telepresence-local-quickstart\
+
+# Run the Windows setup script
+.\windows-setup.ps1
+```
+
+
+
+
+## 3. Use Telepresence to connect your laptop to the cluster
+
+Telepresence connects your local workstation to a remote Kubernetes cluster, allowing you to talk to cluster resources as if your laptop
+were in the cluster.
+
+
+  The first time you run a Telepresence command you will be prompted to create an Ambassador Labs account. Creating an account is completely free,
+  takes a few seconds and can be done through accounts you already have, like GitHub and Google.
+
+
+1. Connect to the cluster:
+   `telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+
+  macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+
+2. Now we'll test that Telepresence is working properly by accessing a service running in the cluster. Telepresence has merged your local IP routing
+tables and DNS resolution with the cluster's, so you can talk to the cluster in its DNS language and to services on their cluster IP address.
+
+Open up a browser and go to [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). As you can see, you've loaded up a dashboard showing the architecture of the sample app.
+
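+If you'd prefer to verify from the terminal first, a quick check might look like this (a sketch; it assumes `curl` is installed and the sample app is running):
+
+```shell
+# The service name resolves through Telepresence's merged DNS:
+curl -s http://verylargejavaservice.default:8080 | head
+```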

+ Edgey Corp Architecture +

+ +You are connected to the VeryLargeJavaService, which talks to the DataProcessingService as an upstream dependency. The DataProcessingService in turn +has a dependency on VeryLargeDatastore. You were able to connect to it using the cluster DNS name thanks to Telepresence. + +## 4. Run the sample application locally + +We'll take on the role of a DataProcessingService developer. We want to be able to connect to that big test database that everyone has that dates back to the +founding of the company and has all the critical test scenarios and is too big to run locally. In the other direction, VeryLargeJavaService is developed by another team +and we need to make sure with each change that we are being good upstream citizens and maintaining valid contracts with that service. + + +Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-python/DataProcessingService/` +2. Install the dependencies and start the Python server: `pip install flask requests && python app.py` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ pip install flask requests && python app.py + +Collecting flask +... +Welcome to the DataServiceProcessingPythonService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-nodejs/DataProcessingService/` +2. Install the dependencies and start the NodeJS server: `npm install && npm start` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ npm install && npm start + +added 170 packages, and audited 171 packages in 597ms +... +Welcome to the DataServiceProcessingNodeService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-java/DataProcessingService/` +2. Install the dependencies and start the Java server: `mvn spring-boot:run` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ mvn spring-boot:run + +[INFO] Scanning for projects... +... +INFO 49318 --- [ restartedMain] g.d.DataProcessingServiceJavaApplication : Starting DataProcessingServiceJavaApplication using Java +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-go/DataProcessingService/` +2. Install the dependencies and start the Go server: `go get github.com/pilu/fresh && go install github.com/pilu/fresh && fresh` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ go get github.com/pilu/fresh && go install github.com/pilu/fresh && fresh + +12:24:13 runner | InitFolders +... +12:24:14 app | Welcome to the DataProcessingGoService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + + + +Victory, your local server is running a-ok! + + +## 5. 
Route traffic from the cluster to your local application
+Historically, when developing with microservices on Kubernetes, your choices have been to run an entire set of services in a cluster or namespace just for you,
+and to spend 15 minutes on every one-line change: pushing the code, waiting for it to build, waiting for it to deploy, and so on. Or you could run all 50 services
+in your environment on your laptop, and be deafened by the fans.
+
+With Telepresence, you can *intercept* traffic from a service in the cluster and route it to your laptop, effectively replacing the cluster version
+with your local development environment. This gives you back the fast feedback loop of local development, and access to your preferred tools like your favorite IDE or debugger.
+And you still have access to all the cluster resources via `telepresence connect`. Now you'll see this in action.
+
+Look back at the browser tab showing the app dashboard. You see the EdgyCorp WebApp with a green title and green pod in the diagram.
+The local version of the code has the UI color set to blue instead of green.
+
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   Intercept name    : dataprocessingservice
+   State             : ACTIVE
+   Workload kind     : Deployment
+   Destination       : 127.0.0.1:3000
+   Intercepting      : all TCP requests
+   ```
+
+2. Go to the frontend service again in your browser and refresh. You will now see the blue elements in the app.
+
+
+The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+
+
+To update the color:
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto-reload.
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser and refresh. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+
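+When you're finished experimenting, you can stop the intercept and disconnect (a sketch; `dataprocessingservice` is the intercept created above):
+
+```shell
+# Stop routing the service's traffic to your laptop:
+telepresence leave dataprocessingservice
+
+# Disconnect your workstation from the cluster:
+telepresence quit
+```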
+
+## What's Next?
+
+<QSCards26/>
+
+export const metaData = [
+  {name: "Ambassador Labs", path: "https://getambassador.io"},
+  {name: "Telepresence", path: "https://www.getambassador.io/products/telepresence"},
+  {name: "Install Tools | Kubernetes", path: "https://kubernetes.io/docs/tasks/tools/install-kubectl/"},
+  {name: "Get Docker", path: "https://docs.docker.com/get-docker/"},
+  {name: "Download Python | Python.org", path: "https://www.python.org/downloads/"},
+  {name: "Telepresence Local Quickstart", path: "https://github.com/ambassadorlabs/telepresence-local-quickstart"}
+]
diff --git a/docs/telepresence/2.15/quick-start/qs-cards.js b/docs/telepresence/2.15/quick-start/qs-cards.js
new file mode 100644
index 000000000..084af19b3
--- /dev/null
+++ b/docs/telepresence/2.15/quick-start/qs-cards.js
@@ -0,0 +1,68 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItem: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6">Collaborating</Typography>
+            <Typography variant="body2">
+              Use personal intercepts to get specific requests when working with colleagues.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6">Outbound Sessions</Typography>
+            <Typography variant="body2">
+              Control what your laptop can reach in the cluster while connected.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper variant="outlined" className={classes.paper}>
+            <Typography variant="h6">Telepresence for Docker Compose</Typography>
+            <Typography variant="body2">
+              Develop in a hybrid local/cluster environment using Telepresence for Docker Compose.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+ ); +} diff --git a/docs/telepresence/2.15/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.15/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.15/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.15/redirects.yml b/docs/telepresence/2.15/redirects.yml new file mode 100644 index 000000000..c73de44b4 --- /dev/null +++ b/docs/telepresence/2.15/redirects.yml @@ -0,0 +1,6 @@ +- {from: "", to: "quick-start"} +- {from: /docs/telepresence/v2.15/quick-start/qs-go, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-java, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-node, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-python, to: 
/docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-python-fastapi, to: /docs/telepresence/v2.15/quickstart/} diff --git a/docs/telepresence/2.15/reference/architecture.md b/docs/telepresence/2.15/reference/architecture.md new file mode 100644 index 000000000..6d45f010d --- /dev/null +++ b/docs/telepresence/2.15/reference/architecture.md @@ -0,0 +1,101 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication to the cluster's
+network; they communicate with the cluster and handle intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing
+open source User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment and, if it is missing, attempts to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should run on, and to provide configuration similar to what would be provided when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts with development images built
+by CI, so that changes from a pull request can be quickly visualized in a live cluster before they are landed. When using Deployment
+Previews, the Preview URL link is posted to the associated GitHub pull request.
+
+See the [Deployment Previews quick-start](../../ci/pod-daemon) for information on how to get started with Deployment Previews
+or for a reference on how Pod-Daemon can be manually deployed to the cluster.
+
+# Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions
+to add and modify deployments in the cluster.
diff --git a/docs/telepresence/2.15/reference/client.md b/docs/telepresence/2.15/reference/client.md
new file mode 100644
index 000000000..84137db98
--- /dev/null
+++ b/docs/telepresence/2.15/reference/client.md
@@ -0,0 +1,31 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
| Command | Description |
|----------------------|-------------|
| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs out of Ambassador Cloud |
| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
| `status` | Shows the current connectivity status |
| `quit` | Tells the Telepresence daemons to quit |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service; run it followed by the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept <service> --port <port>` (use `<port>/UDP` to force UDP). This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
| `loglevel` | Temporarily changes the log level of the traffic-manager, traffic-agents, and user and root daemons |
| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing a `traffic-agent` in the logs. |
| `version` | Shows the version of the Telepresence CLI and the Traffic Manager (if connected) |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager.
|
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster; used for [configuring the license](../cluster-config#add-license-to-cluster) in an air-gapped environment | diff --git a/docs/telepresence/2.15/reference/client/login.md b/docs/telepresence/2.15/reference/client/login.md new file mode 100644 index 000000000..c5a5df7b0 --- /dev/null +++ b/docs/telepresence/2.15/reference/client/login.md @@ -0,0 +1,60 @@
# Telepresence Login

```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```

## Description

Use `telepresence login` to explicitly authenticate with [Ambassador
Cloud](https://www.getambassador.io/docs/cloud). Other commands will
automatically invoke the interactive `telepresence login` procedure as
necessary, so it is rarely necessary to run `telepresence login`
explicitly; you should only need to do so when you require a
non-interactive login.

The normal interactive login procedure involves launching a web
browser, a user interacting with that web browser, and finally having
the web browser make callbacks to the local Telepresence process. If
it is not possible to do this (perhaps you are using a headless remote
box via SSH, or are using Telepresence in CI), then you may instead
have Ambassador Cloud issue an API key that you pass to `telepresence
login` with the `--apikey` flag.

## Telepresence

When you run `telepresence login`, the CLI installs an enhanced
Telepresence binary. This enhanced free client replaces the [User
Daemon](../../architecture) and communicates with Ambassador Cloud to
provide freemium features, including the ability to create intercepts from
Ambassador Cloud.

## Acquiring an API key

1. Log in to Ambassador Cloud at https://app.getambassador.io/.

2. Click on your profile icon in the upper-left: ![Screenshot with the
   mouse pointer over the upper-left profile icon](./login/apikey-2.png)

3. Click on the "API Keys" menu button: ![Screenshot with the mouse
   pointer over the "API Keys" menu button](./login/apikey-3.png)

4. Click on the "generate new key" button in the upper-right:
   ![Screenshot with the mouse pointer over the "generate new key"
   button](./login/apikey-4.png)

5. Enter a description for the key (perhaps the name of your laptop,
   or the name of your CI system), and click "generate api key" to create it.

You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.

Telepresence will use that "master" API key to create narrower keys
for different components of Telepresence. You will see these appear
in the Ambassador Cloud web interface.
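For example, in a CI pipeline you might store the key in a CI secret and pass it through an environment variable for a non-interactive login. This is a minimal sketch; the variable name is illustrative:

```console
$ telepresence login --apikey="$AMBASSADOR_CLOUD_API_KEY"
```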
\ No newline at end of file diff --git a/docs/telepresence/2.15/reference/client/login/apikey-2.png b/docs/telepresence/2.15/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.15/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.15/reference/client/login/apikey-3.png b/docs/telepresence/2.15/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.15/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.15/reference/client/login/apikey-4.png b/docs/telepresence/2.15/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.15/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.15/reference/cluster-config.md b/docs/telepresence/2.15/reference/cluster-config.md new file mode 100644 index 000000000..b538c1ef7 --- /dev/null +++ b/docs/telepresence/2.15/reference/cluster-config.md @@ -0,0 +1,386 @@
import Alert from '@material-ui/lab/Alert';
import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';

# Cluster-side configuration

For the most part, Telepresence doesn't require any special
configuration in the cluster and can be used right away in any
cluster (as long as the user has adequate [RBAC permissions](../rbac)
and the cluster's server version is `1.19.0` or higher).

## Helm Chart configuration
Some cluster-specific configuration can be provided when installing
or upgrading the Telepresence cluster installation using Helm. Once
installed, the Telepresence client will configure itself from values
that it receives when connecting to the Traffic Manager.

See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
for a full list of available configuration settings.

### Values
To add configuration, create a YAML file with the configuration values and then apply it by running `telepresence helm install [--upgrade] --values <values-file>`.

## Client Configuration

It is possible for the Traffic Manager to automatically push config to all
connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).

### Agent Configuration

The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.

#### Application Protocol Selection
The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
Valid values are:

| Value        | Resulting action |
|--------------|------------------|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http`       | Telepresence will use http |
| `http2`      | Telepresence will use http2 |

When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
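For instance, under the `portName` strategy, a Service port named as in this minimal sketch (the service and app names are illustrative) would be matched as the `grpc` protocol with the suffix `api`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc            # illustrative name
spec:
  selector:
    app: my-app           # illustrative selector
  ports:
    - name: grpc-api      # "grpc" is the protocol, "api" the suffix
      port: 8080
      targetPort: 8080
```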
The following protocols are recognized:

| Protocol | Meaning |
|----------|---------|
| `http`   | Plaintext HTTP/1.1 traffic |
| `http2`  | Plaintext HTTP/2 traffic |
| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc`   | Same as http2 |

#### Envoy Configuration

The `agent.envoy` structure contains three values:

| Setting      | Meaning |
|--------------|---------|
| `logLevel`   | Log level used by the Envoy proxy. Defaults to "warning". |
| `serverPort` | Port used by the Envoy server. Default 18000. |
| `adminPort`  | Port used for Envoy administration. Default 19000. |

#### Image Configuration

The `agent.image` structure contains the following values:

| Setting    | Meaning |
|------------|---------|
| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". |
| `name`     | The name of the image. Retrieved from Ambassador Cloud if not set. |
| `tag`      | The tag of the image. Retrieved from Ambassador Cloud if not set. |

#### Log level

The `agent.logLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.

#### Resources

The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.

## TLS

In this example, other applications in the cluster expect to speak TLS to your
intercepted application (perhaps you're using a service mesh that does
mTLS).

In order to use `--mechanism=http` (or any features that imply
`--mechanism=http`), you need to tell Telepresence about the TLS
certificates in use.

Tell Telepresence about the certificates in use by adjusting your
[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
annotations on the intercepted Pods:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
```

- The `getambassador.io/inject-terminating-tls-secret` annotation
  (optional) names the Kubernetes Secret that contains the TLS server
  certificate to use for decrypting and responding to incoming
  requests.

  When Telepresence modifies the Service and workload port
  definitions to point at the Telepresence Agent sidecar's port
  instead of your application's actual port, the sidecar will use this
  certificate to terminate TLS.

- The `getambassador.io/inject-originating-tls-secret` annotation
  (optional) names the Kubernetes Secret that contains the TLS
  client certificate to use for communicating with your application.

  You will need to set this if your application expects incoming
  requests to speak TLS (for example, your
  code expects to handle mTLS itself instead of letting a service-mesh
  sidecar handle mTLS for it, or the port definition that Telepresence
  modified pointed at the service-mesh sidecar instead of at your
  application).

  If you do set this, you should set it to the
  same client certificate Secret that you configure the Ambassador
  Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace
as the Pod. The Secret will be mounted into the traffic agent's container.
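As a sketch (the secret, namespace, and file names are illustrative), a terminating secret of type `kubernetes.io/tls` can be created from an existing server certificate and key with:

```console
$ kubectl -n your-namespace create secret tls your-terminating-secret \
    --cert=server.crt --key=server.key
```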
Telepresence understands `type: kubernetes.io/tls` Secrets and
`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

If your cluster is on an isolated network such that it cannot
communicate with Ambassador Cloud, then some additional configuration
is required to acquire a license key in order to use personal
intercepts. A business or enterprise plan is required to generate a license.

### Create a license

1. <ClusterConfig />

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your
kubeconfig context is using the cluster you want to create a license for, then
run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

     Cluster ID: <your cluster ID>
   ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster
There are two separate ways you can add the license to your cluster: manually creating and deploying
the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1. Use this command to generate a Kubernetes Secret config using the license file:

   ```
   $ telepresence license -f <downloaded-license-file>

     apiVersion: v1
     data:
       hostDomain: <base64-encoded value>
       license: <base64-encoded value>
     kind: Secret
     metadata:
       creationTimestamp: null
       name: systema-license
       namespace: ambassador
   ```

2. Save the output as a YAML file and apply it to your
cluster with `kubectl`.

3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
the following into a file (for this example, we'll assume it's called `license-values.yaml`):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     secret:
       # This tells the helm chart not to create the secret since you've created it yourself
       create: false
   ```

4. Install the Helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

5. Ensure that you have the Docker image for the Smart Agent (`datawire/ambassador-telepresence-agent:1.11.0`)
pulled and in a registry your cluster can pull from.

6. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agents.

#### Helm chart manages the secret

1. Get the JWT token from the downloaded license file:

   ```
   $ cat ~/Downloads/ambassador.License_for_yourcluster
   eyJhbGnotarealtoken.butanexample
   ```

2. Create the following values file, substituting your real JWT token for the one used in the example below
(for this example, we'll assume the following is placed in a file called `license-values.yaml`):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     # This is the value from the license file you downloaded. This value is an example and will not work
     value: eyJhbGnotarealtoken.butanexample
     secret:
       # This tells the helm chart to create the secret
       create: true
   ```

3. Install the Helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

Users will now be able to use preview intercepts with the
`--preview-url=false` flag.
Even with the license key, preview URLs
cannot be used without enabling direct communication with Ambassador
Cloud, as Ambassador Cloud is essential to their operation.

If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.

## Mutating Webhook

Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
and in sync as far as GitOps workflows (such as ArgoCD) are concerned.

The injection happens on demand the first time an attempt is made to intercept the workload.

If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
annotation to your workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: disabled
     spec:
       containers:
```

### Service Name and Port Annotations

Telepresence will automatically find all services and all ports that will connect to a workload and make them available
for an intercept, but you can explicitly define that only one service and/or port can be intercepted.

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
+        telepresence.getambassador.io/inject-service-name: my-service
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```

### Ignore Certain Volume Mounts

The `telepresence.getambassador.io/inject-ignore-volume-mounts` annotation can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.

```diff
 spec:
   template:
     metadata:
       annotations:
+        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
     spec:
       containers:
```

### Note on Numeric Ports

If the `targetPort` of your intercepted service is pointing at a port number, in addition to
injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.


Note that this initContainer requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).

For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: ClusterIP
  selector:
    service: your-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  labels:
    service: your-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: your-service
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
      labels:
        service: your-service
    spec:
      containers:
        - name: your-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
```

## Excluding Environment Variables

If your pod contains sensitive variables like a database password or a third-party API key, you may want to exclude those from being propagated through an intercept.
Telepresence allows you to configure this through a ConfigMap that is read by Telepresence, which then removes the sensitive variables from the intercept.

This can be done in two ways:

When installing your traffic-manager through Helm, you can use the `--set` flag and pass a comma-separated list of variables:

`telepresence helm install --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

This also applies when upgrading:

`telepresence helm upgrade --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

Once this is completed, the environment variables will no longer be in the environment file created by an intercept.

The other way to do this is in your custom `values.yaml`; see [customizing your traffic-manager via a values file](../../install/manager):

```yaml
intercept:
  environment:
    excluded: ['DATABASE_PASSWORD', 'API_KEY']
```

You can exclude any number of variables; they just need to match the `key` of the variable within a pod to be excluded. diff --git a/docs/telepresence/2.15/reference/config.md b/docs/telepresence/2.15/reference/config.md new file mode 100644 index 000000000..d3472eb01 --- /dev/null +++ b/docs/telepresence/2.15/reference/config.md @@ -0,0 +1,374 @@
# Laptop-side configuration

There are a number of configuration values that can be tweaked to change how Telepresence behaves.
These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
One important exception is the location of the Traffic Manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per cluster to be able to connect.

## Global Configuration

Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.

### Values

The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
Here is an example configuration to show you the conventions of how Telepresence is configured:
**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
client:
  timeouts:
    agentInstall: 1m
    intercept: 10s
  logLevels:
    userDaemon: debug
  images:
    registry: privateRepo # This overrides the default docker.io/datawire repo
    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
  cloud:
    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
  grpc:
    maxReceiveSize: 10Mi
  telepresenceAPI:
    port: 9980
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    lookupTimeout: 30s
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
    neverProxySubnets:
      - 1.2.3.4/32
```

#### Timeouts

Values for `client.timeouts` are all durations, either as a number of seconds
or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `agentInstall` | Waiting for the Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
| `clusterConnect` | Waiting for the cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |

#### Log Levels

Values for the `client.logLevels` fields are one of the following strings,
case-insensitive:

 - `trace`
 - `debug`
 - `info`
 - `warning` or `warn`
 - `error`

For whichever log level you select, you will get logs labeled with that level and of higher severity
(e.g., if you use `info`, you will also get logs labeled `error`, but NOT logs labeled `debug`).
These are the valid fields for the `client.logLevels` key:

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |

#### Images
Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook)
to handle installation of the `traffic-agents`.

These are the valid fields for the `client.images` key:

| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |

#### Cloud
Values for `client.cloud` are listed in the table below; their types vary, so please see the table for the expected type of each config value.
These fields control how the client interacts with the Cloud service.
| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h |
| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |

Telepresence attempts to auto-detect if the cluster is capable of
communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with
Ambassador Cloud, Telepresence may still prompt you to log in.

Reminder: To use personal intercepts, which normally require a login,
you must have a license key in your cluster and specify which
`agentImage` should be installed by also adding the following to your
`config.yml`:

```yaml
images:
  agentImage: <privateRegistry>/<agentImage>
```

#### Air-gapped clients

If your laptop is on an isolated network, you will need an [air-gapped license](../cluster-config/#air-gapped-cluster) in your cluster. Telepresence will check for this license before requiring a login.

#### Grpc
The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server
The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### DNS

The `client.dns` configuration offers options for configuring the DNS resolution behavior in a client application or system.

The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
| Field | Description | Type | Default |
|-------|-------------|------|---------|
| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fall back in the case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
| `lookupTimeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |

Here is an example values.yaml:
```yaml
client:
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    localIP: 8.8.8.8
    lookupTimeout: 30s
```

##### Mappings

Allows you to map hostnames to aliases. This is useful when you want to redirect traffic from one service to another within the cluster.

In this example, the name `postgres` is mapped to the service named `psql`, which lives in a separate namespace titled `big-data`:

```yaml
dns:
  mappings:
    - name: postgres
      aliasFor: psql.big-data
```

##### Exclude

Lists service names to be excluded from the Telepresence DNS server. This is useful when you want your application to interact with a local service instead of a cluster service. In this example, "redis" will not be resolved by the cluster, but locally.

```yaml
dns:
  excludes:
    - redis
```

#### Routing

##### AlsoProxySubnets

When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
All connections to addresses that the subnet spans will be dispatched to the cluster.

Here is an example values.yaml for the subnet `1.2.3.4/32`:
```yaml
client:
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
```

##### NeverProxySubnets

When using `neverProxySubnets`, you provide a list of subnets. These will never be routed via the TUN device,
even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
Telepresence connects is the route they will keep.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    neverProxySubnets:
      - 1.2.3.4/32
```

##### Using AlsoProxy together with NeverProxy

Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
Usually this means that the most specific route will win.

So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:

```yaml
neverProxySubnets: [10.0.0.0/16]
alsoProxySubnets: [10.0.5.0/24]
```

Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:

```yaml
alsoProxySubnets: [10.0.0.0/16]
neverProxySubnets: [10.0.5.0/24]
```

Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.

## Local Overrides

In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.

### Config for all clusters
Telepresence uses a `config.yml` file to store and change the configuration values that will be used for all clusters you use Telepresence with.
The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
The definitions of these values are identical to those values in the `client` config above.

Here is an example configuration to show you the conventions of how Telepresence is configured:
**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```


## Workstation Per-Cluster Configuration

Configuration that is specific to a cluster can also be overridden per workstation by modifying your `$KUBECONFIG` file.
It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.

### Values

The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
Example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
        dns:
          include-suffixes: [.private]
          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
          local-ip: 8.8.8.8
          lookup-timeout: 30s
        never-proxy: [10.0.0.0/16]
        also-proxy: [10.0.5.0/24]
  name: example-cluster
```

#### Manager

This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
the Traffic Manager. When not default, that setting needs to be configured in the workstation's kubeconfig for the cluster.

The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the Traffic Manager is to be found.

Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
  - cluster:
      server: https://127.0.0.1
      extensions:
        - name: telepresence.io
          extension:
            manager:
              namespace: staging
    name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45 diff --git a/docs/telepresence/2.15/reference/dns.md b/docs/telepresence/2.15/reference/dns.md new file mode 100644 index 000000000..2f263860e --- /dev/null +++ b/docs/telepresence/2.15/reference/dns.md @@ -0,0 +1,80 @@
# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster public IP>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected, as Telepresence cannot yet resolve the service by its short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  <!DOCTYPE html>
  <html>
  <head>
    <title>Emoji Vote</title>
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  <!DOCTYPE html>
  <html>
  <head>
    <title>Emoji Vote</title>
  ...
```

Now curling that service by its short name works, and will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

### Supported Query Types

The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
`MX`, `NS`, `PTR`, `SRV`, and `TXT`.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups. diff --git a/docs/telepresence/2.15/reference/docker-run.md b/docs/telepresence/2.15/reference/docker-run.md new file mode 100644 index 000000000..27b2f316f --- /dev/null +++ b/docs/telepresence/2.15/reference/docker-run.md @@ -0,0 +1,90 @@
---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

## Use the Intercept Specification
The recommended way to use Telepresence with Docker is to create an [Intercept Specification](../intercepts/specs) that uses Docker images as intercept handlers.

## Using command flags

### The docker flag
You can start the Telepresence daemon in a Docker container on your laptop using the command:

```console
$ telepresence connect --docker
```

The `--docker` flag is a global flag, and if passed directly, as in `telepresence intercept --docker`, then the implicit connect that takes place when no connection is active will use a container-based daemon.

### The docker-run flag

If you want your intercept to go to another Docker container, you can use the `--docker-run` flag. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

```console
$ telepresence intercept <service-name> --port <port> --docker-run -- <docker-run arguments> <image>
```

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

It's recommended that you always use `--docker-run` in combination with the global `--docker` flag (see the example after the lists below), because that makes everything less intrusive:
- No admin user access is needed. Network modifications are confined to a Docker network.
- There's no need for special filesystem mount software like MacFUSE or WinFSP. The volume mounts happen in the Docker engine.

The following happens under the hood when both flags are in use:

- The network for the intercept handler will be set to the same as the network used by the daemon. This guarantees that the
  intercept handler can access the Telepresence VIF, and hence has access to the cluster.
- Volume mounts will be automatic and made using the Telemount Docker volume plugin so that all volumes exposed by the intercepted
  container are mounted on the intercept handler container.
- The environment of the intercepted container becomes the environment of the intercept handler container.
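For example, a session combining both flags might look like this (a minimal sketch; the workload name `web` and image name `web-image:dev` are illustrative):

```console
$ telepresence connect --docker
$ telepresence intercept web --port 8080 --docker-run -- web-image:dev
```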
### The docker-build flag

The `--docker-build <docker-context>` flag and the repeatable `--docker-build-opt key=value` flag enable containers to be built on the fly by the intercept command.

When using `--docker-build`, the image name used in the argument list must be verbatim `IMAGE`. The word acts as a placeholder and will be replaced by the ID of the image that is built.

The `--docker-build` flag implies `--docker-run`.

## Using the docker-run flag without docker

It is possible to use `--docker-run` with a daemon running on your host, which is the default behavior of Telepresence.

However, it isn't recommended, since you'll be in a hybrid mode: while your intercept runs in a container, the daemon will modify the host network, and if remote mounts are desired, they may require extra software.

The ability to use this special combination is retained for backward compatibility reasons. It might be removed in a future release of Telepresence.

The `--port` flag has slightly different semantics and can be used in situations when the local and container ports must be different. This
is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Examples

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-run -- frontend-v2
```

Now, imagine that the `frontend-v2` image is built by a `Dockerfile` that resides in the directory `images/frontend-v2`. You can build and intercept directly.

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-build images/frontend-v2 --docker-build-opt tag=mytag -- IMAGE
```

## Automatic flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<intercept name>-<port>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-v <local mount dir>:<container mount dir>` Volume mount specification; see CLI help for the `--docker-mount` flags for more info

When used with a container-based daemon:
- `--rm` Mandatory, because the volume mounts cannot be removed until the container is removed.
- `-v <telemount volume>:<container mount dir>` Volume mount specifications propagated from the intercepted container

When used with a daemon that isn't container-based:
- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
- `-p <port>:<container-port>` The local port for the intercept and the container port diff --git a/docs/telepresence/2.15/reference/environment.md b/docs/telepresence/2.15/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.15/reference/environment.md @@ -0,0 +1,46 @@
---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the code of the intercepted service running on your laptop.
There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process. diff --git a/docs/telepresence/2.15/reference/inside-container.md b/docs/telepresence/2.15/reference/inside-container.md new file mode 100644 index 000000000..48a38b5a3 --- /dev/null +++ b/docs/telepresence/2.15/reference/inside-container.md @@ -0,0 +1,19 @@
# Running Telepresence inside a container

All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a
Docker container.

Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and
it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software
like macFUSE or WinFSP to mount the remote file systems.
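As a minimal sketch, a containerized daemon session can be started, inspected, and shut down with the standard CLI commands:

```console
$ telepresence connect --docker
$ telepresence status
$ telepresence quit
```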
The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only
way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.

It's highly recommended that you use the new [Intercept Specification](../intercepts/specs) to set things up. That way, Telepresence can do
all the plumbing needed to start the intercept handler with the correct environment and volume mounts.
Otherwise, doing a fully container-based intercept manually with all bells and whistles is a complicated process that involves:
- Capturing the details of an intercept
- Ensuring that the [Telemount](https://github.com/datawire/docker-volume-telemount#readme) Docker volume plugin is installed
- Creating volumes for all remotely exposed directories
- Starting the intercept handler container using the same network as the daemon. diff --git a/docs/telepresence/2.15/reference/intercepts/cli.md b/docs/telepresence/2.15/reference/intercepts/cli.md new file mode 100644 index 000000000..d7e482329 --- /dev/null +++ b/docs/telepresence/2.15/reference/intercepts/cli.md @@ -0,0 +1,335 @@
import Alert from '@material-ui/lab/Alert';

# Configuring intercept using CLI

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the
`--namespace` option. When this option is used, and `--workload` is
not used, then the given name is interpreted as the name of the
workload and the name of the intercept will be constructed from that
name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept
`hello-myns`. In order to remove the intercept, you will need to run
`telepresence leave hello-myns` instead of just `telepresence leave
hello`.

The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is
being intercepted; see [this doc](../../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command
will intercept all traffic bound to the service and proxy it to your
laptop. This includes traffic coming through your ingress controller,
so use this option carefully so as not to disrupt production
environments.

```shell
telepresence intercept <deployment-name> --port=<TCP-port>
```

If you *are* logged in to Ambassador Cloud, setting the
`--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
```

This will output an HTTP header that you can set on your request for
that traffic to be intercepted:

```console
$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
Using Deployment <deployment-name>
intercepted
    Intercept name: <full-name-of-intercept>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<TCP-port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<full-name-of-intercept>")
```

Run `telepresence status` to see the list of active intercepts.
```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster public IP>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop-user>@<laptop-name>
```

Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag             | Description                                                    | Required |
|------------------|----------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress                                 | yes      |
| `--ingress-port` | The port for the ingress                                       | yes      |
| `--ingress-tls`  | Whether TLS should be used                                     | no       |
| `--ingress-l5`   | Whether a different hostname should be used in request headers | no       |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you
need to tell Telepresence which service port you are trying to
intercept. To specify, you can either use the name of the service
port or the port number itself. To see which options might be
available to you and your service, use kubectl to describe your
service or look in the object's YAML. For more information on multiple
ports, see the [Kubernetes documentation][kube-multi-port-services].

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <deployment-name> --port=<local-port>:<service-port-identifier>
Using Deployment <deployment-name>
intercepted
    Intercept name         : <full-name-of-intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the
service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create
a new intercept the same way you did above and it will change which
service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a
workload, so Telepresence is able to auto-detect which service it
should intercept based on the workload you are trying to intercept.
But if you use something like
[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
two services (that use the same labels) to manage traffic between a
canary and a stable service.

Fortunately, if you know which service you want to use when
intercepting a workload, you can use the `--service` flag.
So in the
aforementioned example, if you wanted to use the `echo-stable` service
when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generated-hash> --port <local-port> --service echo-stable
Using ReplicaSet echo-rollout-<generated-hash>
intercepted
    Intercept name    : echo-rollout-<generated-hash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this
by creating more than one intercept that identifies the same workload using the `--workload` flag.

Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
targeting the same `multi-echo` deployment.

```console
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-http
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8080
    Service Port Identifier: http
    Volume Mount Point     : /tmp/telfs-893700837
    Intercepting           : all TCP requests
    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-grpc
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8443
    Service Port Identifier: grpc
    Volume Mount Point     : /tmp/telfs-1277723591
    Intercepting           : all TCP requests
    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application
container; they usually provide auxiliary functionality to an
application and can usually be reached at
`localhost:${SIDECAR_PORT}`. For example, a common use case for a
sidecar is to proxy requests to a database: your application would
connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
connect to the database, perhaps augmenting the connection with TLS or
authentication.

When intercepting a container that uses sidecars, you might want those
sidecars' ports to be available to your local application at
`localhost:${SIDECAR_PORT}`, exactly as they would be if running
in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <workload-name> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
Using Deployment <name of deployment>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the
flag (`--to-pod=<port1> --to-pod=<port2>`).

## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

This utilizes an initContainer that requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.

This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`

Intercepting headless services without a selector is not supported.

## Sharing intercepts with teammates

Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You
can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts),
picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. Once that is done,
the intercept command will be easily accessible for all your teammates. Note that this requires the free enhanced
client to be installed and to be logged in (`telepresence login`).

To instantiate an intercept based on a saved intercept, simply run
`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a
saved intercept in Ambassador Cloud and will use it if found; otherwise, an error will be returned.

Saved Intercepts can be [managed through Ambassador Cloud](../../../../../cloud/latest/telepresence-saved-intercepts).

## Specifying the intercept traffic target

By default, it's assumed that your local app is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP
at the port given by `--port`. If you wish to change this behavior and send traffic to a different IP address, you can use the `--address` parameter
to `telepresence intercept`. Say your machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`.
You would run this as:

```console
$ telepresence intercept echo-easy --address 172.16.0.19 --port 8080
Using Deployment echo-easy
   Intercept name         : echo-easy
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 172.16.0.19:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
   Intercepting           : HTTP requests with headers
         'x-telepresence-intercept-id: 8e0dd8ea-b55a-43bd-ad04-018b9de9cfab:echo-easy'
   Preview URL            : https://laughing-curran-5375.preview.edgestack.me
   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
```

diff --git a/docs/telepresence/2.15/reference/intercepts/index.md b/docs/telepresence/2.15/reference/intercepts/index.md new file mode 100644 index 000000000..5b317aeec --- /dev/null +++ b/docs/telepresence/2.15/reference/intercepts/index.md @@ -0,0 +1,61 @@

import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, the Telepresence Traffic Manager ensures
that a Traffic Agent has been injected into the intercepted workload.
The injection is triggered by a Kubernetes Mutating Webhook and will
only happen once. The Traffic Agent is responsible for redirecting
intercepted traffic to the developer's workstation.

An intercept is either global or personal.

### Global intercept
This intercept will intercept all `tcp` and/or `udp` traffic to the
intercepted service and send all of that traffic down to the developer's
workstation. This means that a global intercept will affect all users of
the intercepted service.

### Personal intercept
This intercept will intercept specific HTTP requests, allowing other HTTP
requests through to the regular service. The selection is based on HTTP
headers or paths, and allows for intercepts which only intercept traffic
tagged as belonging to a given developer.

There are two ways of configuring an intercept:
- directly from the [CLI](./cli)
- from an [Intercept Specification](./specs)

## Intercept behavior when using single-user versus team mode

Switching the Traffic Manager from `single-user` mode to `team` mode changes
the Telepresence defaults in two ways.

First, in team mode, Telepresence will require that the user is logged in to
Ambassador Cloud, or is using an api-key. Team mode also causes Telepresence
to default to a personal intercept using `--http-header=auto --http-path-prefix=/`.
Personal intercepts are important for working in a shared cluster with teammates,
and are important for the preview URL functionality below. See `telepresence intercept --help`
for information on using the `--http-header` and `--http-path-xxx` flags to
customize which requests are intercepted.

Secondly, team mode causes Telepresence to default to `--preview-url=true`. This
tells Telepresence to take advantage of Ambassador Cloud to create a preview URL
for this intercept, creating a shareable URL that automatically sets the
appropriate headers to have requests coming from the preview URL be
intercepted.

## Supported workloads

Kubernetes has various
[workloads](https://kubernetes.io/docs/concepts/workloads/).
Currently, Telepresence supports intercepting (installing a
traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
While many of our examples use Deployments, they would also work on
ReplicaSets and StatefulSets.

diff --git a/docs/telepresence/2.15/reference/intercepts/manual-agent.md b/docs/telepresence/2.15/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..8c24d6dbe --- /dev/null +++ b/docs/telepresence/2.15/reference/intercepts/manual-agent.md @@ -0,0 +1,267 @@

import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
as very strict company security policies preventing it.

Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable;
try the Mutating Webhook before proceeding.

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file have
the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service-config.yaml
agentImage: docker.io/datawire/tel2:2.7.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.7.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the YAML for the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.7.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```

Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.

### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You need to edit the `Deployment` YAML you generated to include the container, init-container, and volumes. These are placed as elements
of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
You also need to modify `spec.template.metadata.annotations` and add the annotation
`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following:

```diff
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: "my-service"
   labels:
     service: my-service
 spec:
   replicas: 1
   selector:
     matchLabels:
       service: my-service
   template:
     metadata:
       labels:
         service: my-service
+      annotations:
+        telepresence.getambassador.io/manually-injected: "true"
     spec:
       containers:
         - name: echo-container
           image: jmalloc/echo-server
           ports:
             - containerPort: 8080
           resources: {}
+        - args:
+            - agent
+          env:
+            - name: _TEL_AGENT_POD_IP
+              valueFrom:
+                fieldRef:
+                  apiVersion: v1
+                  fieldPath: status.podIP
+          image: docker.io/datawire/tel2:2.7.0-beta.12
+          name: traffic-agent
+          ports:
+            - containerPort: 9900
+              protocol: TCP
+          readinessProbe:
+            exec:
+              command:
+                - /bin/stat
+                - /tmp/agent/ready
+          resources: { }
+          volumeMounts:
+            - mountPath: /tel_pod_info
+              name: traffic-annotations
+            - mountPath: /etc/traffic-agent
+              name: traffic-config
+            - mountPath: /tel_app_exports
+              name: export-volume
+      initContainers:
+        - args:
+            - agent-init
+          image: docker.io/datawire/tel2:2.7.0-beta.12
+          name: tel-agent-init
+          resources: { }
+          securityContext:
+            capabilities:
+              add:
+                - NET_ADMIN
+          volumeMounts:
+            - mountPath: /etc/traffic-agent
+              name: traffic-config
+      volumes:
+        - downwardAPI:
+            items:
+              - fieldRef:
+                  apiVersion: v1
+                  fieldPath: metadata.annotations
+                path: annotations
+          name: traffic-annotations
+        - configMap:
+            items:
+              - key: my-service
+                path: config.yaml
+            name: telepresence-agents
+          name: traffic-config
+        - emptyDir: { }
+          name: export-volume
```

diff --git a/docs/telepresence/2.15/reference/intercepts/specs.md b/docs/telepresence/2.15/reference/intercepts/specs.md new file mode 100644 index 000000000..9ac074c2e --- /dev/null +++ b/docs/telepresence/2.15/reference/intercepts/specs.md @@ -0,0 +1,467 @@

# Configuring intercepts using specifications

This page references the different options available in the Telepresence intercept specification.

With Telepresence, you can provide a file that defines how an intercept should work.

## Root

Your intercept specification is where you can create a standard, easy-to-use configuration to run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic.

There are many ways to configure your specification to suit your needs; the table below shows the possible options within your specification,
and you can see the spec's schema, with all available options and formats, [here](#ide-integration).

| Options                         | Description                                                                                              |
|---------------------------------|----------------------------------------------------------------------------------------------------------|
| [name](#name)                   | Name of the specification.                                                                               |
| [connection](#connection)       | Connection properties to use when Telepresence connects to the cluster.                                  |
| [handlers](#handlers)           | Local processes to handle traffic and/or setup.                                                          |
| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete.  |
| [workloads](#workloads)         | Remote workloads that are intercepted, keyed by workload name.                                           |

### Name
The name is optional. If you don't specify a name, the filename of the specification file is used.

```yaml
name: echo-server-spec
```

### Connection

The connection option is used to define how Telepresence connects to your cluster.
```yaml
connection:
  context: "shared-cluster"
  mappedNamespaces:
    - "my-app"
```

You can pass the most common parameters of the `telepresence connect` command (see `telepresence connect --help`) using a camel case format.

Some of the most commonly used options include:

| Options          | Type        | Format                  | Description                                             |
|------------------|-------------|-------------------------|----------------------------------------------------------|
| context          | string      | N/A                     | The kubernetes context to use                           |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with |

## Handlers

A handler is code running locally.

It can receive traffic for an intercepted service, or can set up prerequisites to run before/after the intercept itself.

When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third-party service, ...) running on your machine.
A handler can be a Docker container, or an application running natively.

The sample below creates an intercept handler, gives it the name `echo-server`, and uses a docker container. The container will
automatically have access to the ports, environment, and mounted directories of the intercepted container.

The ports field is important for the intercept handler while running in Docker: it indicates which ports should be exposed to the host. If you want to access the handler locally (for example, to attach a debugger to your container), this field must be provided.

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    docker:
      image: jmalloc/echo-server:latest
      ports:
        - 8080
```

If you don't want to use Docker containers, you can still configure your handlers to start via a regular script.
The snippet below shows how to create a handler called `echo-server` that sets an environment variable of `PORT=8080`
and starts the application.

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: bin/echo-server
```

Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example,
simulate an intercepted service going down:

```yaml
handlers:
  - name: no-op
```

The table below defines the parameters that can be used within the handlers section.
| Options              | Type        | Format                 | Description                                                                    |
|----------------------|-------------|------------------------|----------------------------------------------------------------------------------|
| name                 | string      | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler that the intercepts use to reference it      |
| environment          | map list    | N/A                    | Defines environment variables within your handler                             |
| environment[*].name  | string      | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable                                          |
| environment[*].value | string      | N/A                    | The value for the environment variable                                        |
| [script](#script)    | map         | N/A                    | Tells the handler to run as a script; mutually exclusive with docker          |
| [docker](#docker)    | map         | N/A                    | Tells the handler to run as a docker container; mutually exclusive with script |

### Script

The handler's script element defines the parameters:

| Options | Type   | Format      | Description                                                                                                                  |
|---------|--------|-------------|--------------------------------------------------------------------------------------------------------------------------------|
| run     | string | N/A         | The script to run. Can be multi-line                                                                                        |
| shell   | string | bash|zsh|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable |

### Docker
The handler's docker element defines the parameters. The `build` and `image` parameters are mutually exclusive:

| Options             | Type        | Format | Description                                                                                                                               |
|---------------------|-------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------|
| [build](#build)     | map         | N/A    | Defines how to build the image from source using the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command |
| [compose](#compose) | map         | N/A    | Defines how to integrate with an existing Docker Compose file                                                                             |
| image               | string      | image  | Defines which image to be used                                                                                                            |
| ports               | int list    | N/A    | The ports which should be exposed to the host                                                                                             |
| options             | string list | N/A    | Options for docker run [options](https://docs.docker.com/engine/reference/commandline/run/#options)                                       |
| command             | string      | N/A    | Optional command to run                                                                                                                   |
| args                | string list | N/A    | Optional command arguments                                                                                                                |

#### Build

The docker build element defines the parameters:

| Options | Type        | Format | Description                                                                                 |
|---------|-------------|--------|-------------------------------------------------------------------------------------------------|
| context | string      | N/A    | Defines either a path to a directory containing a Dockerfile, or a url to a git repository |
| args    | string list | N/A    | Additional arguments for the docker build command.                                         |

For additional information on these parameters, please check the docker [documentation](https://docs.docker.com/engine/reference/commandline/run).

#### Compose

The Docker Compose element defines the way to integrate with the tool of the same name.

| Options              | Type     | Format       | Description                                                                       |
|----------------------|----------|--------------|---------------------------------------------------------------------------------------|
| context              | string   | N/A          | An optional Docker context, i.e. the path to the directory containing your docker compose file |
| [services](#service) | map list |              | The services to use with the Telepresence integration                             |
| spec                 | map      | compose spec | Optional embedded docker compose specification.                                   |
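
For orientation, here is a minimal sketch of a handler that delegates to an existing Docker Compose project, using the `compose` options from the table above together with the `services` override described in the Service section below. The handler name, `context` path, and service name are hypothetical and would need to match your own compose setup:

```yaml
handlers:
  - name: my-app
    docker:
      compose:
        # Hypothetical path to the directory containing the compose file
        context: ../compose-project
        services:
          # Route intercepted traffic to the compose service named "myapp"
          - name: myapp
            behavior: interceptHandler
```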
##### Service

The service describes how to integrate with each service from your Docker Compose file, and can be seen as an override
mechanism. A service is normally not provided when you want to keep the original behavior, but can be provided for
documentation purposes using the `local` behavior.

A service can be declared either as a property of `compose` in the Intercept Specification, or as an `x-telepresence`
extension in the Docker compose specification. The syntax is the same in both cases, but the `name` property must not be
used together with `x-telepresence` because it is implicit.

| Options               | Type   | Format                        | Description                                                                   |
|-----------------------|--------|-------------------------------|-----------------------------------------------------------------------------------|
| name                  | string | [a-zA-Z][a-zA-Z0-9_-]*        | The name of your service in the compose file                                 |
| [behavior](#behavior) | string | interceptHandler|remote|local | Behavior of the service in the context of the intercept.                     |
| [mapping](#mapping)   | map    |                               | Optional mapping to a cluster service. Only applicable for `behavior: remote` |

###### Behavior

| Value            | Description                                                                                                      |
|------------------|------------------------------------------------------------------------------------------------------------------|
| interceptHandler | The service runs locally and will receive traffic from the intercepted pod.                                     |
| remote           | The service will not run as part of docker compose. Instead, traffic is redirected to a service in the cluster. |
| local            | The service runs locally without modifications. This is the default.                                            |

###### Mapping

| Options   | Type   | Description                                                                                             |
|-----------|--------|-----------------------------------------------------------------------------------------------------------|
| name      | string | The name of the cluster service to link the compose service with                                       |
| namespace | string | The cluster namespace for the service. This is optional and defaults to the namespace of the intercept |

**Examples**

Consider the following Docker Compose file:

```yaml
services:
  redis:
    image: redis:6.2.6
    ports:
      - "6379"
  postgres:
    image: "postgres:14.1"
    ports:
      - "5432"
  myapp:
    build:
      # Directory containing the Dockerfile and source code
      context: ../../myapp
    ports:
      - "8080"
    volumes:
      - .:/code
    environment:
      DEV_MODE: "true"
```

This will use the myapp service as the interceptor:
```yaml
services:
  - name: myapp
    behavior: interceptHandler
```

This will prevent the service from running locally; DNS will instead point to the cluster service with the same name:
```yaml
services:
  - name: postgres
    behavior: remote
```

Adding a mapping allows you to select the cluster service more accurately, here by indicating to Telepresence that
the postgres service should be mapped to the **psql** service in the **big-data** namespace.
```yaml
services:
  - name: postgres
    behavior: remote
    mapping:
      name: psql
      namespace: big-data
```

As an alternative, the `services` can instead be added as `x-telepresence` extensions in the docker compose file:

```yaml
services:
  redis:
    image: redis:6.2.6
    ports:
      - "6379"
  postgres:
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
    image: "postgres:14.1"
    ports:
      - "5432"
  myapp:
    x-telepresence:
      behavior: interceptHandler
    build:
      # Directory containing the Dockerfile and source code
      context: ../../myapp
    ports:
      - "8080"
    volumes:
      - .:/code
    environment:
      DEV_MODE: "true"
```

## Prerequisites

When creating an intercept specification, there is an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.

Prerequisites is an array, so it can handle many options prior to starting your intercept and running your intercept handlers.
The elements of the `prerequisites` array correspond to [`handlers`](#handlers).

The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts,
the second will be run after cleaning up the intercepts.

If a prerequisite's create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.

```yaml
prerequisites:
  - create: build-binary
    delete: rm-binary
```

The table below defines the parameters available within the prerequisites section.

| Options | Description                                        |
|---------|----------------------------------------------------|
| create  | The name of a handler to run before the intercept  |
| delete  | The name of a handler to run after the intercept   |

## Workloads

Workloads define the services in your cluster that will be intercepted.

The example below creates an intercept on a service called `echo-server` on port 8080.
It creates a personal intercept with the header `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.

```yaml
workloads:
  # You can define one or more workload(s)
  - name: echo-server
    intercepts:
      # You can define one or more intercept(s)
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server
```

This table defines the parameters available within a workload.

| Options    | Type                          | Format                  | Description                                         | Default |
|------------|-------------------------------|-------------------------|-----------------------------------------------------|---------|
| name       | string                        | [a-z][a-z0-9-]*         | Name of the workload to intercept                   | N/A     |
| namespace  | string                        | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept              | N/A     |
| intercepts | [intercept](#intercepts) list | N/A                     | The list of intercepts associated with the workload | N/A     |

### Intercepts
This table defines the parameters available for each intercept.

| Options    | Type                   | Format | Description                               | Default        |
|------------|------------------------|--------|-------------------------------------------|----------------|
| enabled    | boolean                | N/A    | If set to false, disables this intercept. | true           |
| headers    | [header](#header) list | N/A    | Headers that will filter the intercept.   | Auto generated |
| service    | name           | [a-z][a-z0-9-]{1,62} | Name of the service to intercept                                      | N/A            |
| localPort  | integer|string | 0-65535              | The port for the service which is intercepted                         | N/A            |
| port       | integer        | 0-65535              | The port the service in the cluster is running on                     | N/A            |
| pathPrefix | string         | N/A                  | Path prefix filter for the intercept. Defaults to "/"                 | /              |
| previewURL | boolean        | N/A                  | Determines whether a preview URL should be created                    | true           |
| banner     | boolean        | N/A                  | Used in the preview URL option; displays a banner on the preview page | true           |

#### Header

You can define headers to filter the requests which should end up on your machine when intercepting.

| Options | Type   | Format | Description         | Default |
|---------|--------|--------|---------------------|---------|
| name    | string | N/A    | Name of the header  | N/A     |
| value   | string | N/A    | Value of the header | N/A     |

Telepresence specs also support dynamic headers with **variables**:

```yaml
intercepts:
  - headers:
      - name: test-{{ .Telepresence.Username }}
        value: "{{ .Telepresence.Username }}"
```

| Options               | Type   | Description                           |
|-----------------------|--------|-----------------------------------------|
| Telepresence.Username | string | The name of the user running the spec |

## Usage

### Running your specification from the CLI
After you've written your intercept specification, you will want to run it.

To start your intercept, use this command:

```bash
telepresence intercept run <path/to/spec.yaml>
```

This will validate and run your spec. In case you just want to validate it, you can do so by using this command:

```bash
telepresence intercept validate <path/to/spec.yaml>
```

### Using and sharing your specification as a CRD

If you want to share specifications across your team or your organization, you can save specifications as CRDs inside your cluster.

The Intercept Specification CRD requires Kubernetes 1.22 or higher; if you are using an older cluster you will
need to install using helm directly, and use the --disable-openapi-validation flag.

1. Install the CRD object in your cluster (one-time installation):

   ```bash
   telepresence helm install --crds
   ```

1. Then you need to deploy the specification in your cluster as a CRD:

   ```yaml
   apiVersion: getambassador.io/v1alpha2
   kind: InterceptSpecification
   metadata:
     name: my-crd-spec
     namespace: my-crd-namespace
   spec:
     {intercept specification}
   ```

   So the `echo-server` example looks like this:

   ```bash
   kubectl apply -f - < # See note below
   ```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<unique-id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
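
For reference, a `config.yaml` of the kind described above might look like the following minimal sketch; every value in it is hypothetical and must be replaced with details obtained from your cluster administrator:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://<cluster-endpoint>            # hypothetical API server URL
      certificate-authority-data: <base64-ca-cert>  # hypothetical CA bundle
users:
  - name: tp-user
    user:
      token: <service-account-token>                # the Service Account token described above
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: tp-user
current-context: my-context
```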
## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user              # Update value for appropriate user name
  namespace: ambassador      # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: tp-user
    kind: ServiceAccount
    namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace-only telepresence user access

RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

For each accessible namespace:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user              # Update value for appropriate user name
  namespace: tp-namespace    # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace    # Should be the same as metadata.namespace of the above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace    # Should be the same as metadata.namespace of the above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user              # Should be the same as metadata.name of the above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
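
One way to verify that these role bindings behave as intended is `kubectl auth can-i` with Service Account impersonation; a quick sketch, assuming the `tp-user` Service Account and `tp-namespace` namespace from the manifests above:

```console
$ kubectl auth can-i watch deployments.apps --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
yes
$ kubectl auth can-i create deployments.apps --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
no
```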
diff --git a/docs/telepresence/2.15/reference/restapi.md b/docs/telepresence/2.15/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.15/reference/restapi.md @@ -0,0 +1,93 @@

# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in a `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active.
2. The "/healthz" endpoint must respond with OK.
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.

#### test endpoint using curl
Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.

### intercept-info
`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted in case `intercepted` is `false`.

#### test endpoint using curl
Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
```

diff --git a/docs/telepresence/2.15/reference/routing.md b/docs/telepresence/2.15/reference/routing.md new file mode 100644 index 000000000..e974adbe1 --- /dev/null +++ b/docs/telepresence/2.15/reference/routing.md @@ -0,0 +1,69 @@

# Connection Routing

## Outbound

### DNS resolution
When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
[cluster DNS configuration](../config/#dns).

#### Cluster-side DNS lookups
The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single-label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../config#alsoproxysubnets) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. Example:

```bash
curl some-host
```

results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.

#### DNS recursion
When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and respond with an NXNAME record. Telepresence performs this solution to the best of its ability, but may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.

The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
#### Connection origin +A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach: Telepresence only knows that the request originated from the workstation; it cannot know that it is intended to originate from a specific pod when multiple intercepts are active. + +A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. If a correct connection origin is required, the intercept must be made on a workload that has been deployed in the cluster. + +There are multiple reasons why the connection origin matters. One is that it is important that the request originates from the correct namespace. Example: + +```bash +curl some-host +``` +results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules; if the request originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right. + +### Recursion detection +It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host's network, which means that both DNS and L4 routing may be subjected to recursion. + +#### DNS recursion +When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check whether such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`. + +Telepresence handles this by sending one initial DNS-query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and answered with an NXNAME record. This detection is best-effort and may not be completely accurate in all situations: the DNS-resolver may yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster. + +#### Connect recursion +A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions would occur if the VIF simply dispatched such attempts on to the cluster. + +The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, the currently trapped attempts are allowed to proceed. If the first attempt failed, the currently trapped attempts fail as well. + +## Inbound + +The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this was accomplished by the traffic-manager dynamically creating a port that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent. + +In 2.3.2 this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure. + +##### Footnotes: +

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the valid service subnet. Creating such a dummy service is currently the only way to get Kubernetes to expose that subnet, as illustrated below.
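The same trick can be reproduced manually: an attempt to create a service with a cluster IP outside the service subnet is rejected with an error that reveals the valid range (the service name and subnet below are made up, and the exact wording varies between Kubernetes versions):

```console
$ kubectl create service clusterip subnet-probe --tcp=80 --clusterip=1.2.3.4
The Service "subnet-probe" is invalid: spec.clusterIPs: Invalid value: []string{"1.2.3.4"}:
failed to allocate IP 1.2.3.4: provided IP is not in the valid range.
The range of valid IPs is 10.96.0.0/12
```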

diff --git a/docs/telepresence/2.15/reference/tun-device.md b/docs/telepresence/2.15/reference/tun-device.md new file mode 100644 index 000000000..af7e3828c --- /dev/null +++ b/docs/telepresence/2.15/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface + +The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself. + +### TUN-Device +The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3). + +## Gains when using the VIF + +### Both TCP and UDP +The TUN-device is capable of routing both TCP and UDP traffic. + +### No SSH required + +The VIF approach is somewhat similar to using `sshuttle` but without any requirements for extra software, configuration or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user. + +#### sshfs without ssh encryption +When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client nor in the traffic-agent, and the traffic-agent container can run as the default user. + +### No Firewall rules +With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up. diff --git a/docs/telepresence/2.15/reference/volume.md b/docs/telepresence/2.15/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.15/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts + +import Alert from '@material-ui/lab/Alert'; + +Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node. + +``` +telepresence intercept <service name> --port <port> --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
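From that subshell you can browse the Pod's mounts under `/tmp`; for example, assuming the Pod has the default service account token mounted, something like this (illustrative output):

```console
bash-3.2$ ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
```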
+ +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept <service name> --port <port> --mount=true -- /bin/bash +Using Deployment <deployment name> +intercepted + Intercept name : <intercept name> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<port> + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, any path that the code you run locally (either from the subshell or from the intercept command) uses to reach the mounted volumes must be prefixed with the `$TELEPRESENCE_ROOT` environment variable. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable. diff --git a/docs/telepresence/2.15/reference/vpn.md b/docs/telepresence/2.15/reference/vpn.md new file mode 100644 index 000000000..457cc873c --- /dev/null +++ b/docs/telepresence/2.15/reference/vpn.md @@ -0,0 +1,89 @@ +
+ + +# Telepresence and VPNs + +It is often important to set up Kubernetes API server endpoints to be accessible only via a VPN. +In setups like these, users need to connect first to their VPN, and then use Telepresence to connect +to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use, +the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts. + + + +The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed. + + + +## VPN Configuration + +Let's begin by reviewing what a VPN does and imagining a sample configuration that might come +to conflict with Telepresence. +Usually, a VPN client adds two kinds of routes to your machine when you connect. +The first serves to override your default route; in other words, it makes sure that packets +you send out to the public internet go through the private tunnel instead of your +ethernet or wifi adapter. We'll call this a `public VPN route`. +The second kind of route is a `private VPN route`. These are the routes that allow your +machine to access hosts inside the VPN that are not accessible to the public internet. +Generally speaking, this is a more circumscribed route that will connect your machine +only to reachable hosts on the private network, such as your Kubernetes API server. + +This diagram represents what happens when you connect to a VPN, supposing that your +private network spans the CIDR range `10.0.0.0/8`. + +![VPN routing](../images/vpn-routing.jpg) + +## Kubernetes configuration + +One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services. +This is one of the key elements of Kubernetes networking, as it allows applications on the cluster +to reach each other. When Telepresence connects you to the cluster, it will try to connect you +to the IP addresses that your cluster assigns to services and pods. +Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes +cluster will place resources in. Let's imagine your cluster is configured to place services in +`10.130.0.0/16` and pods in `10.132.0.0/16`: + +![VPN Kubernetes config](../images/vpn-k8s-config.jpg) + +## Telepresence conflicts + +When you run `telepresence connect` to connect to a cluster, it talks to the API server +to figure out what pod and service CIDRs it needs to map on your machine. If it detects +that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an +error and inform you of the conflicting subnets: + +```console +$ telepresence connect +telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1" +``` + +To resolve this, you'll need to carefully consider what your network layout looks like. +Telepresence is refusing to map these conflicting subnets because mapping them +could render certain hosts that are inside the VPN completely unreachable. However, +you (or your network admin) know better than anyone how hosts are spread out inside your VPN. +Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only +being spun up in one of the subblocks of the `/8` space.
Let's say, for example, +that you happen to know that all your hosts in the VPN are bunched up in the first +half of the space -- `10.0.0.0/9` (and that you know that any new hosts will +only be assigned IP addresses from the `/9` block). In this case, you +can configure Telepresence to override the other half of this CIDR block, which is where the +services and pods happen to be. +To do this, all you have to do is configure the `client.routing.allowConflictingSubnets` flag +in the Telepresence Helm chart. You can do this directly via `telepresence helm upgrade`: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}" +``` + +You can also choose to be more specific and only allow the CIDRs that you KNOW +are in use by the cluster: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}" +``` + +The end result of this (assuming an allow list of `/9`) will be a configuration like this: + +![VPN Telepresence](../images/vpn-with-tele.jpg) +
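Assuming the allow list covers the cluster's subnets, reconnecting should then no longer report the conflict (the output below is abbreviated and illustrative):

```console
$ telepresence quit
$ telepresence connect
Connected to context default (https://<cluster-ip>)
```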
diff --git a/docs/telepresence/2.15/release-notes/no-ssh.png b/docs/telepresence/2.15/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.15/release-notes/run-tp-in-docker.png b/docs/telepresence/2.15/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.2.png b/docs/telepresence/2.15/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and 
b/docs/telepresence/2.15/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and 
b/docs/telepresence/2.15/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.15/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.6-help-text.png 
differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.15/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.15/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.15/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.15/release-notes/tunnel.jpg b/docs/telepresence/2.15/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.15/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.15/releaseNotes.yml b/docs/telepresence/2.15/releaseNotes.yml new file mode 100644 index 000000000..f078704c3 --- /dev/null +++ b/docs/telepresence/2.15/releaseNotes.yml @@ -0,0 +1,2425 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
+ +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.15.1 + date: "2023-09-08" + notes: + - type: security + title: Rebuild with go 1.21.1 + body: >- + Rebuild Telepresence with go 1.21.1 to address CVEs. + - type: security + title: Set security context for traffic agent + body: >- + OpenShift users reported that the traffic agent injection was failing due to a missing security context; one is now set. + - version: 2.15.0 + date: "2023-08-28" + notes: + - type: change + title: When logging out you will now automatically be disconnected + body: >- + With the change of always being required to log in for Telepresence commands, you will now be disconnected from + any existing sessions when logging out. + - type: feature + title: Add ASLR to binaries not in docker + body: >- + Addresses a penetration-test finding. + docs: https://github.com/datawire/telepresence2-proprietary/issues/315 + - type: bugfix + title: Ensure that the x-telepresence-intercept-id header is read-only. + body: >- + The system assumes that the x-telepresence-intercept-id header contains the ID of the intercept + when it is present, and attempts to redefine it will now result in an error instead of causing a malfunction + when using preview URLs. + - type: bugfix + title: Fix parsing of multiple --http-header arguments + body: >- + An intercept using multiple header flags, e.g. --http-header a=b --http-header x=y, would assemble + them incorrectly into one header as --http-header a=b,x=y, which was then interpreted as a match + for the header a with value b,x=y. + - type: bugfix + title: Fixed bug in telepresence status when apikey login fails + body: >- + When the docker-desktop extension issued a telepresence status command with an expired or invalid apikey, the + extension would get stuck in an authentication loop. This bug has been fixed. + - version: 2.14.2 + date: "2023-07-26" + notes: + - type: feature + title: Incorporation of the latest version of Telepresence. + body: >- + A new version of Telepresence OSS was published. + - version: 2.14.1 + date: "2023-07-07" + notes: + - type: feature + title: More flexible templating in the Intercept Specification. + body: >- + The Sprig template functions can now be used + in many unconstrained fields of an Intercept Specification, such as environments, arguments, scripts, + commands, and intercept headers. + - type: bugfix + title: User daemon would panic during connect + body: >- + An attempt to connect on a host where no login has ever been made could cause the user daemon to panic. + - version: 2.14.0 + date: "2023-06-12" + notes: + - type: feature + title: Telepresence with Docker Compose + body: >- + Telepresence is now integrated with Docker Compose. You can now use a compose file as an Intercept Handler in your Intercept Specifications to utilize your local dev stack alongside an intercept. + docs: reference/with-compose + - type: feature + title: Added the ability to exclude environment variables + body: >- + You can now configure your traffic-manager to exclude certain environment variables from being propagated to your local environment while doing an intercept. + docs: reference/cluster-config#excluding-envrionment-variables + - type: change + title: Routing conflict reporting. + body: >- + Telepresence will now attempt to detect and report routing conflicts with other running VPN software on client machines.
+ There is a new configuration flag that can be tweaked to allow certain CIDRs to be overridden by Telepresence. + docs: reference/vpn + - type: change + title: Migration of Pod Daemon to the proprietary version of Telepresence + body: >- + Pod Daemon has been successfully integrated with the most recent proprietary version of Telepresence. This allows users to leverage the datawire/telepresence image for their deployment previews, streamlining the process and improving the efficiency of deployment preview scenarios. + docs: ci/pod-daemon + + - version: 2.13.3 + date: "2023-05-25" + notes: + - type: feature + title: Add imagePullSecrets to hooks + body: >- + Add imagePullSecrets values for the Helm chart hooks (e.g. .Values.hooks.curl.imagePullSecrets). + docs: https://github.com/telepresenceio/telepresence/pull/3079 + + - type: change + title: Change reinvocation policy to IfNeeded for the mutating webhook + body: >- + The default setting of the reinvocationPolicy for the mutating webhook dealing with agent injections changed from Never to IfNeeded. + + - type: bugfix + title: Fix mounting failure of IAM roles for service accounts web identity token + body: >- + The eks.amazonaws.com/serviceaccount volume injected by EKS is now exported and remotely mounted during an intercept. + docs: https://github.com/telepresenceio/telepresence/issues/3166 + + - type: bugfix + title: Correct namespace selector for cluster versions with non-numeric characters + body: >- + The mutating webhook now correctly applies the namespace selector even if the cluster version contains non-numeric characters. For example, it can now handle versions such as Major:"1", Minor:"22+". + docs: https://github.com/telepresenceio/telepresence/pull/3184 + + - type: bugfix + title: Enable IPv6 on the telepresence docker network + body: >- + The "telepresence" Docker network will now propagate DNS AAAA queries to the Telepresence DNS resolver when it runs in a Docker container. + docs: https://github.com/telepresenceio/telepresence/issues/3179 + + - type: bugfix + title: Fix the crash when intercepting with --local-only and --docker-run + body: >- + Running telepresence intercept --local-only --docker-run no longer results in a panic. + docs: https://github.com/telepresenceio/telepresence/issues/3171 + + - type: bugfix + title: Fix incorrect error message with local-only mounts + body: >- + Running telepresence intercept --local-only --mount false no longer results in an incorrect error message saying "a local-only intercept cannot have mounts". + docs: https://github.com/telepresenceio/telepresence/issues/3171 + + - type: bugfix + title: Specify port in hook URLs + body: >- + The Helm chart now correctly handles a custom agentInjector.webhook.port that was previously not being set in hook URLs. + docs: https://github.com/telepresenceio/telepresence/pull/3161 + + - type: bugfix + title: Fix wrong default value for disableGlobal and agentArrival + body: >- + Params .intercept.disableGlobal and .timeouts.agentArrival are now correctly honored. + + - version: 2.13.2 + date: "2023-05-12" + notes: + - type: bugfix + title: Authenticator Service Update + body: >- + Replaced / characters with a - when the authenticator service creates the kubeconfig in the Telepresence cache. + docs: https://github.com/telepresenceio/telepresence/pull/3167 + + - type: bugfix + title: Enhanced DNS Search Path Configuration for Windows (Auto, PowerShell, and Registry Options) + body: >- + Configurable strategy (auto, powershell,
or registry) to set the global DNS search path on Windows. The default is auto, which means trying powershell first and, if it fails, falling back to registry. + docs: https://github.com/telepresenceio/telepresence/pull/3154 + + - type: feature + title: Configurable Traffic Manager Timeout in values.yaml + body: >- + The timeout for the traffic manager to wait for the traffic agent to arrive can now be configured in the values.yaml file using timeouts.agentArrival. The default timeout is still 30 seconds. + docs: https://github.com/telepresenceio/telepresence/pull/3148 + + - type: bugfix + title: Enhanced Local Cluster Discovery for macOS and Windows + body: >- + The automatic discovery of a local container-based cluster (minikube or kind), used when the Telepresence daemon runs in a container, now works on macOS and Windows, and with different profiles, ports, and cluster names. + docs: https://github.com/telepresenceio/telepresence/pull/3165 + + - type: bugfix + title: FTP Stability Improvements + body: >- + Multiple simultaneous intercepts can transfer large files bidirectionally and in parallel. + docs: https://github.com/telepresenceio/telepresence/pull/3157 + + - type: bugfix + title: Intercepted Persistent Volume Pods No Longer Cause Timeouts + body: >- + Pods using persistent volumes no longer cause timeouts when intercepted. + docs: https://github.com/telepresenceio/telepresence/pull/3157 + + - type: bugfix + title: Successful 'Telepresence Connect' Regardless of DNS Configuration + body: >- + Ensure that `telepresence connect` succeeds even though DNS isn't configured correctly. + docs: https://github.com/telepresenceio/telepresence/pull/3154 + + - type: bugfix + title: Traffic-Manager's 'Close of Closed Channel' Panic Issue + body: >- + The traffic-manager would sometimes panic with a "close of closed channel" message and exit. + docs: https://github.com/telepresenceio/telepresence/pull/3160 + + - type: bugfix + title: Traffic-Manager's Type Cast Panic Issue + body: >- + The traffic-manager would sometimes panic and exit after some time due to a type cast panic. + docs: https://github.com/telepresenceio/telepresence/pull/3153 + + - type: bugfix + title: Login Friction + body: >- + Improve login behavior by clearing the saved intermediary API Keys when a user logs in, to force Telepresence to generate new ones. + + - version: 2.13.1 + date: "2023-04-20" + notes: + - type: change + title: Update ambassador-telepresence-agent to version 1.13.13 + body: >- + The Ambassador Telepresence Agent is updated to version 1.13.13 to fix a malfunction caused by an earlier update that compressed the executable file. + + - version: 2.13.0 + date: "2023-04-18" + notes: + - type: feature + title: Better kind / minikube network integration with docker + body: >- + The Docker network used by a Kind or Minikube (using the "docker" driver) installation is automatically detected and connected to a Docker container running the Telepresence daemon. + docs: https://github.com/telepresenceio/telepresence/pull/3104 + + - type: feature + title: New mapped namespace output + body: >- + Mapped namespaces are included in the output of the telepresence status command. + + - type: feature + title: Setting of the target IP of the intercept + docs: reference/intercepts/cli + body: >- + There's a new --address flag for the intercept command, allowing users to set the target IP of the intercept.
+ + - type: feature + title: Multi-tenant support + body: >- + The client will no longer need cluster-wide permissions when connected to a namespace-scoped Traffic Manager. + + - type: bugfix + title: Cluster domain resolution bugfix + body: >- + The Traffic Manager now uses a fail-proof way to determine the cluster domain. + docs: https://github.com/telepresenceio/telepresence/issues/3114 + + - type: bugfix + title: Windows DNS + body: >- + DNS on Windows is more reliable and performant. + docs: https://github.com/telepresenceio/telepresence/issues/2939 + + - type: bugfix + title: Agent injection with a huge number of deployments + body: >- + The agent is now correctly injected even with a high number of deployments starting at the same time. + docs: https://github.com/telepresenceio/telepresence/issues/3025 + + - type: bugfix + title: Self-contained kubeconfig with Docker + body: >- + The kubeconfig is made self-contained before running the Telepresence daemon in a Docker container. + docs: https://github.com/telepresenceio/telepresence/issues/3099 + + - type: bugfix + title: Version command error + body: >- + The version command won't throw an error anymore if there is no kubeconfig file defined. + docs: https://github.com/telepresenceio/telepresence/issues/3095 + + - type: change + title: Intercept Spec CRD v1alpha1 deprecated + body: >- + Please use version v1alpha2 of the Intercept Specification CRD. + + - version: 2.12.2 + date: "2023-04-04" + notes: + - type: security + title: Update Golang build version to 1.20.3 + body: >- + Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538. + - version: 2.12.1 + date: "2023-03-22" + notes: + - type: feature + title: Additions to gather-logs + body: >- + Telepresence now includes the kubeauth logs when running + the gather-logs command. + - type: bugfix + title: Airgapped Clusters can once again create personal intercepts + body: >- + Telepresence on airgapped clusters regained the ability to use the + skipLogin config option to bypass login and create personal intercepts. + - type: bugfix + title: Environment Variables are now propagated to kubeauth + body: >- + Telepresence now propagates environment variables properly + to the kubeauth-foreground, to be used with cluster authentication. + - version: 2.12.0 + date: "2023-03-20" + notes: + - type: feature + title: Intercept spec can build images from source + body: >- + Handlers in the Intercept Specification can now specify a build property instead of an image so that + the image is built when the spec runs. + docs: reference/intercepts/specs#build + - type: feature + title: Improve volume mount experience for Windows and Mac users + body: >- + On macOS and Windows platforms, the installation of sshfs or platform-specific FUSE implementations such as macFUSE or WinFSP is + no longer needed when running an Intercept Specification that uses docker images. + docs: reference/intercepts/specs + - type: feature + title: Check for service connectivity independently from pod connectivity + body: >- + Telepresence now enables you to check for a service and pod's connectivity independently, so that it can proxy one without proxying the other. + docs: https://github.com/telepresenceio/telepresence/issues/2911 + - type: bugfix + title: Fix cluster authentication when running the telepresence daemon in a docker container.
+ body: >- + Authentication to EKS and GKE clusters has been fixed (k8s >= v1.26). + docs: https://github.com/telepresenceio/telepresence/pull/3055 + - type: bugfix + title: The Intercept spec image pattern now allows nested and sha256 images. + body: >- + Telepresence Intercept Specifications now handle passing nested images or the sha256 of an image. + docs: https://github.com/telepresenceio/telepresence/issues/3064 + - type: bugfix + body: >- + Telepresence will no longer panic when a CNAME does not contain .svc in it. + title: Fix panic when CNAME of kubernetes.default doesn't contain .svc + docs: https://github.com/telepresenceio/telepresence/issues/3015 + - version: 2.11.1 + date: "2023-02-27" + notes: + - type: bugfix + title: Multiple architectures + docs: https://github.com/telepresenceio/telepresence/issues/3043 + body: >- + The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now + works for both amd64 and arm64. + - type: bugfix + title: Ambassador agent Helm chart duplicates + docs: https://github.com/telepresenceio/telepresence/issues/3046 + body: >- + Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD. + - version: 2.11.0 + date: "2023-02-22" + notes: + - type: feature + title: Intercept specification + body: >- + It is now possible to leverage the intercept specification to spin up your environment without extra tools. + - type: feature + title: Support for arm64 (Apple Silicon) + body: >- + The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as + multi-architecture images and can run natively on both linux/amd64 and linux/arm64. + - type: bugfix + title: Connectivity check can break routing in VPN setups + docs: https://github.com/telepresenceio/telepresence/issues/3006 + body: >- + The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently, + it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity. + - type: bugfix + title: VPN routes not detected by telepresence test-vpn on macOS + docs: https://github.com/telepresenceio/telepresence/pull/3038 + body: >- + The telepresence test-vpn did not include routes of type link when checking for subnet + conflicts. + - version: 2.10.5 + date: "2023-02-06" + notes: + - type: change + title: mTLS secrets mount + body: >- + mTLS Secrets will now be mounted into the traffic agent, instead of being expected to be read by it from the API. + This is only applicable to users of team mode and the proprietary agent. + docs: reference/cluster-config#tls + - type: bugfix + title: Daemon reconnection fix + body: >- + Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost. + - version: 2.10.4 + date: "2023-01-20" + notes: + - type: bugfix + title: Backward compatibility restored + body: >- + Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older. + - type: bugfix + title: Saved intercepts now work with preview URLs. + body: >- + Preview URLs are now included/excluded correctly when using saved intercepts. + - version: 2.10.3 + date: "2023-01-17" + notes: + - type: bugfix + title: Saved intercepts + body: >- + Fixed an issue that caused saved intercepts to not be completely interpreted by Telepresence.
- type: bugfix + title: Traffic manager restart during upgrade to team mode + body: >- + Fixed an issue that caused the traffic manager to be redeployed after an upgrade to team mode. + docs: https://github.com/telepresenceio/telepresence/pull/2979 + - version: 2.10.2 + date: "2023-01-16" + notes: + - type: bugfix + title: Version consistency in Helm commands + body: >- + Ensure that CLI and user-daemon binaries are the same version when running
telepresence helm install + or telepresence helm upgrade. + docs: https://github.com/telepresenceio/telepresence/pull/2975 + - type: bugfix + title: Saved intercept flag + body: >- + Fixed an issue that prevented the --use-saved-intercept flag from working. + - version: 2.10.1 + date: "2023-01-11" + notes: + - type: bugfix + title: Release Process + body: >- + Fixed a regex in our release process that prevented 2.10.0 promotion. + - version: 2.10.0 + date: "2023-01-11" + notes: + - type: feature + title: Team Mode and Single User Mode + body: >- + The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts. + - type: feature + title: Added `install` and `upgrade` Subcommands to `telepresence helm` + body: >- + The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags. + - type: feature + title: Added Image Pull Secrets to Helm Chart + body: >- + Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`. + - type: change + title: Rename Configmap + body: >- + The configmap `traffic-manager-clients` has been renamed to `traffic-manager`. + - type: change + title: Webhook Namespace Field + body: >- + If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`. + docs: https://github.com/telepresenceio/telepresence/issues/2913 + - type: change + title: Rename Webhook + body: >- + The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster. + - type: change + title: OSS Binaries + body: >- + The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client. + docs: https://github.com/telepresenceio/telepresence/pull/2943 + - type: bugfix + title: Fix Panic Using `--docker-run` + body: >- + Telepresence no longer panics when `--docker-run` is combined with `--name <name>` instead of `--name=<name>`. + docs: https://github.com/telepresenceio/telepresence/issues/2953 + - type: bugfix + title: Stop assuming cluster domain + body: >- + Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc". + docs: https://github.com/telepresenceio/telepresence/pull/2959 + - type: bugfix + title: Uninstall hook timeout + body: >- + A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager. + docs: https://github.com/telepresenceio/telepresence/pull/2937 + - type: bugfix + title: Uninstall hook check + body: >- + The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager.
docs: https://github.com/telepresenceio/telepresence/issues/2954 + - version: 2.9.5 + date: "2022-12-08" + notes: + - type: security + title: Update to golang v1.19.4 + body: >- + Apply security updates by updating to golang v1.19.4. + docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU + - type: bugfix + title: GCE authentication + body: >- + Fixed a regression, introduced in 2.9.3, that prevented the use of GCE authentication without also having a config element present in the gce configuration in the kubeconfig. + - version: 2.9.4 + date: "2022-12-02" + notes: + - type: feature + title: Subnet detection strategy + body: >- + The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs. + - type: bugfix + title: Fix `--set` flag for `telepresence helm install` + body: >- + The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag. + - type: bugfix + title: Fix `agent.image` setting propagation + body: >- + Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file. + - type: bugfix + title: Delay file sharing until needed + body: >- + Initialization of FTP-type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected. + - type: bugfix + title: Cleanup on `telepresence quit` + body: >- + The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called. + - type: bugfix + title: Watch `config.yml` without panic + body: >- + The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active. + - type: bugfix + title: Thread safety + body: >- + Fix a race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession. + - version: 2.9.3 + date: "2022-11-23" + notes: + - type: feature + title: Helm options for `livenessProbe` and `readinessProbe` + body: >- + The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond. + - type: change + title: Improved network communication + body: >- + The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon. + - type: bugfix + title: Root daemon debug logging + body: >- + Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon. + - type: bugfix + title: Multivalue flag value propagation + body: >- + Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly. + - type: bugfix + title: Root daemon stability + body: >- + The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. + - type: bugfix + title: Base DNS resolver + body: >- + Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied. + - version: 2.9.2 + date: "2022-11-16" + notes: + - type: bugfix + title: Fix panic + body: >- + Fix a panic when connecting to an older traffic-manager. + - type: bugfix + title: Fix header flag + body: >- + Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
- version: 2.9.1 + date: "2022-11-16" + notes: + - type: bugfix + title: Connect failures due to missing auth provider. + body: >- + The regression in 2.9.0 that caused a `no Auth Provider found for name "gcp"` error when connecting was fixed. + - version: 2.9.0 + date: "2022-11-15" + notes: + - type: feature + title: New command to view client configuration. + body: >- + A new telepresence config view was added to make it easy to view the current + client configuration. + docs: new-in-2.9#view-the-client-configuration + - type: feature + title: Configure Clients using the Helm chart. + body: >- + The traffic-manager can now configure all clients that connect through the client: map in + the values.yaml file. + docs: reference/cluster-config#client-configuration + - type: feature + title: The Traffic manager version is more visible. + body: >- + The command telepresence version will now include the version of the traffic manager when + the client is connected to a cluster. + - type: feature + title: Command output in YAML format. + body: >- + The global --output flag now accepts both yaml and json. + docs: new-in-2.9#yaml-output + - type: change + title: Deprecated status command flag + body: >- + The telepresence status --json flag is deprecated. Use telepresence status --output=json instead. + - type: bugfix + title: Unqualified service name resolution in docker. + body: >- + Unqualified service names now resolve OK from the docker container when using telepresence intercept --docker-run. + docs: https://github.com/telepresenceio/telepresence/issues/2870 + - type: bugfix + title: Output no longer mixes plaintext and json. + body: >- + Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon", + or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted + output when using --output=json. + docs: https://github.com/telepresenceio/telepresence/issues/2854 + - type: bugfix + title: No more panic when invalid port names are detected. + body: >- + A `telepresence intercept` of services with an invalid port name no longer causes a panic. + docs: https://github.com/telepresenceio/telepresence/issues/2880 + - type: bugfix + title: Proper errors for bad output formats. + body: >- + An attempt to use an invalid value for the global --output flag now renders a proper error message. + - type: bugfix + title: Remove lingering DNS config on macOS. + body: >- + Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are + now removed when a new root daemon starts. + - version: 2.8.5 + date: "2022-11-02" + notes: + - type: security + title: CVE-2022-41716 + body: >- + Updated Golang to 1.19.3 to address CVE-2022-41716. + - version: 2.8.4 + date: "2022-11-02" + notes: + - type: bugfix + title: Release Process + body: >- + This release resulted in changes to our release process. + - version: 2.8.3 + date: "2022-10-27" + notes: + - type: feature + title: Ability to disable global intercepts. + body: >- + Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal. + docs: https://github.com/telepresenceio/telepresence/issues/2140 + - type: feature + title: Configurable mutating webhook port + body: >- + The port used for the mutating webhook can be configured using the Helm chart setting + agentInjector.webhook.port.
+ docs: install/helm + - type: change + title: Mutating webhook port defaults to 443 + body: >- + The default port for the mutating webhook is now 443. It used to be 8443. + - type: change + title: Agent image configuration mandatory in air-gapped environments. + body: >- + The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is + unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart. + - type: bugfix + title: Can now connect to non-helm installs + body: >- + telepresence connect now works as long as the traffic manager is installed, even if + it wasn't installed via helm install. + docs: https://github.com/telepresenceio/telepresence/issues/2824 + - type: bugfix + title: check-vpn crash fixed + body: >- + telepresence check-vpn no longer crashes when the daemons don't start properly. + - version: 2.8.2 + date: "2022-10-15" + notes: + - type: bugfix + title: Reinstate 2.8.0 + body: >- + There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated. + - version: 2.8.1 + date: "2022-10-14" + notes: + - type: bugfix + title: Rollback 2.8.0 + body: >- + Rollback 2.8.0 while we investigate an issue with Ambassador Cloud. + - version: 2.8.0 + date: "2022-10-14" + notes: + - type: feature + title: Improved DNS resolver + body: >- + The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME, + MX, NS, PTR, SRV, and TXT. + docs: reference/dns + - type: feature + title: New `client` structure in Helm chart + body: >- + A new client struct was added to the Helm chart. It contains a connectionTTL that controls + how long the traffic manager will retain a client connection without seeing any sign of life from the client. + docs: reference/cluster-config#Client-Configuration + - type: feature + title: Include and exclude suffixes configurable using the Helm chart. + body: >- + A dns element was added to the client struct in the Helm chart. It contains includeSuffixes and + excludeSuffixes values that control which names the DNS resolver in the client will delegate to + the cluster. + docs: reference/cluster-config#DNS + - type: feature + title: Configurable traffic-manager API port + body: >- + The API port used by the traffic-manager is now configurable using the Helm chart value apiPort. + The default port is 8081. + docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence + - type: feature + title: Envoy server and admin port configuration. + body: >- + A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and + admin port of the Envoy proxy running in the enhanced traffic-agent can be configured. + docs: reference/cluster-config#Envoy-Configuration + - type: change + title: Helm chart `dnsConfig` moved to `client.routing`. + body: >- + The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets + and neverProxySubnets can now be found under routing in the client struct. + docs: reference/cluster-config#Routing + - type: change + title: Helm chart `agentInjector.agentImage` moved to `agent.image`. + body: >- + The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but + retained for backward compatibility.
+ docs: reference/cluster-config#Image-Configuration + - type: change + title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`. + body: >- + The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old + value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Application-Protocol-Selection + - type: change + title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed. + body: >- + The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because + they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to + dedicate an IP for its local resolver. + - type: change + title: Quit daemons with `telepresence quit -s` + body: >- + The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option `-s`, which will + quit both the root daemon and the user daemon. + - type: bugfix + title: Environment variable interpolation in pods now works. + body: >- + Environment variable interpolation now works for all definitions that are copied from pod containers + into the injected traffic-agent container. + - type: bugfix + title: Early detection of namespace conflict + body: >- + An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation + is detected early and prohibited instead of resulting in failing DNS lookups later on. + - type: bugfix + title: Annoying log message removed + body: >- + Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason + is normal context cancellation. + - type: bugfix + title: Single name DNS resolution in Docker on Linux host + body: >- + Single-label names now resolve correctly when using Telepresence in Docker on a Linux host. + - type: bugfix + title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`. + body: >- + The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStategy). + - version: 2.7.6 + date: "2022-09-16" + notes: + - type: feature + title: Helm chart resource entries for injected agents + body: >- + The resources for the traffic-agent container and the optional init container can be + specified in the Helm chart using the resources and initResource fields + of the agentInjector.agentImage. + - type: feature + title: Cluster event propagation when injection fails + body: >- + When the traffic-manager fails to inject a traffic-agent, the cause for the failure is + detected by reading the cluster events, and propagated to the user. + - type: feature + title: FTP-client instead of sshfs for remote mounts + body: >- + Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running + an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x + and enabled by setting intercept.useFtp to true in the config.yml. + - type: change + title: Upgrade of winfsp + body: >- + Telepresence on Windows upgraded winfsp from version 1.10 to 1.11. + - type: bugfix + title: Removal of invalid warning messages + body: >- + Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo + and /proc/self/auxv.
+ - version: 2.7.5
+   date: "2022-09-14"
+   notes:
+   - type: change
+     title: Rollback of release 2.7.4
+     body: >-
+       This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3.
+ - version: 2.7.4
+   date: "2022-09-14"
+   notes:
+   - type: change
+     body: >-
+       This release was broken on some platforms. Use 2.7.6 instead.
+ - version: 2.7.3
+   date: "2022-09-07"
+   notes:
+   - type: bugfix
+     title: PTY for CLI commands
+     body: >-
+       CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+       docker run -it to allocate a TTY, and also gives other commands, like bash read, the
+       same behavior as when executed directly in a terminal.
+     docs: https://github.com/telepresenceio/telepresence/issues/2724
+   - type: bugfix
+     title: Traffic Manager useless warning silenced
+     body: >-
+       The traffic-manager will no longer log numerous warnings saying Issuing a
+       systema request without ApiKey or InstallID may result in an error.
+   - type: bugfix
+     title: Traffic Manager useless error silenced
+     body: >-
+       The traffic-manager will no longer log an error saying Unable to derive subnets
+       from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+       subnets from the pod IPs.
+ - version: 2.7.2
+   date: "2022-08-25"
+   notes:
+   - type: feature
+     title: Autocompletion scripts
+     body: >-
+       Autocompletion scripts can now be generated with telepresence completion SHELL, where SHELL can be bash, zsh, fish, or powershell.
+   - type: feature
+     title: Connectivity check timeout
+     body: >-
+       The timeout for the initial connectivity check that Telepresence performs
+       in order to determine whether the cluster's subnets are proxied can now be configured
+       in the config.yml file using timeouts.connectivityCheck. The default timeout was
+       changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+     docs: reference/config#timeouts
+   - type: change
+     title: gather-traces feedback
+     body: >-
+       The command telepresence gather-traces now prints out a message on success.
+     docs: troubleshooting#distributed-tracing
+   - type: change
+     title: upload-traces feedback
+     body: >-
+       The command telepresence upload-traces now prints out a message on success.
+     docs: troubleshooting#distributed-tracing
+   - type: change
+     title: gather-traces tracing
+     body: >-
+       The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+     docs: troubleshooting#distributed-tracing
+   - type: change
+     title: CLI log level
+     body: >-
+       The cli.log file is now logged at the same level as the connector.log.
+     docs: reference/config#log-levels
+   - type: bugfix
+     title: Telepresence --help fixed
+     body: >-
+       telepresence --help now works once more, even if there's no user daemon running.
+     docs: https://github.com/telepresenceio/telepresence/issues/2735
+   - type: bugfix
+     title: Stream cancellation when no process intercepts
+     body: >-
+       Streams created between the traffic-agent and the workstation are now properly closed
+       when no interceptor process has been started on the workstation. This fixes a potential problem where
+       a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+       and an unresponsive intercept.
+   - type: bugfix
+     title: List command excludes the traffic-manager
+     body: >-
+       The telepresence list command no longer includes the traffic-manager deployment.
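+ # Illustrative sketch (not part of the release notes): tuning the 2.7.2 connectivity
+ # check described above via config.yml. The key name is from the note; the value shown
+ # is the documented default.
+ #
+ #   timeouts:
+ #     connectivityCheck: 500ms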
+ - version: 2.7.1
+   date: "2022-08-10"
+   notes:
+   - type: change
+     title: Reinstate telepresence uninstall
+     body: >-
+       Reinstated telepresence uninstall, with the --everything flag deprecated.
+   - type: change
+     title: Reduce telepresence helm uninstall
+     body: >-
+       telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+   - type: bugfix
+     title: Auto-connect for telepresence intercept
+     body: >-
+       telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+ - version: 2.7.0
+   date: "2022-08-07"
+   notes:
+   - type: feature
+     title: Saved Intercepts
+     body: >-
+       Create telepresence intercepts based on existing Saved Intercept configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME.
+     docs: reference/intercepts#sharing-intercepts-with-teammates
+   - type: feature
+     title: Distributed Tracing
+     body: >-
+       The Telepresence components now collect OpenTelemetry traces.
+       Up to 10MB of trace data are available at any given time for collection from
+       components. telepresence gather-traces is a new command that will collect
+       all that data and place it into a gzip file, and telepresence upload-traces is
+       a new command that will push the gzipped data into an OTLP collector.
+     docs: troubleshooting#distributed-tracing
+   - type: feature
+     title: Helm install
+     body: >-
+       A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+     docs: install/manager
+   - type: feature
+     title: Ignore Volume Mounts
+     body: >-
+       The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
+   - type: feature
+     title: telepresence pod-daemon
+     body: >-
+       The Docker image now contains a new program in addition to
+       the existing traffic-manager and traffic-agent: the pod-daemon. The
+       pod-daemon is a trimmed-down version of the user-daemon that is
+       designed to run as a sidecar in a Pod, enabling CI systems to create
+       preview deploys.
+   - type: feature
+     title: Prometheus support for traffic manager
+     body: >-
+       Added Prometheus support to the traffic manager.
+   - type: change
+     title: No install on telepresence connect
+     body: >-
+       The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+     docs: install/manager
+   - type: change
+     title: Helm Uninstall
+     body: >-
+       The command telepresence uninstall has been moved to telepresence helm uninstall.
+     docs: install/manager
+   - type: bugfix
+     title: readOnlyRootFileSystem mounts work
+     body: >-
+       Added an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`.
+     docs: https://github.com/telepresenceio/telepresence/pull/2666
+ - version: 2.6.8
+   date: "2022-06-23"
+   notes:
+   - type: feature
+     title: Specify Your DNS
+     body: >-
+       The name and namespace of the DNS Service that the traffic-manager uses for DNS auto-detection can now be specified.
+   - type: feature
+     title: Specify a Fallback DNS
+     body: >-
+       Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
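+   # Hypothetical pod spec fragment (for illustration only): using the 2.7.0
+   # inject-ignore-volume-mounts annotation named above; the volume names are made up.
+   #
+   #   metadata:
+   #     annotations:
+   #       telepresence.getambassador.io/inject-ignore-volume-mounts: "cache,scratch"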
+   - type: feature
+     title: Intercept UDP Ports
+     body: >-
+       It is now possible to intercept UDP ports with Telepresence, and also to use --to-pod to forward UDP traffic from ports on localhost.
+   - type: change
+     title: Additional Helm Values
+     body: >-
+       The Helm chart will now add the nodeSelector, affinity, and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+   - type: bugfix
+     title: Agent Injection Bugfix
+     body: >-
+       Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+ - version: 2.6.7
+   date: "2022-06-22"
+   notes:
+   - type: bugfix
+     title: Persistent Sessions
+     body: >-
+       The Telepresence client will remember and reuse the traffic-manager session after a network failure or any other cause of an unclean disconnect.
+   - type: bugfix
+     title: DNS Requests
+     body: >-
+       Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+   - type: bugfix
+     title: Graceful Shutdown
+     body: >-
+       The traffic-agent will properly shut down if one of its goroutines errors.
+ - version: 2.6.6
+   date: "2022-06-09"
+   notes:
+   - type: bugfix
+     title: Env Var `TELEPRESENCE_API_PORT`
+     body: >-
+       The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
+   - type: bugfix
+     title: Double Printing `--output json`
+     body: >-
+       The --output json global flag no longer outputs multiple objects.
+ - version: 2.6.5
+   date: "2022-06-03"
+   notes:
+   - type: feature
+     title: Helm Option -- `reinvocationPolicy`
+     body: >-
+       The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
+     docs: install/helm
+   - type: feature
+     title: Helm Option -- Proxy Certificate
+     body: >-
+       The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
+     docs: install/helm
+   - type: feature
+     title: Helm Option -- Agent Injection
+     body: >-
+       A policy that controls when the mutating webhook injects the traffic-agent was added, and it can be configured in the Helm chart.
+     docs: install/helm
+   - type: change
+     title: Windows Tunnel Version Upgrade
+     body: >-
+       Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+   - type: change
+     title: Helm Version Upgrade
+     body: >-
+       Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+   - type: change
+     title: Kubernetes API Version Upgrade
+     body: >-
+       Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+   - type: feature
+     title: Flag `--watch` Added to `list` Command
+     body: >-
+       Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
+   - type: change
+     title: Deprecated `images.webhookAgentImage`
+     body: >-
+       The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+   - type: bugfix
+     title: Default `reinvocationPolicy` Set to Never
+     body: >-
+       The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded, so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
+   - type: bugfix
+     title: UDP
+     body: >-
+       UDP-based communication with services in the cluster now works as expected.
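+   # Illustrative config.yml sketch (not part of the release notes): following the
+   # 2.6.5 deprecation above, the agent image is set with images.agentImage. The image
+   # reference is a placeholder and its exact format is an assumption.
+   #
+   #   images:
+   #     agentImage: tel2:2.6.5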
+   - type: bugfix
+     title: Telepresence `--help`
+     body: >-
+       The command help will only show Kubernetes flags on the commands that support them.
+   - type: change
+     title: Error Count
+     body: >-
+       Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+ - version: 2.6.4
+   date: "2022-05-23"
+   notes:
+   - type: bugfix
+     title: Upgrade RBAC Permissions
+     body: >-
+       The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+ - version: 2.6.3
+   date: "2022-05-20"
+   notes:
+   - type: bugfix
+     title: Relative Mount Paths
+     body: >-
+       The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+   - type: bugfix
+     title: Traffic Agent Config
+     body: >-
+       The traffic-agent's configuration updates automatically when services are added, updated, or deleted.
+   - type: bugfix
+     title: Container Injection for Numeric Ports
+     body: >-
+       Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+   - type: bugfix
+     title: Matching Services
+     body: >-
+       Workloads that have several matching services pointing to the same target port are now handled correctly.
+   - type: bugfix
+     title: Unexpected Panic
+     body: >-
+       A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+   - type: bugfix
+     title: Mount Volume Cleanup
+     body: >-
+       A container start would sometimes fail because an old directory remained in a mounted temp volume.
+ - version: 2.6.2
+   date: "2022-05-17"
+   notes:
+   - type: bugfix
+     title: Argo Injection
+     body: >-
+       Workloads controlled by higher-level workloads such as Argo Rollout are now injected correctly.
+   - type: bugfix
+     title: Agent Port Mapping
+     body: >-
+       Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+   - type: bugfix
+     title: GRPC Max Message Size
+     body: >-
+       The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+ - version: 2.6.1
+   date: "2022-05-16"
+   notes:
+   - type: bugfix
+     title: KUBECONFIG environment variable
+     body: >-
+       Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+   - type: bugfix
+     title: Don't Panic
+     body: >-
+       Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+ - version: 2.6.0
+   date: "2022-05-13"
+   notes:
+   - type: feature
+     title: Intercept multiple containers in a pod, and multiple ports per container
+     body: >-
+       Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+     docs: new-in-2.6#intercept-multiple-containers-and-ports
+   - type: feature
+     title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+     body: >-
+       The client will no longer modify deployments, replicasets, or statefulsets in order to
+       inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+       the client now needs fewer permissions in the cluster.
+     docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+   - type: change
+     title: Automatic upgrade of Traffic Agents
+     body: >-
+       When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+       The mutating webhook will then ensure that their pods receive an updated Traffic Agent.
+     docs: new-in-2.6#no-more-workload-modifications
+   - type: change
+     title: No default image in the Helm chart
+     body: >-
+       The Helm chart no longer has a default set for agentInjector.image.name, and unless it's set, the
+       traffic-manager will ask Ambassador Cloud for the preferred image.
+     docs: new-in-2.6#smarter-agent
+   - type: change
+     title: Upgrade to Helm version 3.8.1
+     body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+   - type: bugfix
+     title: Remote mounts will now function correctly with custom securityContext
+     body: >-
+       The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+   - type: bugfix
+     title: Improved presentation of flags in CLI help
+     body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+   - type: bugfix
+     title: Better termination of processes parented by an intercept
+     body: >-
+       Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+       When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+       Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+ - version: 2.5.8
+   date: "2022-04-27"
+   notes:
+   - type: bugfix
+     title: Folder creation on `telepresence login`
+     body: >-
+       Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+ - version: 2.5.7
+   date: "2022-04-25"
+   notes:
+   - type: change
+     title: RBAC requirements
+     body: >-
+       A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+   - type: bugfix
+     title: Windows DNS
+     body: >-
+       The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+   - type: bugfix
+     title: Session TTL and Reconnect
+     body: >-
+       A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+ - version: 2.5.6
+   date: "2022-04-18"
+   notes:
+   - type: change
+     title: Fewer Watchers
+     body: >-
+       The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect.
+   - type: bugfix
+     title: More Efficient `gather-logs`
+     body: >-
+       The gather-logs command will no longer send any logs through gRPC.
+ - version: 2.5.5
+   date: "2022-04-08"
+   notes:
+   - type: change
+     title: Traffic Manager Permissions
+     body: >-
+       The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions.
+   - type: bugfix
+     title: Linux DNS Cache
+     body: >-
+       The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+   - type: bugfix
+     title: Automatic Connect Sync
+     body: >-
+       The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
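+   # Illustrative usage (not from the release notes): the connect command's
+   # --mapped-namespaces flag referenced in the notes below; the namespace names are
+   # placeholders.
+   #
+   #   telepresence connect --mapped-namespaces dev,staging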
+   - type: bugfix
+     title: Disconnect Reconnect Stability
+     body: >-
+       The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+   - type: bugfix
+     title: Limit Watched Namespaces
+     body: >-
+       The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+   - type: bugfix
+     title: Limit Namespaces used in `gather-logs`
+     body: >-
+       The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - version: 2.5.4
+   date: "2022-03-29"
+   notes:
+   - type: bugfix
+     title: Linux DNS Concurrency
+     body: >-
+       The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+   - type: bugfix
+     title: Non-Functional Flag
+     body: >-
+       The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+   - type: bugfix
+     title: Automatically Remove Failed Intercepts
+     body: >-
+       Intercepts that fail to create are now consistently removed, to prevent non-working dangling intercepts from sticking around.
+   - type: bugfix
+     title: Agent UID
+     body: >-
+       The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+   - type: bugfix
+     title: Gather-Logs Output Filepath
+     body: >-
+       Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+   - type: change
+     title: Remove Unnecessary Error Advice
+     body: >-
+       The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+   - type: bugfix
+     title: Garbage Collection
+     body: >-
+       Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
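+   # Illustrative config.yml sketch (not from the release notes): raising the
+   # grpc.maxReceiveSize setting referenced in the notes below; the value format is an
+   # assumption.
+   #
+   #   grpc:
+   #     maxReceiveSize: 10Mi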
+   - type: bugfix
+     title: Limit Gathered Logs
+     body: >-
+       The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+   - type: change
+     title: In-Cluster Checks
+     body: >-
+       The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+   - type: change
+     title: Expanded Status Command
+     body: >-
+       The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+   - type: change
+     title: List Command Shows All Intercepts
+     body: >-
+       The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+ - version: 2.5.3
+   date: "2022-02-25"
+   notes:
+   - type: bugfix
+     title: TCP Connectivity
+     body: >-
+       Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+   - type: feature
+     title: Linux Binaries
+     body: >-
+       Client-side binaries for the arm64 architecture are now available for Linux.
+ - version: 2.5.2
+   date: "2022-02-23"
+   notes:
+   - type: bugfix
+     title: DNS server bugfix
+     body: >-
+       Fixed a bug where Telepresence would use the last server in resolv.conf.
+ - version: 2.5.1
+   date: "2022-02-19"
+   notes:
+   - type: bugfix
+     title: Fix GKE auth issue
+     body: >-
+       Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+ - version: 2.5.0
+   date: "2022-02-18"
+   notes:
+   - type: feature
+     title: Intercept specific endpoints
+     body: >-
+       The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+       addition to the --http-match flag to filter personal intercepts by the request URL path.
+     docs: concepts/intercepts#intercepting-a-specific-endpoint
+   - type: feature
+     title: Intercept metadata
+     body: >-
+       The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence rest
+       API endpoint /intercept-info.
+     docs: reference/restapi#intercept-info
+   - type: change
+     title: Client RBAC watch
+     body: >-
+       The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+       ClusterRole.
+     docs: reference/rbac
+   - type: change
+     title: Dropped backward compatibility with versions <=2.4.4
+     body: >-
+       Telepresence is no longer backward compatible with versions 2.4.4 or older, because the deprecated multiplexing tunnel
+       functionality was removed.
+   - type: change
+     title: No global networking flags
+     body: >-
+       The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+       command. The subcommands that support networking flags are connect, current-cluster-id,
+       and genyaml.
+   - type: bugfix
+     title: Output of status command
+     body: >-
+       The also-proxy and never-proxy subnets are now displayed correctly when using the
+       telepresence status command.
+   - type: bugfix
+     title: SETENV sudo privilege no longer needed
+     body: >-
+       Telepresence no longer requires SETENV privileges when starting the root daemon.
+   - type: bugfix
+     title: Network device names containing dash
+     body: >-
+       Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+   - type: bugfix
+     title: Linux uses cluster.local as domain instead of search
+     body: >-
+       The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+       systemd-resolved.
+       Instead, it is added as a domain, so that names ending with it are routed
+       to the DNS server.
+ - version: 2.4.11
+   date: "2022-02-10"
+   notes:
+   - type: change
+     title: Add additional logging to troubleshoot intermittent issues with intercepts
+     body: >-
+       We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+       with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+   date: "2022-01-13"
+   notes:
+   - type: feature
+     title: Application Protocol Strategy
+     body: >-
+       The strategy used when selecting the application protocol for personal intercepts can now be configured using
+       the intercept.appProtocolStrategy in the config.yml file.
+     docs: reference/config/#intercept
+     image: telepresence-2.4.10-intercept-config.png
+   - type: feature
+     title: Helm value for the Application Protocol Strategy
+     body: >-
+       The strategy for selecting the application protocol for personal intercepts in agents injected by the
+       mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+     docs: install/helm
+   - type: feature
+     title: New --http-plaintext option
+     body: >-
+       The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+       communicating with the workstation process.
+     docs: reference/intercepts/#tls
+   - type: feature
+     title: Configure the default intercept port
+     body: >-
+       The port used by default in the telepresence intercept command (8080) can now be changed by setting
+       the intercept.defaultPort in the config.yml file.
+     docs: reference/config/#intercept
+   - type: change
+     title: Telepresence CI now uses GitHub Actions
+     body: >-
+       Telepresence now uses GitHub Actions for its unit and integration testing. It is
+       now easier for contributors to run tests on PRs, since maintainers can add an
+       "ok to test" label to PRs (including from forks) to run integration tests.
+     docs: https://github.com/telepresenceio/telepresence/actions
+     image: telepresence-2.4.10-actions.png
+   - type: bugfix
+     title: Check conditions before asking questions
+     body: >-
+       Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+       made that the intercept is possible.
+     docs: reference/intercepts/
+   - type: bugfix
+     title: Fix invalid log statement
+     body: >-
+       Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+   - type: bugfix
+     title: Log errors from sshfs/sftp
+     body: >-
+       Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+       is now properly logged as errors.
+   - type: bugfix
+     title: Don't use Windows path separators in workload pod template
+     body: >-
+       The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+       traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+   date: "2021-12-09"
+   notes:
+   - type: bugfix
+     title: Helm upgrade nil pointer error
+     body: >-
+       A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+       telepresenceAPI value.
+     docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+   date: "2021-12-03"
+   notes:
+   - type: feature
+     title: VPN diagnostics tool
+     body: >-
+       There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+       See the VPN docs for more information on how to use it.
+     docs: reference/vpn
+     image: telepresence-2.4.8-vpn.png
+   - type: feature
+     title: RESTful API service
+     body: >-
+       A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+       help determine whether messages with a given set of headers should be consumed from a message queue in which the
+       intercept headers are added to the messages.
+     docs: reference/restapi
+     image: telepresence-2.4.8-health-check.png
+   - type: change
+     title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+     body: >-
+       You could previously configure this value, but there was no reason to change it, so the value
+       was removed.
+   - type: bugfix
+     title: Tunneled network connections behave more like ordinary TCP connections
+     body: >-
+       When using Telepresence with an external cloud provider for extensions, those tunneled
+       connections now behave more like TCP connections, especially when it comes to timeouts.
+       We've also added increased testing around these types of connections.
+ - version: 2.4.7
+   date: "2021-11-24"
+   notes:
+   - type: feature
+     title: Injector service-name annotation
+     body: >-
+       The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+       This helps disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+     docs: reference/cluster-config#service-name-annotation
+   - type: feature
+     title: Skip the Ingress Dialogue
+     body: >-
+       You can now skip the ingress dialogue by supplying the ingress parameters via the corresponding flags.
+     docs: reference/intercepts#skipping-the-ingress-dialogue
+   - type: feature
+     title: Never proxy subnets
+     body: >-
+       The kubeconfig extensions now support a never-proxy argument,
+       analogous to also-proxy, that defines a set of subnets that
+       will never be proxied via telepresence.
+     docs: reference/config#neverproxy
+   - type: change
+     title: Daemon versions check
+     body: >-
+       Telepresence now checks the versions of the client and the daemons, and asks the user to quit and restart if they don't match.
+   - type: change
+     title: No explicit DNS flushes
+     body: >-
+       Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+     docs: reference/routing#dns-caching
+   - type: bugfix
+     title: Legacy flags now work with global flags
+     body: >-
+       Legacy flags such as --swap-deployment can now be used together with global flags.
+   - type: bugfix
+     title: Outbound connection closing
+     body: >-
+       Outbound connections are now properly closed when the peer closes.
+   - type: bugfix
+     title: Prevent DNS recursion
+     body: >-
+       The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client).
+     docs: reference/routing#dns-recursion
+   - type: bugfix
+     title: Prevent network recursion
+     body: >-
+       The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+       cluster runs in a docker-container on the client).
+     docs: reference/routing#connect-recursion
+   - type: bugfix
+     title: Traffic Manager deadlock fix
+     body: >-
+       The Traffic Manager no longer runs the risk of entering a deadlock when a new Traffic Agent arrives.
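+   # Hedged sketch (not from the release notes): a kubeconfig cluster entry using the
+   # never-proxy extension described above. The exact nesting under the telepresence.io
+   # extension is an assumption, and the subnet is illustrative.
+   #
+   #   cluster:
+   #     server: https://example.cluster
+   #     extensions:
+   #     - name: telepresence.io
+   #       extension:
+   #         never-proxy:
+   #         - 10.12.0.0/16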
+   - type: bugfix
+     title: webhookRegistry config propagation
+     body: >-
+       The configured webhookRegistry is now propagated to the webhook installer, even if no webhookAgentImage has been set.
+     docs: reference/config#images
+   - type: bugfix
+     title: Login refreshes expired tokens
+     body: >-
+       When a user's token has expired, telepresence login
+       will prompt the user to log in again to get a new token. Previously,
+       the user had to telepresence quit and telepresence logout
+       to get a new token.
+     docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+   date: "2021-11-02"
+   notes:
+   - type: feature
+     title: Manually injecting Traffic Agent
+     body: >-
+       Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+       Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+     docs: reference/intercepts/manual-agent
+   - type: feature
+     title: Telepresence CLI released for Apple silicon
+     body: >-
+       Telepresence is now built and released for Apple silicon.
+     docs: install/?os=macos
+   - type: change
+     title: Telepresence help text now links to telepresence.io
+     body: >-
+       We now include a link to our documentation when you run telepresence --help. This will make it easier
+       for users to find this page, whether they acquire Telepresence through Brew or some other mechanism.
+     image: telepresence-2.4.6-help-text.png
+   - type: bugfix
+     title: Fixed bug when API server is inside CIDR range of pods/services
+     body: >-
+       If the API server for your Kubernetes cluster had an IP that fell within the
+       subnets generated from the cluster's pods/services, Telepresence would proxy traffic
+       to the API server, resulting in hangs or failed connections. We now ensure
+       that the API server is explicitly not proxied.
+ - version: 2.4.5
+   date: "2021-10-15"
+   notes:
+   - type: feature
+     title: Get pod yaml with gather-logs command
+     body: >-
+       Adding the flag --get-pod-yaml to your request will get the
+       pod yaml manifest for all Kubernetes components you are getting logs for
+       (traffic-manager and/or pods containing a
+       traffic-agent container). This flag is set to false
+       by default.
+     docs: reference/client
+     image: telepresence-2.4.5-pod-yaml.png
+   - type: feature
+     title: Anonymize pod name + namespace when using gather-logs command
+     body: >-
+       Adding the flag --anonymize to your command will
+       anonymize your pod names + namespaces in the output file. We replace the
+       sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+       relationships between the objects without exposing the real names of your
+       objects. This flag is set to false by default.
+     docs: reference/client
+     image: telepresence-2.4.5-logs-anonymize.png
+   - type: feature
+     title: Added context and defaults to ingress questions when creating a preview URL
+     body: >-
+       Previously, we referred to OSI model layers when asking these questions, but this
+       terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+     docs: howtos/preview-urls
+     image: telepresence-2.4.5-preview-url-questions.png
+   - type: feature
+     title: Support for intercepting headless services
+     body: >-
+       Intercepting headless services is now officially supported. You can request a
+       headless service on whatever port it exposes and get a response from the
+       intercept.
+       This leverages the same approach as intercepting numeric ports when
+       using the mutating webhook injector, and mainly requires the initContainer
+       to have NET_ADMIN capabilities.
+     docs: reference/intercepts/#intercepting-headless-services
+   - type: change
+     title: Use one tunnel per connection instead of multiplexing into one tunnel
+     body: >-
+       We have changed Telepresence so that it uses one tunnel per connection instead
+       of multiplexing all connections into one tunnel. This will provide substantial
+       performance improvements. Clients will still be backwards compatible with older
+       managers that only support multiplexing.
+   - type: bugfix
+     title: Added checks for Telepresence Kubernetes compatibility
+     body: >-
+       Telepresence currently works with Kubernetes server versions 1.17.0
+       and higher. We have added logs in the connector and traffic-manager
+       to let users know when they are using Telepresence with a cluster it doesn't support.
+     docs: reference/cluster-config
+   - type: bugfix
+     title: Traffic Agent security context is now only added when necessary
+     body: >-
+       When creating an intercept, Telepresence will now only set the traffic agent's GID
+       when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+       an issue on OpenShift clusters where the traffic agent could fail to be created due to
+       OpenShift's security policies banning arbitrary GIDs.
+ - version: 2.4.4
+   date: "2021-09-27"
+   notes:
+   - type: feature
+     title: Numeric ports in agent injector
+     body: >-
+       The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+     docs: reference/cluster-config/#note-on-numeric-ports
+   - type: feature
+     title: New subcommand to gather logs and export into zip file
+     body: >-
+       Telepresence has logs for various components (the
+       traffic-manager, traffic-agents, the root and
+       user daemons), which are integral for understanding and debugging
+       Telepresence behavior. We have added the telepresence
+       gather-logs command to make it simple to compile logs for
+       all Telepresence components and export them in a zip file that can
+       be shared with others and/or included in a GitHub issue. For more
+       information on usage, run telepresence gather-logs --help.
+     docs: reference/client
+     image: telepresence-2.4.4-gather-logs.png
+   - type: feature
+     title: Pod CIDR strategy is configurable in Helm chart
+     body: >-
+       Telepresence now enables you to directly configure how to get
+       pod CIDRs when deploying Telepresence with the Helm chart.
+       The default behavior remains the same. We've also introduced
+       the ability to explicitly set what the pod CIDRs should be.
+     docs: install/helm
+   - type: bugfix
+     title: Compute pod CIDRs more efficiently
+     body: >-
+       When computing subnets using the pod CIDRs, the traffic-manager
+       now uses fewer CPU cycles.
+     docs: reference/routing/#subnets
+   - type: bugfix
+     title: Prevent busy loop in traffic-manager
+     body: >-
+       In some circumstances, the traffic-manager's CPU
+       would max out and get pinned at its limit. This required a
+       shutdown or pod restart to fix. We've added some fixes
+       to prevent the traffic-manager from getting into this state.
+   - type: bugfix
+     title: Added a fixed buffer size to TUN-device
+     body: >-
+       The TUN-device now has a max buffer size of 64K. This prevents the
+       buffer from growing limitlessly until it receives a PSH, which could
+       be a blocking operation when receiving lots of TCP-packets.
+     docs: reference/tun-device
+   - type: bugfix
+     title: Fix hanging user daemon
+     body: >-
+       When Telepresence encountered an issue connecting to the cluster or
+       the root daemon, it could hang indefinitely. It now errors correctly
+       when it encounters that situation.
+   - type: bugfix
+     title: Improved proprietary agent connectivity
+     body: >-
+       To determine whether the environment cluster is air-gapped, the
+       proprietary agent attempts to connect to the cloud during startup.
+       To deal with a possible initial failure, the agent backs off
+       and retries the connection with an increasing backoff duration.
+   - type: bugfix
+     title: Telepresence correctly reports intercept port conflict
+     body: >-
+       When creating a second intercept targeting the same local port,
+       Telepresence now gives the user an informative error message. Additionally,
+       it tells them which intercept is currently using that port, to make
+       it easier to remedy.
+ - version: 2.4.3
+   date: "2021-09-15"
+   notes:
+   - type: feature
+     title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+     body: >-
+       When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID
+       variable in the interceptor's environment.
+     docs: reference/environment/#telepresence-environment-variables
+   - type: bugfix
+     title: Improved daemon stability
+     body: >-
+       Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+   - type: bugfix
+     title: Complete logs for Windows
+     body: >-
+       Crash stack traces and other errors were incorrectly not written to log files. This has
+       been fixed, so logs for Windows should be at parity with the ones on macOS and Linux.
+   - type: bugfix
+     title: Log rotation fix for Linux kernel 4.11+
+     body: >-
+       On Linux kernel 4.11 and above, the log file rotation now properly reads the
+       birth-time of the log file. Older kernels continue to use the old behavior
+       of using the change-time in place of the birth-time.
+   - type: bugfix
+     title: Improved error messaging
+     body: >-
+       When Telepresence encounters an error, it tells the user where they should look for
+       logs related to the error. We have refined this so that it only tells users to look
+       for errors in the daemon logs for issues that are logged there.
+   - type: bugfix
+     title: Stop resolving localhost
+     body: >-
+       When using the overriding DNS resolver, it will no longer apply search paths when
+       resolving localhost, since that should be resolved on the user's machine
+       instead of the cluster.
+     docs: reference/routing#linux-systemd-resolved-resolver
+   - type: bugfix
+     title: Variable cluster domain
+     body: >-
+       Previously, the cluster domain was hardcoded to cluster.local. While this
+       is true for many Kubernetes clusters, it is not for all of them. Now this value is
+       retrieved from the traffic-manager.
+   - type: bugfix
+     title: Improved cleanup of traffic-agents
+     body: >-
+       Telepresence now uninstalls traffic-agents installed via the mutating webhook
+       when using telepresence uninstall --everything.
+   - type: bugfix
+     title: More large file transfer fixes
+     body: >-
+       Downloading large files during an intercept will no longer cause timeouts and hanging
+       traffic-agents.
+   - type: bugfix
+     title: Setting --mount to false when intercepting works as expected
+     body: >-
+       When using --mount=false while performing an intercept, the file system
+       was still mounted. This has been remedied, so the intercept behavior respects the
+       flag.
+     docs: reference/volume
+   - type: bugfix
+     title: Traffic-manager establishes outbound connections in parallel
+     body: >-
+       Previously, the traffic-manager established outbound connections
+       sequentially. This meant that slow (and failing) Dial calls would
+       block all outbound traffic from the workstation (for up to 30 seconds). We now
+       establish these connections in parallel so that won't occur.
+     docs: reference/routing/#outbound
+   - type: bugfix
+     title: Status command reports correct DNS settings
+     body: >-
+       Telepresence status now correctly reports DNS settings for all operating
+       systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+ - version: 2.4.2
+   date: "2021-09-01"
+   notes:
+   - type: feature
+     title: New subcommand to temporarily change log-level
+     body: >-
+       We have added a new telepresence loglevel subcommand that enables users
+       to temporarily change the log-level for the local daemons, the traffic-manager, and
+       the traffic-agents. While the logLevels settings from the config will
+       still be used by default, this can be helpful if you are currently experiencing an issue and
+       want higher fidelity logs without doing a telepresence quit and
+       telepresence connect. You can use telepresence loglevel --help to get
+       more information on options for the command.
+     docs: reference/config
+   - type: change
+     title: All components have info as the default log-level
+     body: >-
+       We've now set the default for all components of Telepresence (traffic-agent,
+       traffic-manager, local daemons) to use info as the default log-level.
+   - type: bugfix
+     title: Updating RBAC in helm chart to fix cluster-id regression
+     body: >-
+       In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+       of the default namespace. The helm chart was not updated to give the traffic-manager
+       those permissions, which has since been fixed. This impacted users who use licensed features of
+       the Telepresence extension in an air-gapped environment.
+     docs: reference/cluster-config/#air-gapped-cluster
+   - type: bugfix
+     title: Timeouts for Helm actions are now respected
+     body: >-
+       The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+       indefinitely when failing to install the traffic-manager.
+     docs: reference/config#timeouts
+ - version: 2.4.1
+   date: "2021-08-30"
+   notes:
+   - type: feature
+     title: External cloud variables are now configurable
+     body: >-
+       We now support configuring the host and port for the cloud in your config.yml. These
+       are used when logging in to utilize features provided by an extension, and are also passed
+       along as environment variables when installing the traffic-manager. Additionally, we
+       now run our test suite with these variables set to localhost to continue to ensure Telepresence
+       is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+       environment variables are no longer used.
+     image: telepresence-2.4.1-systema-vars.png
+     docs: reference/config/#cloud
+   - type: feature
+     title: Helm chart can now regenerate certificate used for mutating webhook on-demand
+     body: >-
+       You can now set agentInjector.certificate.regenerate when deploying Telepresence
+       with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+     docs: install/helm
+   - type: change
+     title: Traffic Manager installed via helm
+     body: >-
+       The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+       This change is transparent to the user.
+       A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+     docs: reference/config#timeouts
+   - type: change
+     title: traffic-manager gets cluster ID itself instead of via environment variable
+     body: >-
+       The traffic-manager used to get the cluster ID as an environment variable when running
+       telepresence connect, or via adding the value in the helm chart. This was
+       clunky, so now the traffic-manager gets the value itself, as long as it has permissions
+       to "get" and "list" namespaces (this has been updated in the helm chart).
+     docs: install/helm
+   - type: bugfix
+     title: Telepresence now mounts all directories from /var/run/secrets
+     body: >-
+       In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+       We now mount *all* directories in /var/run/secrets, which, for example, includes
+       directories like eks.amazonaws.com used for IRSA tokens.
+     docs: reference/volume
+   - type: bugfix
+     title: Max gRPC receive size correctly propagates to all grpc servers
+     body: >-
+       This fixes a bug where the max gRPC receive size was only propagated to some of the
+       grpc servers, causing failures when the message size was over the default.
+     docs: reference/config/#grpc
+   - type: bugfix
+     title: Updated our Homebrew packaging to run manually
+     body: >-
+       We made some updates to the script that packages Telepresence for Homebrew so that it
+       can be run manually. This will enable maintainers of Telepresence to run the script manually
+       should we ever need to roll back a release and have latest point to an older version.
+     docs: install/
+   - type: bugfix
+     title: Telepresence uses namespace from kubeconfig context on each call
+     body: >-
+       In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+       for the entirety of the time a user was connected to Telepresence. This led to confusing behavior
+       when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+       Telepresence will now do so, and use the namespace designated by the context on each call.
+   - type: bugfix
+     title: Idle outbound TCP connections timeout increased to 7200 seconds
+     body: >-
+       Some users were noticing that their intercepts would start failing after 60 seconds.
+       This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+       now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+   - type: bugfix
+     title: Telepresence will automatically remove a socket upon ungraceful termination
+     body: >-
+       When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+       that the process has terminated ungracefully" and imply that they should remove the socket. Telepresence
+       will now automatically attempt to remove the socket upon ungraceful termination.
+   - type: bugfix
+     title: Fixed user daemon deadlock
+     body: >-
+       Remedied a situation where the user daemon could hang when a user was logged in.
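+   # Illustrative config.yml sketch (not from the release notes): the timeouts.helm
+   # setting introduced above; the duration value is a placeholder.
+   #
+   #   timeouts:
+   #     helm: 60s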
+   - type: bugfix
+     title: Fixed agentImage config setting
+     body: >-
+       The config setting images.agentImage is no longer required to contain the repository; it
+       will use the value at images.repository.
+     docs: reference/config/#images
+ - version: 2.4.0
+   date: "2021-08-04"
+   notes:
+   - type: feature
+     title: Windows Client Developer Preview
+     body: >-
+       There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+       All the same features supported by the macOS and Linux clients are available on Windows.
+     image: telepresence-2.4.0-windows.png
+     docs: install
+   - type: feature
+     title: CLI raises helpful messages from Ambassador Cloud
+     body: >-
+       Telepresence can now receive messages from Ambassador Cloud and raise
+       them to the user when they perform certain commands. This enables us
+       to send you messages that may enhance your Telepresence experience when
+       using certain commands. The frequency of messages can be configured in your
+       config.yml.
+     image: telepresence-2.4.0-cloud-messages.png
+     docs: reference/config#cloud
+   - type: bugfix
+     title: Improved stability of systemd-resolved-based DNS
+     body: >-
+       When initializing the systemd-resolved-based DNS, the routing domain
+       is set to improve stability in non-standard configurations. This also enables the
+       overriding resolver to do a proper takeover once the DNS service ends.
+     docs: reference/routing#linux-systemd-resolved-resolver
+   - type: bugfix
+     title: Fixed an edge case when intercepting a container with multiple ports
+     body: >-
+       When specifying a port of a container to intercept, if there was a container in the
+       pod without ports, it was automatically selected. This has been fixed so we'll only
+       choose the container with "no ports" if there's no container that explicitly matches
+       the port used in your intercept.
+     docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+   - type: bugfix
+     title: $(NAME) references in the agent's environment are now interpolated correctly
+     body: >-
+       If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+       would not correctly interpolate $(NAME). This has been fixed and works automatically.
+   - type: bugfix
+     title: Telepresence no longer prints INFO message when there is no config.yml
+     body: >-
+       Fixed a regression that printed an INFO message to the terminal when there wasn't a
+       config.yml present. The config is optional, so this message has been
+       removed.
+     docs: reference/config
+   - type: bugfix
+     title: Telepresence no longer panics when using --http-match
+     body: >-
+       Fixed a bug where Telepresence would panic if the value passed to --http-match
+       didn't contain an equal sign. The correct syntax is in the --help
+       string and looks like --http-match=HTTP2_HEADER=REGEX.
+     docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+   - type: bugfix
+     title: Improved subnet updates
+     body: >-
+       The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+       the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+       client and the traffic-manager. This has been fixed so we only send updates when the subnets
+       themselves actually change.
+     docs: reference/routing/#subnets
+ - version: 2.3.7
+   date: "2021-07-23"
+   notes:
+   - type: feature
+     title: Also-proxy in telepresence status
+     body: >-
+       An also-proxy entry in the Kubernetes cluster config will
+       show up in the output of the telepresence status command.
+     docs: reference/config
+   - type: feature
+     title: Non-interactive telepresence login
+     body: >-
+       telepresence login now has an
+       --apikey=KEY flag that allows for
+       non-interactive logins. This is useful for headless
+       environments where launching a web-browser is impossible,
+       such as cloud shells, Docker containers, or CI.
+     image: telepresence-2.3.7-newkey.png
+     docs: reference/client/login/
+   - type: bugfix
+     title: Mutating webhook injector correctly hides named ports for probes
+     body: >-
+       The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+     docs: reference/cluster-config
+   - type: bugfix
+     title: telepresence current-cluster-id crash fixed
+     body: >-
+       Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+       to crash.
+     docs: reference/cluster-config
+   - type: bugfix
+     title: Better UX around intercepts with no local process running
+     body: >-
+       Requests would hang indefinitely when initiating an intercept before you
+       had a local process running. This has been fixed and will result in an
+       Empty reply from server until you start a local process.
+     docs: reference/intercepts
+   - type: bugfix
+     title: API keys no longer show as "no description"
+     body: >-
+       New API keys generated internally for communication with
+       Ambassador Cloud no longer show up as "no description" in
+       the Ambassador Cloud web UI. Existing API keys generated by
+       older versions of Telepresence will still show up this way.
+     image: telepresence-2.3.7-keydesc.png
+   - type: bugfix
+     title: Fix corruption of user-info.json
+     body: >-
+       Fixed a race condition where logging in and logging out
+       rapidly could cause memory corruption or corruption of the
+       user-info.json cache file used when
+       authenticating with Ambassador Cloud.
+   - type: bugfix
+     title: Improved DNS resolver for systemd-resolved
+     body: >-
+       Telepresence's systemd-resolved-based DNS resolver is now more
+       stable, and if it fails to initialize, the overriding resolver
+       that Telepresence falls back to will no longer cause general DNS lookup failures.
+     docs: reference/routing#linux-systemd-resolved-resolver
+   - type: bugfix
+     title: Faster telepresence list command
+     body: >-
+       The performance of telepresence list has been increased
+       significantly by reducing the number of calls the command makes to the cluster.
+     docs: reference/client
+ - version: 2.3.6
+   date: "2021-07-20"
+   notes:
+   - type: bugfix
+     title: Fix preview URLs
+     body: >-
+       Fixed a regression introduced in 2.3.5 that caused preview
+       URLs to not work.
+   - type: bugfix
+     title: Fix subnet discovery
+     body: >-
+       Fixed a regression introduced in 2.3.5 where the Traffic
+       Manager's RoleBinding did not correctly appoint
+       the traffic-manager Role, causing
+       subnet discovery to not work correctly.
+     docs: reference/rbac/
+   - type: bugfix
+     title: Fix root-user configuration loading
+     body: >-
+       Fixed a regression introduced in 2.3.5 where the root daemon
+       did not correctly read the configuration file, ignoring the
+       user's configured log levels and timeouts.
+     docs: reference/config/
+   - type: bugfix
+     title: Fix a user daemon crash
+     body: >-
+       Fixed an issue that could cause the user daemon to crash
+       during shutdown, because during shutdown it unconditionally
+       attempted to close a channel even though the channel might
+       already be closed.
+ - version: 2.3.5
+   date: "2021-07-15"
+   notes:
+   - type: feature
+     title: traffic-manager in multiple namespaces
+     body: >-
+       We now support installing multiple traffic managers in the same cluster.
+       This allows operators to install deployments of Telepresence that are
+       limited to certain namespaces.
+     image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+     docs: install/helm
+   - type: feature
+     title: No more dependence on kubectl
+     body: >-
+       Telepresence no longer depends on having an external
+       kubectl binary, which might not be present for
+       OpenShift users (who have oc instead of
+       kubectl).
+   - type: feature
+     title: Agent image now configurable
+     body: >-
+       We now support configuring which agent image + registry to use in the
+       config. This enables users whose laptop is in an air-gapped environment to
+       create personal intercepts without requiring a login. It also makes it easier
+       for those who are developing on Telepresence to specify which agent image should
+       be used. The env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+       used.
+     image: ./telepresence-2.3.5-agent-config.png
+     docs: reference/config/#images
+   - type: feature
+     title: Max gRPC receive size now configurable
+     body: >-
+       The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+     image: ./telepresence-2.3.5-grpc-max-receive-size.png
+     docs: reference/config/#grpc
+   - type: feature
+     title: CLI can be used in air-gapped environments
+     body: >-
+       While Telepresence will auto-detect whether your cluster is in an air-gapped environment,
+       we've added an option users can add to their config.yml to ensure the CLI acts as if it
+       is in an air-gapped environment. Air-gapped environments require a manually installed
+       license.
+     docs: reference/cluster-config/#air-gapped-cluster
+     image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+   date: "2021-07-09"
+   notes:
+   - type: bugfix
+     title: Improved IP log statements
+     body: >-
+       Some log statements were printing incorrect characters when they should have been IP addresses.
+       This has been resolved to include more accurate and useful logging.
+     docs: reference/config/#log-levels
+     image: ./telepresence-2.3.4-ip-error.png
+   - type: bugfix
+     title: Improved messaging when multiple services match a workload
+     body: >-
+       If multiple services matched a workload when performing an intercept, Telepresence would crash.
+       It now gives the correct error message, instructing the user on how to specify which
+       service the intercept should use.
+     image: ./telepresence-2.3.4-improved-error.png
+     docs: reference/intercepts
+   - type: bugfix
+     title: Traffic-manager creates services in its own namespace to determine subnet
+     body: >-
+       Telepresence will now determine the service subnet by creating a dummy service in its own
+       namespace, instead of the default namespace, which was causing RBAC permission issues in
+       some clusters.
+     docs: reference/routing/#subnets
+   - type: bugfix
+     title: Telepresence connect respects pre-existing clusterrole
+     body: >-
+       When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+       cluster, Telepresence will no longer try to update the clusterrole.
+ docs: reference/rbac + - type: bugfix + title: Helm Chart fixed for clientRbac.namespaced + body: >- + The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true. + docs: install/helm + - version: 2.3.3 + date: "2021-07-07" + notes: + - type: feature + title: Traffic Manager Helm Chart + body: >- + Telepresence now supports installing the Traffic Manager via Helm. + This will make it easy for operators to install and configure the + server-side components of Telepresence separately from the CLI (which + in turn allows for better separation of permissions). + image: ./telepresence-2.3.3-helm.png + docs: install/helm/ + - type: feature + title: Traffic-manager in custom namespace + body: >- + As the traffic-manager can now be installed in any + namespace via Helm, Telepresence can now be configured to look for the + Traffic Manager in a namespace other than ambassador. + This can be configured on a per-cluster basis. + image: ./telepresence-2.3.3-namespace-config.png + docs: reference/config + - type: feature + title: Intercept --to-pod + body: >- + telepresence intercept now supports a + --to-pod flag that can be used to port-forward sidecars' + ports from an intercepted pod. + image: ./telepresence-2.3.3-to-pod.png + docs: reference/intercepts + - type: change + title: Change in migration from edgectl + body: >- + Telepresence no longer automatically shuts down the old + api_version=1 edgectl daemon. If migrating + from such an old version of edgectl, you must now manually + shut down the edgectl daemon before running Telepresence. + This was already the case when migrating from the newer + api_version=2 edgectl. + - type: bugfix + title: Fixed error during shutdown + body: >- + The root daemon no longer terminates when the user daemon disconnects + from its gRPC streams, and instead waits to be terminated by the CLI. + The previous behavior could cause problems with things not being cleaned up correctly. + - type: bugfix + title: Intercepts will survive deletion of intercepted pod + body: >- + An intercept will survive deletion of the intercepted pod provided + that another pod is created (or already exists) that can take over. + - version: 2.3.2 + date: "2021-06-18" + notes: + # Headliners + - type: feature + title: Service Port Annotation + body: >- + The mutator webhook for injecting traffic-agents now + recognizes a + telepresence.getambassador.io/inject-service-port + annotation to specify which port to intercept, bringing the + functionality of the --port flag to users who + use the mutator webhook in order to control Telepresence via + GitOps. + image: ./telepresence-2.3.2-svcport-annotation.png + docs: reference/cluster-config#service-port-annotation + - type: feature + title: Outbound Connections + body: >- + Outbound connections are now routed through the intercepted + Pods, which means that the connections originate from that + Pod from the cluster's perspective. This allows service + meshes to correctly identify the traffic. + docs: reference/routing/#outbound + - type: change + title: Inbound Connections + body: >- + Inbound connections from an intercepted agent are now + tunneled to the manager over the existing gRPC connection, + instead of establishing a new connection to the manager for + each inbound connection. This avoids interference from + certain service mesh configurations.
+ docs: reference/routing/#inbound + + # RBAC changes + - type: change + title: Traffic Manager needs new RBAC permissions + body: >- + The Traffic Manager requires RBAC + permissions to list Nodes and Pods, and to create a dummy + Service in the manager's namespace. + docs: reference/routing/#subnets + - type: change + title: Reduced developer RBAC requirements + body: >- + The on-laptop client no longer requires RBAC permissions to list the Nodes + in the cluster or to create Services, as that functionality + has been moved to the Traffic Manager. + + # Bugfixes + - type: bugfix + title: Able to detect subnets + body: >- + Telepresence will now detect the Pod CIDR ranges even if + they are not listed in the Nodes. + image: ./telepresence-2.3.2-subnets.png + docs: reference/routing/#subnets + - type: bugfix + title: Dynamic IP ranges + body: >- + The list of cluster subnets that the virtual network + interface will route is now configured dynamically and will + follow changes in the cluster. + - type: bugfix + title: No duplicate subnets + body: >- + Subnets fully covered by other subnets are now pruned + internally and thus never superfluously added to the + laptop's routing table. + docs: reference/routing/#subnets + - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes + title: Change in default timeout + body: >- + The trafficManagerAPI timeout default has + changed from 5 seconds to 15 seconds, in order to accommodate + the extended time it takes for the traffic-manager to do its + initial discovery of cluster info as a result of the above + bugfixes. + - type: bugfix + title: Removal of DNS config files on macOS + body: >- + On macOS, files generated under + /etc/resolver/ as the result of using + include-suffixes in the cluster config are now + properly removed on quit. + docs: reference/routing/#macos-resolver + + - type: bugfix + title: Large file transfers + body: >- + Telepresence no longer erroneously terminates connections + early when sending a large HTTP response from an intercepted + service. + - type: bugfix + title: Race condition in shutdown + body: >- + When shutting down the user-daemon or root-daemon on the + laptop, telepresence quit and related commands + no longer return early before everything is fully shut down. + You can now count on all of the side effects on the laptop + having been cleaned up by the time the command returns. + - version: 2.3.1 + date: "2021-06-14" + notes: + - title: DNS Resolver Configuration + body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local and remote resolvers to use and which suffixes should be ignored or included." + image: ./telepresence-2.3.1-dns.png + docs: reference/config + type: feature + - title: AlsoProxy Configuration + body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+ image: ./telepresence-2.3.1-alsoProxy.png + docs: reference/config + type: feature + - title: Mutating Webhook for Injecting Traffic Agents + body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past." + image: ./telepresence-2.3.1-inject.png + docs: reference/rbac + type: feature + - title: Traffic Manager Connect Timeout + body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook." + image: ./telepresence-2.3.1-trafficmanagerconnect.png + docs: reference/config + type: change + - title: Fix for large file transfers + body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely." + image: ./telepresence-2.3.1-large-file-transfer.png + docs: reference/tun-device + type: bugfix + - title: Brew Formula Changed + body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence." + image: ./telepresence-2.3.1-brew.png + docs: install/ + type: change + - version: 2.3.0 + date: "2021-06-01" + notes: + - title: Brew install Telepresence + body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2." + image: ./telepresence-2.3.0-homebrew.png + docs: install/ + type: feature + - title: TCP and UDP routing via Virtual Network Interface + body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP." + image: ./tunnel.jpg + docs: reference/tun-device + type: feature + - title: SSH is no longer used + body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration." + image: ./no-ssh.png + docs: reference/tun-device/#no-ssh-required + type: change + - title: Running in a Docker container + body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+ image: ./run-tp-in-docker.png + docs: reference/inside-container + type: feature + - title: Configurable Log Levels + body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log." + image: ./telepresence-2.3.0-loglevels.png + docs: reference/config/#log-levels + type: feature + - version: 2.2.2 + date: "2021-05-17" + notes: + - title: Legacy Telepresence subcommands + body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary. + image: ./telepresence-2.2.png + docs: install/migrate-from-legacy/ + type: feature diff --git a/docs/telepresence/2.15/troubleshooting/index.md b/docs/telepresence/2.15/troubleshooting/index.md new file mode 100644 index 000000000..5a477f20a --- /dev/null +++ b/docs/telepresence/2.15/troubleshooting/index.md @@ -0,0 +1,331 @@ +--- +title: "Telepresence Troubleshooting" +description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud." +--- +# Troubleshooting + + +## Creating an intercept did not generate a preview URL + +Preview URLs can only be created if Telepresence is [logged in to +Ambassador Cloud](../reference/client/login/). When not logged in, it +will not even try to create a preview URL (additionally, by default it +will intercept all traffic rather than just a subset of the traffic). +Remove the intercept with `telepresence leave [deployment name]`, run +`telepresence login` to log in to Ambassador Cloud, then recreate the +intercept. See the [intercepts how-to doc](../howtos/intercepts) for +more details. + +## Error on accessing preview URL: `First record does not look like a TLS handshake` + +The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings. + +## Error on accessing preview URL: Detected a 301 Redirect Loop + +If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted. + +## Connecting to a cluster via VPN doesn't work + +There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn how to fix these. + +## Connecting to a cluster hosted in a VM on the workstation doesn't work + +The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence. +Please check the [cluster in hosted VM](../howtos/cluster-in-vm) page for more details. + +## Your GitHub organization isn't listed + +Ambassador Cloud needs access granted to your GitHub organization as a +third-party OAuth app.
If an organization isn't listed during login +then the correct access has not been granted. + +The quickest way to resolve this is to go to the **GitHub menu** → +**Settings** → **Applications** → **Authorized OAuth Apps** → +**Ambassador Labs**. An organization owner will have a **Grant** +button; anyone who is not an owner will have a **Request** button, which sends an email +to the owner. If an access request has been denied in the past, the +user will not see the **Request** button and will have to reach out +to the owner. + +Once access is granted, log out of Ambassador Cloud and log back in; +you should see the GitHub organization listed. + +The organization owner can go to the **GitHub menu** → **Your +organizations** → **[org name]** → **Settings** → **Third-party +access** to see if Ambassador Labs has access already or authorize a +request for access (only owners will see **Settings** on the +organization page). Clicking the pencil icon will show the +permissions that were granted. + +GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). + +### Granting or requesting access on initial login + +When using GitHub as your identity provider, the first time you log in +to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to +access your organizations and certain user data. + +Authorize Ambassador Labs form + +Any listed organization with a green check has already granted access +to Ambassador Labs (you still need to authorize to allow Ambassador +Labs to read your user data and organization membership). + +Any organization with a red "X" requires access to be granted to +Ambassador Labs. Owners of the organization will see a **Grant** +button. Anyone who is not an owner will see a **Request** button. +This will send an email to the organization owner requesting approval +to access the organization. If an access request has been denied in +the past, the user will not see the **Request** button and will have +to reach out to the owner. + +Once approval is granted, you will have to log out of Ambassador Cloud +then back in to select the organization. + +## Volume mounts are not working on macOS + +It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and is hence no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps: + +1. Remove old sshfs, macfuse, osxfuse using `brew uninstall` +2. `brew install --cask macfuse` +3. `brew install gromgit/fuse/sshfs-mac` +4. `brew link --overwrite sshfs-mac` + +Now `sshfs -V` shows the correct version, e.g.: +``` +$ sshfs -V +SSHFS version 2.10 +FUSE library version: 2.9.9 +fuse: no mount point +``` + +5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences). +6. Approve the needed permission +7. Reboot your computer.
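+If mounts still fail after the reboot, one quick sanity check (a sketch; `kextstat` output varies by macOS version) is to confirm that the FUSE kernel extension actually loaded after you approved it: + +```console +$ kextstat | grep -i fuse   # should list the macfuse (io.macfuse.*) kernel extension +``` + +If nothing is listed, repeat the approval in System Preferences and reboot again.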
+ +## Volume mounts are not working on Linux +It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. + +After you've installed `sshfs`, if mounts still aren't working: +1. Uncomment `user_allow_other` in `/etc/fuse.conf` +2. Add your user to the "fuse" group with: `sudo usermod -a -G fuse <your user>` +3. Restart your computer after uncommenting `user_allow_other` + + +## Authorization for preview URLs +Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud. + +You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj) +or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/). + +It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work. + +## Distributed tracing + +Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment. +As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing. +In order to facilitate such investigations, telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/). +Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object: + +```yaml +tracing: {} +``` + +If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data. +To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug and then run `telepresence gather-traces` immediately after: + +```console +$ telepresence gather-traces +``` + +This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory: + +```console +$ file traces.gz + traces.gz: gzip compressed data, original size modulo 2^32 158255 +``` + +Please do not try to open or uncompress this file, as it contains binary trace data. +Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion: + +```console +$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT +``` + +Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable): + +![Jaeger Interface](../images/tracing.png) + +**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP formatted spans (instead of e.g.
Jaeger or Zipkin specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors). +**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export. + +## No Sidecar Injected in GKE private clusters + +An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods: + +```bash +$ kubectl get pod +NAME READY STATUS RESTARTS AGE +echo-easy-7f6d54cff8-rz44k 1/1 Running 0 5m5s + +$ telepresence intercept echo-easy -p 8080 +telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive +$ kubectl get pod +NAME READY STATUS RESTARTS AGE +echo-easy-d8dc4cc7c-27567 1/1 Running 0 2m9s + +# Notice how 1/1 containers are ready. +``` + +If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the +Traffic Manager's webhook injector from the API server. +To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods, +or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port +using the Helm chart value `agentInjector.webhook.port`. +Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for information on resolving this. + +## Injected init-container doesn't function properly + +The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the +traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges +(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors, +such as Istio or Linkerd. + +Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring +that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid +name will do: +```yaml +apiVersion: v1 +kind: Pod +metadata: + ... +spec: + ... + containers: + - ... + ports: + - name: http + containerPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + ... +spec: + ... + ports: + - port: 80 + targetPort: http +``` + +Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead, +it will do the following during the injection of the traffic-agent: + +1. Rename the designated container's port by prefixing it (i.e., containerPort: http becomes containerPort: tm-http). +2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http). + +Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's +`containerPort`. A way to verify the rename is sketched at the end of this section. + +### Important note +If the service is "headless" (using `clusterIP: None`), then using named ports won't help because the `targetPort` will +not get remapped. A headless service will always require the init-container.
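+As a sketch of how to verify the rename described above (the workload label `app=echo-easy` and the output are hypothetical), you can list each container in the intercepted pod together with its port names: + +```console +$ kubectl get pod -l app=echo-easy -o jsonpath='{range .items[0].spec.containers[*]}{.name}{": "}{.ports[*].name}{"\n"}{end}' +echo-easy: tm-http +traffic-agent: http +``` + +Here the application container's port was renamed to `tm-http` and the injected traffic-agent took over the original `http` name, so the service's named `targetPort` now resolves to the agent.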
+ +## Error connecting to GKE or EKS cluster + +GKE and EKS require a plugin that utilizes their respective IAM providers. +You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins +for Telepresence to connect to your cluster. + +## `too many files open` error when running `telepresence connect` on Linux + +If `telepresence connect` on Linux fails with a message in the logs `too many files open`, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457). + +## Connected to cluster via VPN but IPs don't resolve + +If `telepresence connect` succeeds, but you find yourself unable to reach services on your cluster, a routing conflict may be to blame. This frequently happens when connecting to a VPN at the same time as telepresence, +as VPN clients often add routes that conflict with those added by telepresence. To debug this, pick an IP address in the cluster and get its route information. In this case, we'll get the route for `100.124.150.45`, and discover +that it's routed through a `tailscale` device. + +On macOS: + +```console +$ route -n get 100.124.150.45 + route to: 100.64.2.3 +destination: 100.64.0.0 + mask: 255.192.0.0 + interface: utun4 + flags: + recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire + 0 0 0 0 0 0 1280 0 +``` + +Note that on macOS it's difficult to determine which software a virtual interface's name corresponds to -- `utun4` doesn't indicate that it was created by tailscale. +One option is to look at the output of `ifconfig` before and after connecting to your VPN to see if the interface in question is being added upon connection. + +On Linux: + +```console +$ ip route get 100.124.150.45 +100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0 +``` + +On Windows: + +```console +$ Find-NetRoute -RemoteIPAddress 100.124.150.45 + +IPAddress : 100.102.111.26 +InterfaceIndex : 29 +InterfaceAlias : Tailscale +AddressFamily : IPv4 +Type : Unicast +PrefixLength : 32 +PrefixOrigin : Manual +SuffixOrigin : Manual +AddressState : Preferred +ValidLifetime : Infinite ([TimeSpan]::MaxValue) +PreferredLifetime : Infinite ([TimeSpan]::MaxValue) +SkipAsSource : False +PolicyStore : ActiveStore + + +Caption : +Description : +ElementName : +InstanceID : ;::8;;;8 +``` + +This will tell you which device the traffic is being routed through. As a rule, if the traffic is not being routed by the telepresence device, +your VPN may need to be reconfigured, as its routing configuration conflicts with telepresence. One way to determine if this is the case +is to run `telepresence quit -s`, check the route for an IP in the cluster (see commands above), run `telepresence connect`, and re-run the commands to see if the output changes (a sketch of this check follows at the end of this section). +If it doesn't change, that means telepresence is unable to override your VPN routes, and your VPN may need to be reconfigured. Talk to your network admins +to configure it such that clients do not add routes that conflict with the pod and service CIDRs of the clusters. How this will be done will +vary depending on the VPN provider. +Future versions of telepresence will be smarter about informing you of such conflicts upon connection.
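+A minimal sketch of that quit-and-compare check, using the Linux command and the example IP from above (the device names are illustrative): + +```console +$ telepresence quit -s +$ ip route get 100.124.150.45   # note the device, e.g. tailscale0 +$ telepresence connect +$ ip route get 100.124.150.45   # an unchanged device means telepresence could not override the VPN route +``` + +If the second lookup shows the telepresence TUN device instead, the routes are being overridden as intended.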
diff --git a/docs/telepresence/2.15/versions.yml b/docs/telepresence/2.15/versions.yml new file mode 100644 index 000000000..3ce9080fe --- /dev/null +++ b/docs/telepresence/2.15/versions.yml @@ -0,0 +1,5 @@ +version: "2.15.1" +dlVersion: "latest" +docsVersion: "2.15" +branch: release/v2 +productName: "Telepresence" diff --git a/docs/telepresence/2.2 b/docs/telepresence/2.2 deleted file mode 120000 index f1e85bae6..000000000 --- a/docs/telepresence/2.2 +++ /dev/null @@ -1 +0,0 @@ -../../../docs/telepresence/v2.2 \ No newline at end of file diff --git a/docs/telepresence/2.2/community.md b/docs/telepresence/2.2/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence/2.2/community.md @@ -0,0 +1,12 @@ +# Community + +## Contributor's guide +Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) +on GitHub to learn how you can help make Telepresence better. + +## Changelog +Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) +describes new features, bug fixes, and updates to each version of Telepresence. + +## Meetings +Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence/2.2/concepts/context-prop.md b/docs/telepresence/2.2/concepts/context-prop.md new file mode 100644 index 000000000..4ec09396f --- /dev/null +++ b/docs/telepresence/2.2/concepts/context-prop.md @@ -0,0 +1,25 @@ +# Context propagation + +**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination. + +This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them. + +Metadata propagation means that the services and other middleware along the request path do not strip away these headers; propagation moves the injected context between downstream services and processes. + + +## What is distributed tracing? + +Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging. + +In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame. + +An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and on their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation). + + +## What are intercepts and preview URLs?
+ +[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation, but for controllable intercepts and preview URLs rather than for tracing. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. diff --git a/docs/telepresence/2.2/concepts/devloop.md b/docs/telepresence/2.2/concepts/devloop.md new file mode 100644 index 000000000..886338f32 --- /dev/null +++ b/docs/telepresence/2.2/concepts/devloop.md @@ -0,0 +1,50 @@ +# The developer experience and the inner dev loop + +## How is the developer experience changing? + +The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + +Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + +The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](../../../../argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](../../../../argo/latest/concepts/gitops/#what-is-gitops) and a progressive delivery strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible. + +Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop. + +Engineers must now design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation. + +## What is the inner dev loop? + +The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+ +Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control. + +In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible. + +![traditional inner dev loop](../../images/trad-inner-dev-loop.png) + +## In search of lost time: How does containerization change the inner dev loop? + +The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again. + +Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps: + +* packaging code in containers +* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container) +* pushing the container to the registry +* deploying containers in Kubernetes + +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is increased to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40 (this arithmetic is sketched at the end of this page). At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
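+Restating the iteration arithmetic above as a quick sketch (the loop times are this page's assumptions, and the figures are rounded in the text): + +```console +$ echo "traditional: $((360 / 5)) iterations/day, containerized: $((360 / 9)) iterations/day" +traditional: 72 iterations/day, containerized: 40 iterations/day +``` + +The 9-minute loop is the original 5-minute loop with the build step grown to 5 minutes (3 coding + 5 building + 1 testing/inspecting).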
diff --git a/docs/telepresence/2.2/concepts/devworkflow.md b/docs/telepresence/2.2/concepts/devworkflow.md new file mode 100644 index 000000000..b09f186d0 --- /dev/null +++ b/docs/telepresence/2.2/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. diff --git a/docs/telepresence/2.2/concepts/faster.md b/docs/telepresence/2.2/concepts/faster.md new file mode 100644 index 000000000..7aa74ad1a --- /dev/null +++ b/docs/telepresence/2.2/concepts/faster.md @@ -0,0 +1,25 @@ +# Making the remote local: Faster feedback, collaboration and debugging + +With the goal of achieving [fast, efficient development](/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging. + +## How should I set up a Kubernetes development environment? + +[Setting up a development environment](/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication. + +While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on. + +A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration. + +## What is Telepresence?
+ +Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies. + +## How can I get fast, efficient local development? + +The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production. + +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.2/doc-links.yml b/docs/telepresence/2.2/doc-links.yml new file mode 100644 index 000000000..05b54d73a --- /dev/null +++ b/docs/telepresence/2.2/doc-links.yml @@ -0,0 +1,58 @@ + - title: Quick start + link: quick-start + - title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ + - title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: 'Making the remote local: Faster feedback, collaboration and debugging' + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: How do I...
+ items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts + - title: Volume mounts + link: reference/volume + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Using Telepresence with Linkerd + link: reference/linkerd + - title: FAQs + link: faqs + - title: Troubleshooting + link: troubleshooting + - title: Community + link: community diff --git a/docs/telepresence/2.2/faqs.md b/docs/telepresence/2.2/faqs.md new file mode 100644 index 000000000..e2a86805d --- /dev/null +++ b/docs/telepresence/2.2/faqs.md @@ -0,0 +1,108 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +**Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes. + +**What protocols can be intercepted by Telepresence?** + +All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](../../../../feedback) to request it.
+ +**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via their namespace-qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means if you `telepresence intercept <service name> -n <namespace>`, you will be able to resolve just the `<service name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name. + +**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database. + +**What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +During first use, Telepresence will make its best guess at this ingress configuration and prompt you to confirm or update it. + +**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to download the Traffic Manager and Traffic Agent containers that are deployed when connecting with Telepresence. + +**Why does running Telepresence require sudo access for the local daemon?** + +The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster. + +On Fedora, Telepresence also creates a virtual network device (a TUN network device) for DNS routing. That also requires root access. + +**What components get installed in the cluster when running Telepresence?** + +A single Traffic Manager service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
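+For example, after a workload has been intercepted, the injected agent shows up as a second container in each of its pods (the pod name and output below are hypothetical): + +```console +$ kubectl get pods +NAME                              READY   STATUS    RESTARTS   AGE +example-service-5f6b8c9d4-xkj2p   2/2     Running   0          1m +``` + +The `2/2` READY count is the application container plus the injected Traffic Agent.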
+ +**How can I remove all of the Telepresence components installed within my cluster?** + +You can run the command `telepresence uninstall --everything` to remove the Traffic Manager service installed in the cluster and the Traffic Agent containers injected into each pod being intercepted. + +Running this command will also stop the local daemon. + +**What language is Telepresence written in?** + +All local and cluster-side components of Telepresence are written in Go. + +**How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established via the standard `kubectl` mechanisms and SSH tunnelling. + + + +**What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](../../../../feedback) which providers are the most important to you and your team in order for us to prioritize those. + +**Is Telepresence open source?** + +Telepresence will be open source soon; in the meantime, it is free to download. We prioritized releasing the binary as soon as possible for community feedback, but are actively working on the open-sourcing logistics. + +**How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](../../../../feedback), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.2/howtos/intercepts.md b/docs/telepresence/2.2/howtos/intercepts.md new file mode 100644 index 000000000..2a88f7528 --- /dev/null +++ b/docs/telepresence/2.2/howtos/intercepts.md @@ -0,0 +1,280 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment +
+### Contents + +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Intercept your service](#3-intercept-your-service) +* [4. Create a preview URL to only intercept certain requests to your service](#4-create-a-preview-url-to-only-intercept-certain-requests-to-your-service) +* [What's next?](#whats-next) +
+ +For a detailed walk-through on creating intercepts using our sample app, follow the quick start guide. + +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and [set up](https://kubernetes.io/docs/tasks/tools/install-kubectl/#verifying-kubectl-configuration) to use a Kubernetes cluster, preferably an empty test cluster. + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller and that you can run a copy of that service on your laptop. + +## 1. Install the Telepresence CLI + +macOS: + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + +Linux: + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: + `telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://<cluster public IP>) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: + `curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Intercept your service + +In this section, we will go through the steps required for you to intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. List the services that you can intercept with `telepresence list` and make sure the one you want to intercept is listed. + + For example, this would confirm that `example-service` can be intercepted by Telepresence: + ``` + $ telepresence list + + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +2. Get the name of the port you want to intercept on your service: + `kubectl get service <service name> --output yaml`. + + For example, this would show that the port `80` is named `http` in the `example-service`: + + ``` + $ kubectl get service example-service --output yaml + + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +3. Intercept all traffic going to the service in your cluster: + `telepresence intercept <service name> --port <local port>[:<remote port>] --env-file <path to env file>`. + + - For the `--port` argument, specify the port on which your local instance of your service will be running. + - If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon. + - For the `--env-file` argument, specify the path to a file to which Telepresence should write the environment variables that your service is currently running with. This is going to be useful as we start our service. + + For the example below, Telepresence will intercept traffic going to service `example-service` so that requests reaching it on port `http` in the cluster get routed to port `8080` on the workstation, and will write the environment variables of the service to `~/example-service-intercept.env`. + + ``` + $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env + + Using Deployment example-service + intercepted + Intercept name: example-service + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Intercepting : all TCP connections + ``` + +4. Start your local environment using the environment variables retrieved in the previous step. + + Here are a few options to pass the environment variables to your local process: + - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) + - with a JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.), use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile) + - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration + +5. 
+5. Query the environment in which you intercepted a service the way you usually would and see your local instance being invoked.
+
+   
+   Didn't work? Make sure the port you're listening on matches the one specified when creating your intercept.
+   
+
+   
+   Congratulations! All the traffic usually going to your Kubernetes Service is now being routed to your local environment!
+   
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+## 4. Create a preview URL to only intercept certain requests to your service
+
+When working on a development environment with multiple engineers, you don't want your intercepts to impact your
+teammates. If you are logged in, Ambassador Cloud automatically generates a preview URL when you create an intercept.
+Telepresence can then route only the requests coming from that preview URL to your local environment; the rest will
+be routed to your cluster as usual.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave <service-name>`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   ```
+   $ telepresence login
+
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`
+
+   You will be asked for the following information:
+   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `<service-name>.<namespace>`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
+   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
+   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
+   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), this is the value you would need to enter here.
+
+   
+   Telepresence supports any ingress controller, not just Ambassador Edge Stack.
+   
+
+   For the example below, you will create a preview URL that will send traffic to the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and setting the `Host` HTTP header to `dev-environment.edgestack.me`:
+
+   ```
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service.  Please Confirm the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+          [default: -]: 443
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]: y
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+          [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using Deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp(":example-service")
+       Preview URL            : https://.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+4. Start your local service as in the previous step.
+
+5. Go to the preview URL printed after doing the intercept and see that your local service is processing the request.
+
+   
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+   
+
+6. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal.
+
+   
+   Congratulations! You have now only intercepted traffic coming from your preview URL, without impacting your teammates.
+   
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+...and all of this without impacting your teammates!
+## What's Next?
+
+ diff --git a/docs/telepresence/2.2/howtos/outbound.md b/docs/telepresence/2.2/howtos/outbound.md new file mode 100644 index 000000000..83ec20b01 --- /dev/null +++ b/docs/telepresence/2.2/howtos/outbound.md @@ -0,0 +1,98 @@ +---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, there are other options to use Telepresence for proxying traffic between your laptop and the cluster.
+
+ We'll assume below that you have the quick start sample web app running in your cluster so that we can test accessing the verylargejavaservice service. However, you can substitute any service you are running in its place.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept will allow you to access cluster workloads as if your laptop was another pod in the cluster. You will be able to access other Kubernetes services using `<service name>.<namespace>`, for example by curling a service from your terminal. A service running on your laptop will also be able to interact with other services on the cluster by name.
+
+Connecting to the cluster starts the background daemon on your machine and installs the [Traffic Manager pod](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect`; you will be prompted for your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.1.4 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home//.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://)
+   ```
+
+1. Run `telepresence status` to confirm that you are connected to your cluster and are proxying traffic to it.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.1.4 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.1.4 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+1. Now try to access your service by name with `curl verylargejavaservice.default:8080`. Telepresence will route the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```
+   $ curl verylargejavaservice.default:8080
+
+
+
+   Welcome to the EdgyCorp WebApp
+   ...
+   ```
+
+1. Terminate the client with `telepresence quit` and try to access the service again; it will fail because traffic is no longer being proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, services must be accessed with the namespace qualified DNS name (<service name>.<namespace>) before starting an intercept. After starting an intercept, only <service name> is required. Read more about these differences in DNS resolution here.
+
+## Controlling outbound connectivity
+
+By default, Telepresence will provide access to all Services found in all namespaces in the connected cluster. This might lead to problems if the user does not have access permissions to all namespaces via RBAC. The `--mapped-namespaces <comma-separated list of namespaces>` flag was added to give the user control over exactly which namespaces will be accessible.
+
+When using this option, it is important to include all namespaces containing services to be accessed and also all namespaces that contain services that those intercepted services might use.
+
+### Using local-only intercepts
+
+An intercept with the flag `--local-only` can be used to control outbound connectivity to specific namespaces.
+
+When developing services that have not yet been deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service is intended to be deployed so that it can access other services in that namespace without using qualified names.
+
+   ```
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. The intercept is deactivated just like any other intercept.
+
+   ```
+   $ telepresence leave <name>
+   ```
+The unqualified name access is now removed provided that no other intercept is active and using the same namespace.
+
+### External dependencies (formerly `--also-proxy`)
+
+If you have a resource outside of the cluster that you need access to, you can leverage Headless Services to provide access. This will give you a Kubernetes service formatted like all other services (`my-service.prod.svc.cluster.local`) that resolves to your resource.
+
+If the outside service has a DNS name, you can use the [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) service type, which will create a service that can be used from within your cluster and from your local machine when connected with Telepresence.
+
+If the outside service is an IP address, create a [service without selectors](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors) and then create an Endpoints resource of the same name.
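+
+For example, here is a minimal sketch of the selector-less approach, assuming a database running outside the cluster at 192.0.2.10 on port 5432 (the service name, address, and port are illustrative):
+
+```
+kubectl apply -f - <<EOF
+# A Service with no selector; Kubernetes will not manage its endpoints.
+apiVersion: v1
+kind: Service
+metadata:
+  name: external-db
+spec:
+  ports:
+    - port: 5432
+---
+# An Endpoints resource with the same name, pointing at the external address.
+apiVersion: v1
+kind: Endpoints
+metadata:
+  name: external-db
+subsets:
+  - addresses:
+      - ip: 192.0.2.10
+    ports:
+      - port: 5432
+EOF
+```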
+
+In both scenarios, Kubernetes will create a service that can be used from within your cluster and from your local machine when connected with Telepresence. diff --git a/docs/telepresence/2.2/howtos/preview-urls.md b/docs/telepresence/2.2/howtos/preview-urls.md new file mode 100644 index 000000000..d0934e054 --- /dev/null +++ b/docs/telepresence/2.2/howtos/preview-urls.md @@ -0,0 +1,131 @@ +---
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share dev environments with preview URLs
+
+Telepresence can generate shareable preview URLs, allowing you to work on a copy of your service locally and share that environment directly with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment; requests to the ingress will be routed to your cluster as usual.
+
+Preview URLs are protected behind authentication via Ambassador Cloud, ensuring that only users in your organization can view them. A preview URL can also be set to allow public access for sharing with outside collaborators.
+
+## Prerequisites
+
+* You should have the Telepresence CLI [installed](../../install/) on your laptop.
+
+* If you have Telepresence already installed and have used it previously, please first reset it with `telepresence uninstall --everything`.
+
+* You will need a service running in your cluster that you would like to intercept.
+
+
+Need a sample app to try with preview URLs? Check out the quick start. It has a multi-service app to install in your cluster with instructions to create a preview URL for that app.
+
+
+## Creating a preview URL
+
+1. List the services that you can intercept with `telepresence list` and make sure the one you want is listed.
+
+   If it isn't:
+
+   * Only Deployments, ReplicaSets, or StatefulSets are supported, and each of those requires a label matching a Service
+
+   * If the service is in a different namespace, specify it with the `--namespace` flag
+
+2. Log in to Ambassador Cloud, where you can manage and share preview URLs:
+`telepresence login`
+
+   ```
+   $ telepresence login
+
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept:
+`telepresence intercept <service-name> --port <local-port> --env-file <path-to-env-file>`
+
+   For `--port`, specify the port on which your local instance of your service will be running. If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon.
+
+   For `--env-file`, specify a file path where Telepresence will write the environment variables that are set in the Pod. This is going to be useful as we start our service locally.
+
+   You will be asked for the following information:
+   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `<service-name>.<namespace>`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
+   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
+   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
+   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+ + For the example below, you will create a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`: + + ``` + $ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Confirm the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: -]: ambassador.ambassador + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [default: -]: 443 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: y + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: ambassador.ambassador]: dev-environment.edgestack.me + + Using deployment example-service + intercepted + Intercept name : example-service + State : ACTIVE + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp(":example-service") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname : dev-environment.edgestack.me + ``` + +4. Start your local environment using the environment variables retrieved in the previous step. + + Here are a few options to pass the environment variables to your local process: + - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) + - with JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.) use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile) + - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration + +5. Go to the preview URL that was provided after starting the intercept (the next to last line in the terminal output above). Your local service will be processing the request. + + + Success! You have intercepted traffic coming from your preview URL without impacting other traffic from your Ingress. + + + + Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation. + + +6. Make a request on the URL you would usually query for that environment. The request should **not** be routed to your laptop. + + Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal. + +7. Share with a teammate. + + You can collaborate with teammates by sending your preview URL to them. They will be asked to log in to Ambassador Cloud if they are not already. Upon log in they must select the same identity provider and org as you are using; that is how they are authorized to access the preview URL (see the [list of supported identity providers](../../faqs/#idps)). When they visit the preview URL, they will see the intercepted service running on your laptop. + + + Congratulations! You have now created a dev environment and shared it with a teammate! 
While you and your partner work together to debug your service, the production version remains unchanged for the rest of your team until you commit your changes.
+
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization, you must go to [Ambassador Cloud](https://app.getambassador.io/cloud/), select the preview URL, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the intercept either from the dashboard or by running `telepresence leave <name>` also removes all access to the preview URL. diff --git a/docs/telepresence/2.2/images/container-inner-dev-loop.png b/docs/telepresence/2.2/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.2/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.2/images/github-login.png b/docs/telepresence/2.2/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.2/images/github-login.png differ diff --git a/docs/telepresence/2.2/images/logo.png b/docs/telepresence/2.2/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.2/images/logo.png differ diff --git a/docs/telepresence/2.2/images/trad-inner-dev-loop.png b/docs/telepresence/2.2/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.2/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.2/install/index.md b/docs/telepresence/2.2/install/index.md new file mode 100644 index 000000000..a6d423db4 --- /dev/null +++ b/docs/telepresence/2.2/install/index.md @@ -0,0 +1,55 @@ +import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS.
+
+
+
+
+```shell
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS, replacing `x.y.z` with the version you want.
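+
+For example, to fetch version 2.1.5 for Linux (an illustrative version number; substitute the version and OS you need):
+
+```
+# Download the pinned release and make it executable:
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/2.1.5/telepresence -o /usr/local/bin/telepresence
+sudo chmod a+x /usr/local/bin/telepresence
+```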
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+ diff --git a/docs/telepresence/2.2/install/migrate-from-legacy.md b/docs/telepresence/2.2/install/migrate-from-legacy.md new file mode 100644 index 000000000..a54937ee5 --- /dev/null +++ b/docs/telepresence/2.2/install/migrate-from-legacy.md @@ -0,0 +1,98 @@ +# Migrate from legacy Telepresence
+
+Telepresence (formerly referred to as Telepresence 2, now simply the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With Telepresence, a sidecar proxy is injected into the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used. By using the proxy approach, we can also do selective intercepts, where certain types of traffic get routed to the service while other traffic gets routed to your laptop/workstation.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it. So you can get started with Telepresence today, using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                        |
+|--------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                      | intercept $workload                        |
+| --expose localPort[:remotePort]                  | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell          | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd           | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd    | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                      | connect -- bash                            |
+| --run $cmd                                       | connect -- $cmd                            |
+| --env-file,--env-json                            | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                            | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                           | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place afterwards. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence. diff --git a/docs/telepresence/2.2/install/upgrade.md b/docs/telepresence/2.2/install/upgrade.md new file mode 100644 index 000000000..4a2332ab2 --- /dev/null +++ b/docs/telepresence/2.2/install/upgrade.md @@ -0,0 +1,35 @@ +---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+
+
+
+```shell
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. 
Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +After upgrading your CLI, the Traffic Manager **must be uninstalled** from your cluster. This can be done using `telepresence uninstall --everything` or by `kubectl delete svc,deploy -n ambassador traffic-manager`. The next time you run a `telepresence` command it will deploy an upgraded Traffic Manager. diff --git a/docs/telepresence/2.2/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.2/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..3e87c3ad6 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,129 @@ +import React from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */ +const Box = ({ children, color = 'blue', withConnector = false }) => ( + <> + {withConnector && ( +
+ +
+ )} +
{children}
+ +); + +const TelepresenceQuickStartLanding = () => ( +
+

+ Telepresence +

+

+ Explore the use cases of Telepresence with a free remote Kubernetes + cluster, or dive right in using your own. +

+ +
+
+
+

+ Use Our Free Demo Cluster +

+

+ See how Telepresence works without having to mess with your + production environments. +

+
+ +

6 minutes

+

Integration Testing

+

+ See how changes to a single service impact your entire application + without having to run your entire app locally. +

+ + GET STARTED{' '} + + +
+ +

5 minutes

+

Fast code changes

+

+ Make changes to your service locally and see the results instantly, + without waiting for containers to build. +

+ + GET STARTED{' '} + + +
+
+
+
+

+ Use Your Cluster +

+

+ Understand how Telepresence fits in to your Kubernetes development + workflow. +

+
+ +

10 minutes

+

Intercept your service in your cluster

+

+ Query services only exposed in your cluster's network. Make changes + and see them instantly in your K8s environment. +

+ + GET STARTED{' '} + + +
+
+
+ +
+

Watch the Demo

+
+
+

+ See Telepresence in action in our 3-minute demo + video that you can share with your teammates. +

+
    +
  • Instant feedback loops
  • +
  • Infinite-scale development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+
+
+ +
+
+
+
+); + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.2/quick-start/demo-node.md b/docs/telepresence/2.2/quick-start/demo-node.md new file mode 100644 index 000000000..9d0aef778 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/demo-node.md @@ -0,0 +1,288 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards from './qs-cards' + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Check out the sample application](#3-check-out-the-sample-application) +* [4. Run a service on your laptop](#4-run-a-service-on-your-laptop) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have a version in React if you prefer. + + + + Already have a cluster? Switch over to a version of this guide that takes you though the same steps using your own cluster. + + +## 1. Download the demo cluster archive + +1. {window.open('https://app.getambassador.io/cloud/demo-cluster-download-popup', 'ambassador-cloud-demo-cluster', 'menubar=no,location=no,resizable=yes,scrollbars=yes,status=no,width=550,height=750'); e.preventDefault(); }} target="_blank">Sign in to Ambassador Cloud to download your demo cluster archive. The archive contains all the tools and configurations you need to complete this guide. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + ``` + +3. The demo cluster we provided already has a demo app running. List the app's services: + `kubectl get services` + + ``` + $ kubectl get services + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + kubernetes ClusterIP 10.43.0.1 443/TCP 14h + dataprocessingservice ClusterIP 10.43.159.239 3000/TCP 14h + verylargejavaservice ClusterIP 10.43.223.61 8080/TCP 14h + verylargedatastore ClusterIP 10.43.203.19 8080/TCP 14h + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal. + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires root privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. 
Check out the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+We'll use a sample app that is already installed in your demo cluster. Let's take a quick look at its architecture before continuing.
+
+1. Use `kubectl get pods` to check the status of your pods:
+
+   ```
+   $ kubectl get pods
+
+   NAME                                         READY   STATUS    RESTARTS   AGE
+   verylargedatastore-855c8b8789-z8nhs          1/1     Running   0          78s
+   verylargejavaservice-7dfddbc95c-696br        1/1     Running   0          78s
+   dataprocessingservice-5f6bfdcf7b-qvd27       1/1     Running   0          79s
+   ```
+
+2. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at http://verylargejavaservice.default:8080.
+
+3. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.
+
+   
+   Congratulations, you can now access services running in your cluster by name from your laptop!
+   
+
+## 4. Run a service on your laptop
+
+Now start up the DataProcessingService service on your laptop. This version of the code has the UI color set to blue instead of green.
+
+1. **In a new terminal window**, go to the demo application directory in the extracted archive folder:
+   `cd edgey-corp-nodejs/DataProcessingService`
+
+2. Start the application:
+   `npm start`
+
+   ```
+   $ npm start
+
+   ...
+   Welcome to the DataProcessingService!
+   { _: [] }
+   Server running on port 3000
+   ```
+
+3. **Back in your previous terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+   
+   Victory, your local Node server is running a-ok!
+   
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   
+   Didn't work? Make sure you are working in the terminal window where you ran the script because it sets environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+   
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+   
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop!
+   
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload.
+
+2. Now visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. The frontend `verylargejavaservice` is still running on the cluster, but its request to the `DataProcessingService` to retrieve the color to show is being proxied by Telepresence to your laptop.
+
+   
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
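+
+You can also confirm the change from your terminal before checking the browser; the local Node server should now return the new color:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```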
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n`. The default for the fourth value is correct so hit enter to accept it + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: n + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.2/quick-start/demo-react.md b/docs/telepresence/2.2/quick-start/demo-react.md new file mode 100644 index 000000000..7471f23f1 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/demo-react.md @@ -0,0 +1,255 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards from './qs-cards' + +# Telepresence Quick Start - React + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. {window.open('https://app.getambassador.io/cloud/demo-cluster-download-popup', 'ambassador-cloud-demo-cluster', 'menubar=no,location=no,resizable=yes,scrollbars=yes,status=no,width=550,height=750'); e.preventDefault(); }} target="_blank">Sign in to Ambassador Cloud to download your demo cluster archive. The archive contains all the tools and configurations you need to complete this guide. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes, you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. 
In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
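+
+Since Telepresence is connected, you can also reproduce the failing request directly from your terminal (a quick check; the exact response body depends on the backend):
+
+```
+$ curl -i "http://web-svc.emojivoto:8080/api/vote?choice=:doughnut:"
+
+HTTP/1.1 500 Internal Server Error
+...
+```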
+ +The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed. + +## 5. Run a service on your laptop + +Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service. + +1. **In a new terminal window**, change into the repo directory and build the application: + + `cd /emojivoto` + `make web-app-local` + + ``` + $ make web-app-local + + ... + webpack 5.34.0 compiled successfully in 4326 ms + ✨ Done in 5.38s. + ``` + +2. Change into the service's code directory and start the server: + + `cd emojivoto-web-app` + `yarn webpack serve` + + ``` + $ yarn webpack serve + + ... + ℹ 「wds」: Project is running at http://localhost:8080/ + ... + ℹ 「wdm」: Compiled successfully. + ``` + +4. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster. + + + Victory, your local React server is running a-ok! + + +## 6. Make a code change +We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩. + +1. In the terminal running webpack, stop the server with `Ctrl+c`. + +1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149). + +1. Run webpack to fully recompile the code then start the server again: + + `yarn webpack` + `yarn webpack serve` + +1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience. + +## 7. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead. + + + This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only will apply to that terminal session. + + +1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port: +`telepresence intercept web-app --namespace emojivoto --port 8080` + + ``` + $ telepresence intercept web-app --namespace emojivoto --port 8080 + + Using deployment web-app + intercepted + Intercept name: web-app-emojivoto + State : ACTIVE + ... + ``` + +2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user. + + + The web-app Deployment is being intercepted and rerouted to the server on your laptop! + + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
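+
+When you are finished, you can clean up by removing the intercept; the intercept name reported above is `web-app-emojivoto`:
+
+```
+$ telepresence leave web-app-emojivoto
+```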
+ +## What's Next? + + diff --git a/docs/telepresence/2.2/quick-start/go.md b/docs/telepresence/2.2/quick-start/go.md new file mode 100644 index 000000000..87b5d6009 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/go.md @@ -0,0 +1,343 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server later in this guide. Install it by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+ Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+ Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
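+
+If you want to double-check what Telepresence is doing at this point, two read-only commands are useful. This is an optional sanity check, a minimal sketch assuming the intercept from step 5 is still active:
+
+```shell
+# Show the state of the local daemon and the cluster connection.
+telepresence status
+
+# List workloads in the current namespace and whether each one is
+# currently intercepted.
+telepresence list
+```
+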
+
+## 7. Create a Preview URL
+Create preview URLs to do selective intercepts: only traffic coming from the preview URL is intercepted, so you can easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   This opens your browser; log in with your preferred identity provider and choose your org.
+
+   ```
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Select the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+       [no default]: verylargejavaservice.default
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+       [no default]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+       [default: n]:
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+       [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again; it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.2/quick-start/index.md b/docs/telepresence/2.2/quick-start/index.md
new file mode 100644
index 000000000..efcb65b52
--- /dev/null
+++ b/docs/telepresence/2.2/quick-start/index.md
@@ -0,0 +1,7 @@
+---
+ description: Telepresence Quick Start. 
+--- + +import TelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.2/quick-start/qs-cards.js b/docs/telepresence/2.2/quick-start/qs-cards.js new file mode 100644 index 000000000..31582355b --- /dev/null +++ b/docs/telepresence/2.2/quick-start/qs-cards.js @@ -0,0 +1,70 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+      Collaborating
+
+      Use preview URLs to collaborate with your colleagues and others
+      outside of your organization.
+
+      Outbound Sessions
+
+      While connected to the cluster, your laptop can interact with
+      services as if it were another pod in the cluster.
+
+      FAQs
+
+      Learn more about use cases and the technical implementation of
+      Telepresence.
+
+ ); +} diff --git a/docs/telepresence/2.2/quick-start/qs-go.md b/docs/telepresence/2.2/quick-start/qs-go.md new file mode 100644 index 000000000..87b5d6009 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/qs-go.md @@ -0,0 +1,343 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+
+Contents
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Go application](#3-install-a-sample-go-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server later in this guide. Install it by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+ Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+ Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
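+
+You can watch the same rerouting from the command line as well as the browser. A minimal sketch, assuming `telepresence connect` is still running and the intercept from step 5 is still active:
+
+```shell
+# This cluster-internal name resolves through Telepresence, and the
+# intercept reroutes the request to the Go server on your laptop:
+curl dataprocessingservice.default:3000/color
+# Expected: "orange", the same answer as curl localhost:3000/color.
+```
+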
+
+## 7. Create a Preview URL
+Create preview URLs to do selective intercepts: only traffic coming from the preview URL is intercepted, so you can easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   This opens your browser; log in with your preferred identity provider and choose your org.
+
+   ```
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Select the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+       [no default]: verylargejavaservice.default
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+       [no default]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+       [default: n]:
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+       [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again; it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.2/quick-start/qs-java.md b/docs/telepresence/2.2/quick-start/qs-java.md
new file mode 100644
index 000000000..0b039096b
--- /dev/null
+++ b/docs/telepresence/2.2/quick-start/qs-java.md
@@ -0,0 +1,337 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Java**
+
+
+Contents
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Java application](#3-install-a-sample-java-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
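+
+Because the Java server is restarted by hand, it can help to watch the local service while it comes back up. A small sketch in plain shell, assuming the server from step 4 listens on port 3000:
+
+```shell
+# Poll the local service once a second until the restarted server
+# answers with the new color (Ctrl-C to stop).
+while true; do
+  curl -s localhost:3000/color; echo
+  sleep 1
+done
+```
+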
+
+## 7. Create a Preview URL
+Create preview URLs to do selective intercepts: only traffic coming from the preview URL is intercepted, so you can easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   This opens your browser; log in with your preferred identity provider and choose your org.
+
+   ```
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Select the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+       [no default]: verylargejavaservice.default
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+       [no default]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+       [default: n]:
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+       [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again; it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.2/quick-start/qs-node.md b/docs/telepresence/2.2/quick-start/qs-node.md
new file mode 100644
index 000000000..806d9d47d
--- /dev/null
+++ b/docs/telepresence/2.2/quick-start/qs-node.md
@@ -0,0 +1,331 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Node.js**
+
+
+Contents
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+
+ Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed.
+
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   The 401 response is expected. What's important is that you were able to contact the API.
+
+
+
+ Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+
+## 3. Install a sample Node.js application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+ While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java, Python using Flask, and Python using FastAPI if you prefer.
+
+
+1. Start by installing a sample application that consists of multiple services:
+`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml`
+
+   ```
+   $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml
+
+   deployment.apps/dataprocessingservice created
+   service/dataprocessingservice created
+   ...
+
+   ```
+
+2. Give your cluster a few moments to deploy the sample application.
+
+   Use `kubectl get pods` to check the status of your pods:
+
+   ```
+   $ kubectl get pods
+
+   NAME                                     READY   STATUS    RESTARTS   AGE
+   verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s
+   verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s
+   dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s
+   ```
+
+3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080).
+
+4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram.
+
+
+ Congratulations, you can now access services running in your cluster by name from your laptop!
+
+
+## 4. Set up a local development environment
+You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green.
+
+
+ Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go.
+
+
+1. Clone the web app’s GitHub repo:
+`git clone https://github.com/datawire/edgey-corp-nodejs.git`
+
+   ```
+   $ git clone https://github.com/datawire/edgey-corp-nodejs.git
+
+   Cloning into 'edgey-corp-nodejs'...
+   remote: Enumerating objects: 441, done.
+   ...
+   ```
+
+2. Change into the repo directory, then into DataProcessingService:
+`cd edgey-corp-nodejs/DataProcessingService/`
+
+3. Install the dependencies and start the Node server:
+`npm install && npm start`
+
+   ```
+   $ npm install && npm start
+
+   ...
+   Welcome to the DataProcessingService!
+   { _: [] }
+   Server running on port 3000
+   ```
+
+
+ Install Node.js from here if needed.
+
+
+4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
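+
+The `--port` flag is what ties an intercept to your local process. If your local server listened on a different port (8000 here is a hypothetical value for illustration), you would recreate the intercept to match:
+
+```shell
+# Remove the current intercept, then point a new one at local port 8000.
+telepresence leave dataprocessingservice
+telepresence intercept dataprocessingservice --port 8000
+```
+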
+
+## 7. Create a Preview URL
+Create preview URLs to do selective intercepts: only traffic coming from the preview URL is intercepted, so you can easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   This opens your browser; log in with your preferred identity provider and choose your org.
+
+   ```
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Select the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+       [no default]: verylargejavaservice.default
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+       [no default]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+       [default: n]:
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+       [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again; it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.2/quick-start/qs-python-fastapi.md b/docs/telepresence/2.2/quick-start/qs-python-fastapi.md
new file mode 100644
index 000000000..24f86037f
--- /dev/null
+++ b/docs/telepresence/2.2/quick-start/qs-python-fastapi.md
@@ -0,0 +1,328 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Python (FastAPI)**
+
+
+Contents
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Python application](#3-install-a-sample-python-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+Depending on how Python 3 is installed on your system (FastAPI requires Python 3), run either:
+`pip install fastapi uvicorn requests && python app.py`
+or, if `pip` and `python` point at Python 2 on your system:
+`pip3 install fastapi uvicorn requests && python3 app.py`
+
+   ```
+   $ pip install fastapi uvicorn requests && python app.py
+
+   Collecting fastapi
+   ...
+   Application startup complete.
+
+   ```
+
+ Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+ Victory, your local service is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
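+
+Before moving on, note the distinction the next section relies on: the intercept from step 5 is global, rerouting every request for the service, while a preview URL sets up a selective intercept that only reroutes requests carrying the preview header. A read-only way to check what is currently intercepted:
+
+```shell
+# Lists workloads and shows which ones are currently intercepted.
+telepresence list
+```
+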
+
+## 7. Create a Preview URL
+Create preview URLs to do selective intercepts: only traffic coming from the preview URL is intercepted, so you can easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and sharing preview URLs:
+`telepresence login`
+
+   This opens your browser; log in with your preferred identity provider and choose your org.
+
+   ```
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Select the ingress to use.
+
+   1/4: What's your ingress' layer 3 (IP) address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+       [no default]: verylargejavaservice.default
+
+   2/4: What's your ingress' layer 4 address (TCP port number)?
+
+       [no default]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+       [default: n]:
+
+   4/4: If required by your ingress, specify a different layer 5 hostname
+        (TLS-SNI, HTTP "Host" header) to access this service.
+
+       [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again; it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.2/quick-start/qs-python.md b/docs/telepresence/2.2/quick-start/qs-python.md
new file mode 100644
index 000000000..4d79336e0
--- /dev/null
+++ b/docs/telepresence/2.2/quick-start/qs-python.md
@@ -0,0 +1,339 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Python (Flask)**
+
+
+Contents
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Python application](#3-install-a-sample-python-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+ +## Prerequisites +You’ll need [`kubectl` installed](https://kubernetes.io/docs/tasks/tools/#kubectl) +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...

   ```

   Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local Python server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
      Intercept name: dataprocessingservice
      State          : ACTIVE
      Workload kind  : Deployment
      Destination    : 127.0.0.1:3000
      Intercepting   : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto-reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
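If you prefer the terminal, you can also confirm the change the same way you verified the local service earlier; assuming the intercept from step 5 is still active and the server has reloaded, curl it again:

```
$ curl localhost:3000/color

"orange"
```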
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Login to Ambassador Cloud, a web interface for managing and sharing preview URLs: +`telepresence login` + + This opens your browser; login with your preferred identity provider and choose your org. + + ``` + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.2/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.2/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/2.2/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 
100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/2.2/redirects.yml b/docs/telepresence/2.2/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.2/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.2/reference/architecture.md b/docs/telepresence/2.2/reference/architecture.md new file mode 100644 index 000000000..47facb0b8 --- /dev/null +++ b/docs/telepresence/2.2/reference/architecture.md @@ -0,0 +1,63 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
## Telepresence CLI

The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence Daemon, installs the Traffic Manager in your cluster, authenticates against Ambassador Cloud, and configures all those elements to communicate with one another.

## Telepresence Daemon

The Telepresence Daemon runs on a developer's workstation and is its main point of communication with the cluster's network. All requests from and to the cluster go through the Daemon, which communicates with the Traffic Manager.

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When Telepresence is run with any of the `connect`, `intercept`, or `list` commands, the Telepresence CLI first checks the cluster for the Traffic Manager deployment and creates it if it is missing.

When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview URL, it forwards the request to the ingress service specified at the Preview URL creation.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe pod <pod-name>`.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the pod usually handling requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those domains from authorized users to the appropriate Traffic Manager.

Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have accessed them, and deleting them.

# Changes from Service Preview

With Ambassador's previous offering, Service Preview, the Traffic Agent had to be added to a pod manually via an annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept, as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.

diff --git a/docs/telepresence/2.2/reference/client.md b/docs/telepresence/2.2/reference/client.md new file mode 100644 index 000000000..2251876cd --- /dev/null +++ b/docs/telepresence/2.2/reference/client.md @@ -0,0 +1,25 @@
---
description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
---

# Client reference

The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.

## Commands

A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.

| Command | Description |
| --- | --- |
| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
| `login` | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs out of Ambassador Cloud |
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
| `status` | Shows the current connectivity status |
| `quit` | Quits the local daemon, stopping all intercepts and outbound traffic to the cluster |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service: run it followed by the service name to be intercepted and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a Docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |

diff --git a/docs/telepresence/2.2/reference/cluster-config.md b/docs/telepresence/2.2/reference/cluster-config.md new file mode 100644 index 000000000..125c536aa --- /dev/null +++ b/docs/telepresence/2.2/reference/cluster-config.md @@ -0,0 +1,120 @@
# Cluster-side configuration

For the most part, Telepresence doesn't require any special configuration in the cluster and can be used right away in any cluster (as long as the user has adequate [RBAC permissions](../rbac)).

However, some advanced features do require some configuration in the cluster.

## TLS

If other applications in the cluster expect to speak TLS to your intercepted application (perhaps you're using a service mesh that does mTLS), then in order to use `--mechanism=http` (or any features that imply `--mechanism=http`) you need to tell Telepresence about the TLS certificates in use.
Tell Telepresence about the certificates in use by adjusting your [workload's](../intercepts/#supported-workloads) Pod template to set a couple of annotations on the intercepted Pods:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
     spec:
+      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
       containers:
```

- The `getambassador.io/inject-terminating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS server certificate to use for decrypting and responding to incoming requests.

  When Telepresence modifies the Service and workload port definitions to point at the Telepresence Agent sidecar's port instead of your application's actual port, the sidecar will use this certificate to terminate TLS.

- The `getambassador.io/inject-originating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS client certificate to use for communicating with your application.

  You will need to set this if your application expects incoming requests to speak TLS (for example, your code expects to handle mTLS itself instead of letting a service-mesh sidecar handle mTLS for it, or the port definition that Telepresence modified pointed at the service-mesh sidecar instead of at your application).

  If you do set this, you should set it to the same client certificate Secret that you configure the Ambassador Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace as the Pod.

The Pod will need to have permission to `get` and `watch` each of those Secrets.

Telepresence understands `type: kubernetes.io/tls` Secrets and `type: istio.io/key-and-cert` Secrets, as well as `type: Opaque` Secrets that it detects to be formatted as one of those types.

## Air gapped cluster

If your cluster is air gapped (it does not have access to the internet and therefore cannot connect to Ambassador Cloud), some additional configuration is required to acquire a license to use selective intercepts.

### Create a license

1. Go to [the teams settings page in Ambassador Cloud](https://auth.datawire.io/redirects/settings/teams) and select *Licenses* for the team you want to create the license for.

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your kubeconfig context is using the cluster you want to create a license for, then run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

   Cluster ID: 
   ```

4. Click *Generate API Key* to finish generating the license.

### Add license to cluster

1. On the licenses page, download the license file associated with your cluster.

2. Use this command to generate a Kubernetes Secret config using the license file:

   ```
   $ telepresence license -f <license-file>

   apiVersion: v1
   data:
     hostDomain: 
     license: 
   kind: Secret
   metadata:
     creationTimestamp: null
     name: systema-license
     namespace: ambassador
   ```

3. Save the output as a YAML file and apply it to your cluster with `kubectl`.
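   For example, assuming you saved the generated Secret to a file named `systema-license.yaml` (the file name is illustrative):

   ```
   $ kubectl apply -f systema-license.yaml

   secret/systema-license created
   ```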
Once applied, you will be able to use selective intercepts with the `--preview-url=false` flag (since use of preview URLs requires a connection to Ambassador Cloud).

diff --git a/docs/telepresence/2.2/reference/config.md b/docs/telepresence/2.2/reference/config.md new file mode 100644 index 000000000..ac81202a4 --- /dev/null +++ b/docs/telepresence/2.2/reference/config.md @@ -0,0 +1,32 @@
# Laptop-side configuration

Telepresence uses a `config.yml` file to store and change certain values. The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.

## Values

The config file currently only supports values for the `timeouts` key; here is an example file:

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
```

Values are all durations, either as a number representing seconds or a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:

|Field|Description|Default|
|---|---|---|
|`agentInstall`|Waiting for Traffic Agent to be installed|2 minutes|
|`apply`|Waiting for a Kubernetes manifest to be applied|1 minute|
|`clusterConnect`|Waiting for cluster to be connected|20 seconds|
|`intercept`|Waiting for an intercept to become active|5 seconds|
|`proxyDial`|Waiting for an outbound connection to be established|5 seconds|
|`trafficManagerConnect`|Waiting for the Traffic Manager API to connect for port forwards|20 seconds|
|`trafficManagerAPI`|Waiting for connection to the gRPC API after `trafficManagerConnect` is successful|5 seconds|

diff --git a/docs/telepresence/2.2/reference/dns.md b/docs/telepresence/2.2/reference/dns.md new file mode 100644 index 000000000..bdae98d6e --- /dev/null +++ b/docs/telepresence/2.2/reference/dns.md @@ -0,0 +1,68 @@
# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in those namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://)

$ telepresence list

  verylargejavaservice : ready to intercept (traffic-agent not yet installed)
  dataprocessingservice: ready to intercept (traffic-agent not yet installed)
  verylargedatastore   : ready to intercept (traffic-agent not yet installed)

$ curl verylargejavaservice:8080

  curl: (6) Could not resolve host: verylargejavaservice

```

This is expected, as Telepresence cannot reach the service by its short name without an active intercept in that namespace.

```
$ curl verylargejavaservice.default:8080

  Welcome to the EdgyCorp WebApp
  ...
```

Using the namespace-qualified DNS name, though, does work.

Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept dataprocessingservice --port 3000

  Using Deployment dataprocessingservice
  intercepted
     Intercept name: dataprocessingservice
     State          : ACTIVE
     Workload kind  : Deployment
     Destination    : 127.0.0.1:3000
     Intercepting   : all TCP connections

$ curl verylargejavaservice:8080

  Welcome to the EdgyCorp WebApp
  ...
```

Now curling that service by its short name works, and it will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

diff --git a/docs/telepresence/2.2/reference/docker-run.md b/docs/telepresence/2.2/reference/docker-run.md new file mode 100644 index 000000000..2262f0a55 --- /dev/null +++ b/docs/telepresence/2.2/reference/docker-run.md @@ -0,0 +1,31 @@
---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept <service-name> --port <port> --docker-run -- <image>`

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container:

`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`

## Ports

The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container ports can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+ +- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces +- `--env-file ` Loads the intercepted environment +- `--name intercept--` Names the Docker container, this flag is omitted if explicitly given on the command line +- `-p ` The local port for the intercept and the container port +- `-v ` Volume mount specification, see CLI help for `--mount` and `--docker-mount` flags for more info diff --git a/docs/telepresence/2.2/reference/environment.md b/docs/telepresence/2.2/reference/environment.md new file mode 100644 index 000000000..b5a799cce --- /dev/null +++ b/docs/telepresence/2.2/reference/environment.md @@ -0,0 +1,28 @@ +--- +description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop." +--- + +# Environment variables + +Telepresence can import environment variables from the cluster pod when running an intercept. +You can then use these variables with the code running on your laptop of the service being intercepted. + +There are three options available to do this: + +1. `telepresence intercept [service] --port [port] --env-file=FILENAME` + + This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information. + +2. `telepresence intercept [service] --port [port] --env-json=FILENAME` + + This will write the environment variables to a JSON file. This file can be injected into other build processes. + +3. `telepresence intercept [service] --port [port] -- [COMMAND]` + + This will run a command locally with the pod's environment variables set on your laptop. Once the command quits the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]` to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means. + + Another use would be running a subshell, Bash for example: + + `telepresence intercept [service] --port [port] -- /bin/bash` + + This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod. diff --git a/docs/telepresence/2.2/reference/intercepts.md b/docs/telepresence/2.2/reference/intercepts.md new file mode 100644 index 000000000..1fa0f1876 --- /dev/null +++ b/docs/telepresence/2.2/reference/intercepts.md @@ -0,0 +1,127 @@ +import Alert from '@material-ui/lab/Alert'; + +# Intercepts + +## Intercept behavior when logged into Ambassador Cloud + +After logging into Ambassador Cloud (with `telepresence login`), Telepresence will default to `--preview-url=true`, which will use Ambassador Cloud to create a sharable preview URL for this intercept. (Creating an intercept without logging in will default to `--preview-url=false`). + +In order to do this, it will prompt you for four options. For the first, `Ingress`, Telepresence tries to intelligently determine the ingress controller deployment and namespace for you. If they are correct, you can hit `enter` to accept the defaults. Set the next two options, `TLS` and `Port`, appropriately based on your ingress service. The fourth is a hostname for the service, if required by your ingress. 
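For reference, the prompt flow looks like the following; the service name and the answers shown here are illustrative placeholders, following the transcript format used in the quick start guides:

```
$ telepresence intercept example-service --port 8080

To create a preview URL, telepresence needs to know how cluster
ingress works for this service.  Please Select the ingress to use.

1/4: What's your ingress' layer 3 (IP) address?
     You may use an IP address or a DNS name (this is usually a
     "service.namespace" DNS name).

       [no default]: ambassador.ambassador

2/4: What's your ingress' layer 4 address (TCP port number)?

       [no default]: 443

3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

       [default: n]: y

4/4: If required by your ingress, specify a different layer 5 hostname
     (TLS-SNI, HTTP "Host" header) to access this service.

       [default: ambassador.ambassador]:
```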
Also because you're logged in, Telepresence will default to `--mechanism=http --http-match=auto` (or just `--http-match=auto`; `--http-match` implies `--mechanism=http`). If you hadn't been logged in, it would have defaulted to `--mechanism=tcp`. This tells it to do smart intercepts and only intercept a subset of HTTP requests, rather than intercepting the entirety of all TCP connections. This is important for working in a shared cluster with teammates, and is important for the preview URL functionality. See `telepresence intercept --help` for information on using `--http-match` to customize which requests it intercepts.

## Supported workloads

Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting Deployments, ReplicaSets, and StatefulSets.

While many of our examples may use Deployments, they would also work on ReplicaSets and StatefulSets.

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the `--namespace` option. When this option is used, and `--workload` is not used, then the given name is interpreted as the name of the workload and the name of the intercept will be constructed from that name and the namespace.

```
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept `hello-myns`. In order to remove the intercept, you will need to run `telepresence leave hello-myns` instead of just `telepresence leave hello`.

The name of the intercept will be left unchanged if the workload is specified.

```
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is being intercepted; see [this doc](../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged into Ambassador Cloud, the following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.

```
telepresence intercept <service-name> --port=<port>
```

If you *are* logged into Ambassador Cloud, setting the `preview-url` flag to `false` is necessary.

```
telepresence intercept <service-name> --port=<port> --preview-url=false
```

This will output a header that you can set on your request for that traffic to be intercepted:

```
$ telepresence intercept <service-name> --port=<port> --preview-url=false
Using Deployment <service-name>
intercepted
   Intercept name: <intercept-name>
   State          : ACTIVE
   Workload kind  : Deployment
   Destination    : 127.0.0.1:<port>
   Intercepting   : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<intercept-name>")
```

Run `telepresence status` to see the list of active intercepts.

```
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: @
```

Finally, run `telepresence leave <intercept-name>` to stop the intercept.
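Earlier we noted that logged-in intercepts default to `--http-match=auto`. As a sketch of customizing that match (the header name and value here are hypothetical; see `telepresence intercept --help` for the authoritative flag syntax), you could limit an intercept to requests that carry your own header, then send a matching request through the cluster:

```
$ telepresence intercept dataprocessingservice --port 3000 --http-match=x-dev-user=alice

$ curl -H "x-dev-user: alice" verylargejavaservice.default:8080
```

Requests without the matching header continue on to the service running in the cluster.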
## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use `kubectl describe` on your service or look in the object's YAML. For more information on multiple ports, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services).

```
$ telepresence intercept <service-name> --port=<local-port>:<service-port-identifier>
Using Deployment <deployment-name>
intercepted
   Intercept name         : <intercept-name>
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:<local-port>
   Service Port Identifier: <service-port-identifier>
   Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create a new intercept the same way you did above, and it will change which service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept. But if you use something like [Argo](../../../../argo/latest/), it uses two services (that use the same labels) to manage traffic between a canary and a stable service.

Fortunately, if you know which service you want to use when intercepting a workload, you can use the `--service` flag. So in the Argo example above, if you wanted to use the `echo-stable` service when intercepting your workload, your command would look like this:

```
$ telepresence intercept echo-rollout-<generated-hash> --port <port> --service echo-stable
Using ReplicaSet echo-rollout-<generated-hash>
intercepted
   Intercept name    : echo-rollout-<generated-hash>
   State             : ACTIVE
   Workload kind     : ReplicaSet
   Destination       : 127.0.0.1:3000
   Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
   Intercepting      : all TCP connections
```

diff --git a/docs/telepresence/2.2/reference/linkerd.md b/docs/telepresence/2.2/reference/linkerd.md new file mode 100644 index 000000000..7b184cb4f --- /dev/null +++ b/docs/telepresence/2.2/reference/linkerd.md @@ -0,0 +1,75 @@
---
Description: "How to get Linkerd meshed services working with Telepresence"
---

# Using Telepresence with Linkerd

## Introduction
Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-outbound-ports: "8081,8022,6000-7999"
```

The Traffic Agent uses port 8081 for its API, 8022 for SSHFS, and 6001 for the actual tunnel between the Traffic Manager and the local system. Telling Linkerd to skip these ports allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.

## Prerequisites
1. [Telepresence binary](../../install)
2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
3. Kubectl
4. [Working ingress controller](../../../../edge-stack/latest/howtos/linkerd2)

## Deploy
Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+ +```yaml +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: quote +spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + annotations: + linkerd.io/inject: "enabled" + config.linkerd.io/skip-outbound-ports: "8081,8022,6001" + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:0.4.1 + ports: + - name: http + containerPort: 8000 + env: + - name: PORT + value: "8000" + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +## Connect to Telepresence +Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`: + +``` +$ telepresence list + + quote: ready to intercept (traffic-agent not yet installed) +``` + +## Run the intercept +Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service. diff --git a/docs/telepresence/2.2/reference/rbac.md b/docs/telepresence/2.2/reference/rbac.md new file mode 100644 index 000000000..c6bb90282 --- /dev/null +++ b/docs/telepresence/2.2/reference/rbac.md @@ -0,0 +1,199 @@ +import Alert from '@material-ui/lab/Alert'; + +# Telepresence RBAC +The intention of this document is to provide a template for securing and limiting the permissions of Telepresence. +This documentation will not cover the full extent of permissions necessary to administrate Telepresence components in a cluster. [Telepresence administration](/products/telepresence/) requires permissions for creating Service Accounts, ClusterRoles and ClusterRoleBindings, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator. + +There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator described above. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources. + +In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes which is outside of the scope of the document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page. + +## Requirements + +- Kubernetes version 1.16+ +- Cluster admin privileges to apply RBAC + +## Editing your kubeconfig + +This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (i.e. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. 
After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in your current directory that looks like the following:​ + +```yaml +apiVersion: v1 +kind: Config +clusters: +- name: my-cluster # Must match the cluster value in the contexts config + cluster: + ## The cluster field is highly cloud dependent. +contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate value + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +- apiGroups: + - "" + resources: ["pods"] + verbs: ["get", "list", "create", "watch", "delete"] +- apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "watch", "update"] +- apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] +- apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update"] +- apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] +- apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list", "watch"] +- apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cluster where users are constrained to a specific namespace(s). 
The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user                    # Update value for appropriate user name
  namespace: ambassador            # Traffic-Manager is deployed to Ambassador namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
rules:
- apiGroups:
  - ""
  resources: ["pods"]
  verbs: ["get", "list", "create", "watch", "delete"]
- apiGroups:
  - ""
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups:
  - ""
  resources: ["pods/portforward"]
  verbs: ["create"]
- apiGroups:
  - "apps"
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "update"]
- apiGroups:
  - "getambassador.io"
  resources: ["hosts", "mappings"]
  verbs: ["*"]
- apiGroups:
  - ""
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding                  # RBAC to access ambassador namespace
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: t2-ambassador-binding
  namespace: ambassador
subjects:
- kind: ServiceAccount
  name: tp-user                    # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding                  # RoleBinding for the namespace to be intercepted
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-test-binding  # Update "test" for appropriate namespace to be intercepted
  namespace: test                  # Update "test" for appropriate namespace to be intercepted
subjects:
- kind: ServiceAccount
  name: tp-user                    # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-namespace-role
rules:
- apiGroups:
  - ""
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-namespace-binding
subjects:
- kind: ServiceAccount
  name: tp-user                    # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-namespace-role
  apiGroup: rbac.authorization.k8s.io
```

diff --git a/docs/telepresence/2.2/reference/volume.md b/docs/telepresence/2.2/reference/volume.md new file mode 100644 index 000000000..2e0e8bc5f --- /dev/null +++ b/docs/telepresence/2.2/reference/volume.md @@ -0,0 +1,36 @@
# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.

```
telepresence intercept <service-name> --port <port> --mount=/tmp/ -- /bin/bash
```

In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.

Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or via the `$TELEPRESENCE_ROOT` variable.
```
$ telepresence intercept <service-name> --port <port> --mount=true -- /bin/bash
Using Deployment <service-name>
intercepted
   Intercept name    : <intercept-name>
   State             : ACTIVE
   Workload kind     : Deployment
   Destination       : 127.0.0.1:<port>
   Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
   Intercepting      : all TCP connections

bash-3.2$ echo $TELEPRESENCE_ROOT
/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
```

`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.

With either method, the code you run locally (either from the subshell or from the intercept command) will need to be prepended with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.

For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.

If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable.

diff --git a/docs/telepresence/2.2/troubleshooting/index.md b/docs/telepresence/2.2/troubleshooting/index.md new file mode 100644 index 000000000..8c6374bfe --- /dev/null +++ b/docs/telepresence/2.2/troubleshooting/index.md @@ -0,0 +1,41 @@
---
description: "Troubleshooting issues related to Telepresence."
---
# Troubleshooting

## Creating an intercept did not generate a preview URL

Preview URLs are only generated when you are logged into Ambassador Cloud, so that you can use it to manage all your preview URLs. When not logged in, the intercept will not generate a preview URL and will proxy all traffic. Remove the intercept with `telepresence leave [deployment name]`, run `telepresence login` to log in to Ambassador Cloud, then recreate the intercept. See the [intercepts how-to doc](../howtos/intercepts) for more details.

## Error on accessing preview URL: `First record does not look like a TLS handshake`

The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.

## Error on accessing preview URL: Detected a 301 Redirect Loop

If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.

## Your GitHub organization isn't listed

Ambassador Cloud needs access granted to your GitHub organization as a third-party OAuth app. If an org isn't listed during login, then the correct access has not been granted.

The quickest way to resolve this is to go to the **GitHub menu** → **Settings** → **Applications** → **Authorized OAuth Apps** → **Ambassador Labs**. An org owner will have a **Grant** button; anyone who is not an owner will have a **Request** button, which sends an email to the owner. If an access request has been denied in the past, the user will not see the **Request** button; they will have to reach out to the owner.
Once access is granted, log out of Ambassador Cloud and log back in; you should then see the GitHub org listed.

The org owner can go to the **GitHub menu** → **Your organizations** → **[org name]** → **Settings** → **Third-party access** to see if Ambassador Labs has access already or authorize a request for access (only owners will see **Settings** on the org page). Clicking the pencil icon will show the permissions that were granted.

GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).

### Granting or requesting access on initial login

When using GitHub as your identity provider, the first time you log in to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to access your orgs and certain user data.

Any listed org with a green check has already granted access to Ambassador Labs (you still need to authorize to allow Ambassador Labs to read your user data and org membership).

Any org with a red X requires access to be granted to Ambassador Labs. Owners of the org will see a **Grant** button. Anyone who is not an owner will see a **Request** button. This will send an email to the org owner requesting approval to access the org. If an access request has been denied in the past, the user will not see the **Request** button; they will have to reach out to the owner.

Once approval is granted, you will have to log out of Ambassador Cloud then back in to select the org.

diff --git a/docs/telepresence/2.2/versions.yml b/docs/telepresence/2.2/versions.yml new file mode 100644 index 000000000..620baeb07 --- /dev/null +++ b/docs/telepresence/2.2/versions.yml @@ -0,0 +1,5 @@
version: "2.2.2"
dlVersion: "2.2.2"
docsVersion: "2.2"
branch: release/v2
productName: "Telepresence"

diff --git a/docs/telepresence/2.3 b/docs/telepresence/2.3 deleted file mode 120000 index d7b96030a..000000000 --- a/docs/telepresence/2.3 +++ /dev/null @@ -1 +0,0 @@ -../../../docs/telepresence/v2.3 \ No newline at end of file

diff --git a/docs/telepresence/2.3/community.md b/docs/telepresence/2.3/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence/2.3/community.md @@ -0,0 +1,12 @@
# Community

## Contributor's guide
Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) on GitHub to learn how you can help make Telepresence better.

## Changelog
Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) describes new features, bug fixes, and updates to each version of Telepresence.

## Meetings
Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.

diff --git a/docs/telepresence/2.3/concepts/context-prop.md b/docs/telepresence/2.3/concepts/context-prop.md new file mode 100644 index 000000000..4ec09396f --- /dev/null +++ b/docs/telepresence/2.3/concepts/context-prop.md @@ -0,0 +1,25 @@
# Context propagation

**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system.
Telepresence uses context propagation to intelligently route requests to the appropriate destination.

This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers, which is why context propagation is often referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.

Metadata propagation means that services and other middleware pass these headers along rather than stripping them away. This propagation lets the injected context travel to downstream services and processes.

## What is distributed tracing?

Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key tool for debugging.

In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.

An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request's entire path from origin to destination to reply, gathering data on the routes the requests follow and on performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which makes it possible to maintain the headers through every service without them being stripped (the propagation).

## What are intercepts and preview URLs?

[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.

Telepresence also uses custom headers and header propagation, but for controllable intercepts and preview URLs rather than for tracing. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer's machine.

Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. diff --git a/docs/telepresence/2.3/concepts/devloop.md b/docs/telepresence/2.3/concepts/devloop.md new file mode 100644 index 000000000..8b1fbf354 --- /dev/null +++ b/docs/telepresence/2.3/concepts/devloop.md @@ -0,0 +1,50 @@ +# The developer experience and the inner dev loop

## How is the developer experience changing?

The developer experience is the workflow a developer uses to develop, test, deploy, and release software.

Typically this experience has consisted of both an inner dev loop and an outer dev loop.
The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.

The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a progressive delivery strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.

Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.

Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.

## What is the inner dev loop?

The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.

Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.

In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.

![traditional inner dev loop](../../images/trad-inner-dev-loop.png)

## In search of lost time: How does containerization change the inner dev loop?

The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.

Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized.
The _containerized_ inner dev loop requires a number of new steps:

* packaging code in containers
* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
* pushing the container to the registry
* deploying containers in Kubernetes

Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is increased to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme, that's roughly a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.

![container inner dev loop](../../images/container-inner-dev-loop.png)

## Tackling the slow inner dev loop

A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.

For example:

* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.

New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/2.3/concepts/devworkflow.md b/docs/telepresence/2.3/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/2.3/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow

A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.

However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably, this means adopting new workflows and tools to keep that full life cycle moving at full speed. diff --git a/docs/telepresence/2.3/concepts/faster.md b/docs/telepresence/2.3/concepts/faster.md new file mode 100644 index 000000000..b649e4153 --- /dev/null +++ b/docs/telepresence/2.3/concepts/faster.md @@ -0,0 +1,25 @@ +# Making the remote local: Faster feedback, collaboration and debugging

With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.

## How should I set up a Kubernetes development environment?

[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.

While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should make it easy to code and test in conditions where a service can access the resources it depends on.

A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.

## What is Telepresence?

Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while removing the need to worry about other service dependencies.

## How can I get fast, efficient local development?

The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.

A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows.
Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.3/doc-links.yml b/docs/telepresence/2.3/doc-links.yml new file mode 100644 index 000000000..376664559 --- /dev/null +++ b/docs/telepresence/2.3/doc-links.yml @@ -0,0 +1,73 @@ + - title: Quick start + link: quick-start + - title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ + - title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: 'Making the remote local: Faster feedback, collaboration and debugging' + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: How do I... + items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Send requests to an intercepted service + link: howtos/request + - title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts + - title: Volume mounts + link: reference/volume + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd + - title: FAQs + link: faqs + - title: Troubleshooting + link: troubleshooting + - title: Community + link: community + - title: Release Notes + link: release-notes diff --git a/docs/telepresence/2.3/faqs.md b/docs/telepresence/2.3/faqs.md new file mode 100644 index 000000000..76ea93076 --- /dev/null +++ b/docs/telepresence/2.3/faqs.md @@ -0,0 +1,124 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." 
---

# FAQs

**Why Telepresence?**

Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.

Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.

Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.

You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.

By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.

**What operating systems does Telepresence work on?**

Telepresence currently works natively on macOS and Linux. We are working on a native Windows port, but in the meantime, Windows users can use Telepresence with WSL 2.

**What protocols can be intercepted by Telepresence?**

All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:

- REST
- JSON/XML over HTTP
- gRPC
- GraphQL

If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.

**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**

Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.

**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**

Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.

**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**

Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name.

This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.

If you create an intercept for a service in a namespace, you will be able to use the service name directly.

This means if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
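For example, assuming you are connected to a cluster running a `web-app` service in the `emojivoto` namespace (illustrative names; any service and namespace of yours would behave the same way), the lookups would work like this:

```
$ telepresence connect

# The namespace-qualified name always resolves while connected:
$ curl web-app.emojivoto:8080/mypath

# After intercepting a service in that namespace, the unqualified
# name resolves as well:
$ telepresence intercept web-app --namespace emojivoto --port 8080
$ curl web-app:8080/mypath
```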
You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.

**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**

You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.

**What types of ingress does Telepresence support for the preview URL functionality?**

The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.

During first use, Telepresence will prompt for this information, make its best guess at the correct values, and ask you to confirm or update them.

**Why are my intercepts still reporting as active when they've been disconnected?**

In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not; such intercepts will be garbage collected after a period of time.

**Why is my intercept associated with an "Unreported" cluster?**

An intercept tagged with an "Unreported" cluster simply means that Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.

**Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**

Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.

The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.

**Why does running Telepresence require sudo access for the local daemon?**

The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.

On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access.

**What components get installed in the cluster when running Telepresence?**

A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.

A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.

**How can I remove all of the Telepresence components installed within my cluster?**

You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.

Running this command will also stop the local daemon.

**What language is Telepresence written in?**

All components of Telepresence, both the client application and the cluster components, are written in Go.
**How does Telepresence connect and tunnel into the Kubernetes cluster?**

The connection between your laptop and cluster is established by using the `kubectl port-forward` machinery (though without actually spawning a separate program) to establish a TCP connection to the Telepresence Traffic Manager in the cluster, and running Telepresence's custom VPN protocol over that TCP connection.

**What identity providers are supported for authenticating to view a preview URL?**

* GitHub
* GitLab
* Google

More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team so that we can prioritize them.

**Is Telepresence open source?**

Telepresence will be open source soon; in the meantime, it is free to download. We prioritized releasing the binary as soon as possible for community feedback, but are actively working on the open sourcing logistics.

**How do I share my feedback on Telepresence?**

Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.3/howtos/intercepts.md b/docs/telepresence/2.3/howtos/intercepts.md new file mode 100644 index 000000000..e2536d1b1 --- /dev/null +++ b/docs/telepresence/2.3/howtos/intercepts.md @@ -0,0 +1,298 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- +
import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from '../quick-start/qs-cards'

# Intercept a service in your own environment
<div class="docs-article-toc">
<h3>Contents</h3>

* [Prerequisites](#prerequisites)
* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
* [2. Test Telepresence](#2-test-telepresence)
* [3. Intercept your service](#3-intercept-your-service)
* [4. Create a preview URL to only intercept certain requests to your service](#4-create-a-preview-url-to-only-intercept-certain-requests-to-your-service)
* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
</div>

For a detailed walk-through on creating intercepts using our sample app, follow the quick start guide.

## Prerequisites
You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
and set up
([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
 [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
 [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
to use a Kubernetes cluster, preferably an empty test cluster. This
document uses `kubectl` in all example commands, but OpenShift
users should have no problem substituting in the `oc` command instead.

If you have used Telepresence previously, please first reset your Telepresence deployment with:
`telepresence uninstall --everything`.

This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.

## 1. Install the Telepresence CLI

```shell
# Install via brew:
brew install datawire/blackbird/telepresence

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

## 2. Test Telepresence

Telepresence connects your local workstation to a remote Kubernetes cluster.

1. Connect to the cluster:
   `telepresence connect`

   ```
   $ telepresence connect

   Launching Telepresence Daemon
   ...
   Connected to context default (https://<cluster public IP>)
   ```

   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open **System Preferences → Security & Privacy → General**. Click **Open Anyway** at the bottom to bypass the security block. Then retry the `telepresence connect` command.
2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
   `curl -ik https://kubernetes.default`

   Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed.

   ```
   $ curl -ik https://kubernetes.default

   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...

   ```

   The 401 response is expected. What's important is that you were able to contact the API.

   Congratulations! You’ve just accessed your remote Kubernetes API server as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.

## 3. Intercept your service

In this section, we will go through the steps required for you to intercept all traffic going to a service in your cluster and route it to your local environment instead.

1. List the services that you can intercept with `telepresence list` and make sure the one you want to intercept is listed.

   For example, this would confirm that `example-service` can be intercepted by Telepresence:
   ```
   $ telepresence list

   ...
   example-service: ready to intercept (traffic-agent not yet installed)
   ...
   ```

2. Get the name of the port you want to intercept on your service:
   `kubectl get service <service-name> --output yaml`.

   For example, this would show that the port `80` is named `http` in the `example-service`:

   ```
   $ kubectl get service example-service --output yaml

   ...
   ports:
   - name: http
     port: 80
     protocol: TCP
     targetPort: http
   ...
   ```

3. Intercept all traffic going to the service in your cluster:
   `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.

   - For the `--port` argument, specify the port on which your local instance of your service will be running.
   - If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon.
   - For the `--env-file` argument, specify the path to a file to which Telepresence should write the environment variables that your service is currently running with. This is going to be useful as we start our service.

   For the example below, Telepresence will intercept traffic going to service `example-service` so that requests reaching it on port `http` in the cluster get routed to port `8080` on the workstation, and will write the environment variables of the service to `~/example-service-intercept.env`.

   ```
   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env

   Using Deployment example-service
   intercepted
   Intercept name: example-service
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:8080
   Intercepting : all TCP connections
   ```

4. Start your local environment using the environment variables retrieved in the previous step.

   Here are a few options to pass the environment variables to your local process:
   - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file)
   - with a JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.), use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile)
   - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration
5. Query the environment in which you intercepted a service the way you usually would, and see your local instance being invoked.

   Didn't work? Make sure the port you're listening on matches the one specified when creating your intercept.

   Congratulations! All the traffic usually going to your Kubernetes Service is now being routed to your local environment!

You can now:
- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
- Query services only exposed in your cluster's network.
- Set breakpoints in your IDE to investigate bugs.

## 4. Create a preview URL to only intercept certain requests to your service

When working on a development environment with multiple engineers, you
don't want your intercepts to impact your teammates. If you are
[logged in](../../reference/client/login/), then when creating an
intercept, by default Telepresence will automatically talk to
Ambassador Cloud to generate a preview URL. By doing so, Telepresence
can route only the requests coming from that preview URL to your local
environment; the rest will be routed to your cluster as usual.

1. Clean up your previous intercept by removing it:
`telepresence leave <name of intercept>`

2. Log in to Ambassador Cloud, a web interface for managing and
   sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`

   You will be asked for the following information:
   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `<service name>.<namespace>`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), this is the value you would need to enter here.

   Telepresence supports any ingress controller, not just Ambassador Edge Stack.

   For the example below, you will create a preview URL that will send traffic to the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and setting the `Host` HTTP header to `dev-environment.edgestack.me`:

   ```
   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Confirm the ingress to use.

   1/4: What's your ingress' layer 3 (IP) address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

   [default: -]: ambassador.ambassador

   2/4: What's your ingress' layer 4 address (TCP port number)?

   [default: -]: 443

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
   [default: n]: y

   4/4: If required by your ingress, specify a different layer 5 hostname
        (TLS-SNI, HTTP "Host" header) to access this service.

   [default: ambassador.ambassador]: dev-environment.edgestack.me

   Using Deployment example-service
   intercepted
   Intercept name : example-service
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:8080
   Service Port Identifier: http
   Intercepting : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
   Preview URL : https://<random-domain-name>.preview.edgestack.me
   Layer 5 Hostname : dev-environment.edgestack.me
   ```

4. Start your local service as in the previous step.

5. Go to the preview URL printed after doing the intercept and see that your local service is processing the request.

   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.

6. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal.

   Congratulations! You have now only intercepted traffic coming from your preview URL, without impacting your teammates.

You can now:
- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
- Query services only exposed in your cluster's network.
- Set breakpoints in your IDE to investigate bugs.

...and all of this without impacting your teammates!
## What's Next?

diff --git a/docs/telepresence/2.3/howtos/outbound.md b/docs/telepresence/2.3/howtos/outbound.md new file mode 100644 index 000000000..318300327 --- /dev/null +++ b/docs/telepresence/2.3/howtos/outbound.md @@ -0,0 +1,99 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- +
import Alert from '@material-ui/lab/Alert';

# Proxy outbound traffic to my cluster

While preview URLs are a powerful feature, there are other ways to use Telepresence for proxying traffic between your laptop and the cluster.

We'll assume below that you have the quick start sample web app running in your cluster so that we can test accessing the web-app service. However, you can substitute any service you are running for it.

## Proxying outbound traffic

Connecting to the cluster instead of running an intercept will allow you to access cluster workloads as if your laptop was another pod in the cluster. You will be able to access other Kubernetes services using `<service name>.<namespace>`, for example by curling a service from your terminal. A service running on your laptop will also be able to interact with other services on the cluster by name.

Connecting to the cluster starts the background daemon on your machine and installs the [Traffic Manager pod](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.

1. Run `telepresence connect`; you will be prompted for your password to run the daemon.
   ```
   $ telepresence connect
   Launching Telepresence Daemon v2.3.7 (api v3)
   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
   [sudo] password:
   Connecting to traffic manager...
   Connected to context default (https://<cluster public IP>)
   ```

1. Run `telepresence status` to confirm that you are connected to your cluster and are proxying traffic to it.

   ```
   $ telepresence status
   Root Daemon: Running
     Version : v2.3.7 (api 3)
     Primary DNS : ""
     Fallback DNS: ""
   User Daemon: Running
     Version : v2.3.7 (api 3)
     Ambassador Cloud : Logged out
     Status : Connected
     Kubernetes server : https://<cluster public IP>
     Kubernetes context: default
     Telepresence proxy: ON (networking to the cluster is enabled)
     Intercepts : 0 total
   ```

1. Now try to access your service by name with `curl web-app.emojivoto:80`. Telepresence will route the request to the cluster, as if your laptop is actually running in the cluster.

   ```
   $ curl web-app.emojivoto:80

   Emoji Vote
   ...
   ```

1. Terminate the client with `telepresence quit` and try to access the service again; it will fail because traffic is no longer being proxied from your laptop.

   ```
   $ telepresence quit
   Telepresence Daemon quitting...done
   ```

When using Telepresence in this way, services must be accessed with the namespace qualified DNS name (<service name>.<namespace>) before starting an intercept. After starting an intercept, only <service name> is required. Read more about these differences in DNS resolution here.

## Controlling outbound connectivity

By default, Telepresence will provide access to all Services found in all namespaces in the connected cluster. This might lead to problems if the user does not have access permissions to all namespaces via RBAC. The `--mapped-namespaces <comma-separated list of namespaces>` flag was added to give the user control over exactly which namespaces will be accessible.

When using this option, it is important to include all namespaces containing services to be accessed, and also all namespaces that contain services that those intercepted services might use.

### Using local-only intercepts

An intercept with the flag `--local-only` can be used to control outbound connectivity to specific namespaces.

When developing services that have not yet been deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service is intended to be deployed, so that it can access other services in that namespace without using qualified names. It is worth noting, though, that a local-only intercept will not cause outbound connections to originate from the intercepted namespace. Only a real intercept can do that. The reason for this is that in order to establish correct origin, the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections will originate from the `traffic-manager`.

   ```
   $ telepresence intercept <name of intercept> --namespace <namespace> --local-only
   ```
The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. The intercept is deactivated just like any other intercept.

   ```
   $ telepresence leave <name of intercept>
   ```
The unqualified name access is now removed, provided that no other active intercept is using the same namespace.

### External dependencies (formerly `--also-proxy`)

If you have a resource outside of the cluster that you need access to, you can leverage Headless Services to provide access.
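For example, here is a minimal sketch of the DNS-name case described below, using a hypothetical external database at `db.example.com` (all names here are illustrative, not part of any real cluster):

```yaml
# An ExternalName service: in-cluster lookups of
# my-service.prod.svc.cluster.local resolve to db.example.com.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: db.example.com
```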
This will give you a Kubernetes service, formatted like all other services (`my-service.prod.svc.cluster.local`), that resolves to your resource.

If the outside service has a DNS name, you can use the [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) service type, as in the sketch above.

If the outside service is an IP address, create a [service without selectors](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors) and then create an Endpoints object of the same name.

In both scenarios, Kubernetes will create a service that can be used from within your cluster and from your local machine when connected with Telepresence. diff --git a/docs/telepresence/2.3/howtos/preview-urls.md b/docs/telepresence/2.3/howtos/preview-urls.md new file mode 100644 index 000000000..4415a5445 --- /dev/null +++ b/docs/telepresence/2.3/howtos/preview-urls.md @@ -0,0 +1,166 @@ +--- +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- +
import Alert from '@material-ui/lab/Alert';

# Share dev environments with preview URLs

Telepresence can generate sharable preview URLs, allowing you to work on a copy of your service locally and share that environment directly with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment; requests to the ingress will be routed to your cluster as usual.

Preview URLs are protected behind authentication via Ambassador Cloud, ensuring that only users in your organization can view them. A preview URL can also be set to allow public access for sharing with outside collaborators.

## Prerequisites

* You should have the Telepresence CLI [installed](../../install/) on your laptop.

* If you have Telepresence already installed and have used it previously, please first reset it with `telepresence uninstall --everything`.

* You will need a service running in your cluster that you would like to intercept.

Need a sample app to try with preview URLs? Check out the quick start. It has a multi-service app to install in your cluster with instructions to create a preview URL for that app.

## Creating a preview URL

1. List the services that you can intercept with `telepresence list` and make sure the one you want is listed.

   If it isn't:

   * Only Deployments, ReplicaSets, or StatefulSets are supported, and each of those requires a label matching a Service

   * If the service is in a different namespace, specify it with the `--namespace` flag

2. Log in to Ambassador Cloud where you can manage and share preview
   URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept:
`telepresence intercept <service-name> --port <port> --env-file <path-to-env-file>`

   For `--port`, specify the port on which your local instance of your service will be running. If the service you are intercepting exposes more than one port, specify the one you want to intercept after a colon.
   For `--env-file`, specify a file path where Telepresence will write the environment variables that are set in the Pod. This is going to be useful as we start our service locally.

   You will be asked for the following information:
   1. **Ingress layer 3 address**: This would usually be the internal address of your ingress controller in the format `<service name>.<namespace>`. For example, if you have a service `ambassador-edge-stack` in the `ambassador` namespace, you would enter `ambassador-edge-stack.ambassador`.
   2. **Ingress port**: The port on which your ingress controller is listening (often 80 for non-TLS and 443 for TLS).
   3. **Ingress TLS encryption**: Whether the ingress controller is expecting TLS communication on the specified port.
   4. **Ingress layer 5 hostname**: If your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.

   For the example below, you will create a preview URL for `example-service`, which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:

   ```
   $ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Confirm the ingress to use.

   1/4: What's your ingress' layer 3 (IP) address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

   [default: -]: ambassador.ambassador

   2/4: What's your ingress' layer 4 address (TCP port number)?

   [default: -]: 443

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

   [default: n]: y

   4/4: If required by your ingress, specify a different layer 5 hostname
        (TLS-SNI, HTTP "Host" header) to access this service.

   [default: ambassador.ambassador]: dev-environment.edgestack.me

   Using Deployment example-service
   intercepted
   Intercept name : example-service
   State : ACTIVE
   Destination : 127.0.0.1:8080
   Service Port Identifier: http
   Intercepting : HTTP requests that match all of:
     header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
   Preview URL : https://<random-domain-name>.preview.edgestack.me
   Layer 5 Hostname : dev-environment.edgestack.me
   ```

4. Start your local environment using the environment variables retrieved in the previous step.

   Here are a few options to pass the environment variables to your local process:
   - with `docker run`, provide the path to the file using the [`--env-file` argument](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file)
   - with a JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.), use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile)
   - with Visual Studio Code, specify the path to the environment variables file in the `envFile` field of your configuration

5. Go to the preview URL that was provided after starting the intercept (the next to last line in the terminal output above). Your local service will be processing the request.

   Success! You have intercepted traffic coming from your preview URL without impacting other traffic from your Ingress.

   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
6. Make a request on the URL you would usually query for that environment. The request should **not** be routed to your laptop.

   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) will route to services in the cluster like normal.

7. Share with a teammate.

   You can collaborate with teammates by sending your preview URL to
   them. They will be asked to log in to Ambassador Cloud if they are
   not already. Upon login they must select the same identity
   provider and org as you are using; that is how they are authorized
   to access the preview URL (see the [list of supported identity
   providers](../../faqs/#idps)). When they visit the preview URL,
   they will see the intercepted service running on your laptop.

   Congratulations! You have now created a dev environment and shared it with a teammate! While you and your partner work together to debug your service, the production version remains unchanged for the rest of your team until you commit your changes.

## Sharing a preview URL with people outside your team

To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible:

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
2. Navigate to the desired service Intercepts page
3. Expand the preview URL details
4. Click **Make Publicly Accessible**

Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL, either from the dashboard or by running `telepresence preview remove <intercept name>`, also removes all access to the preview URL.

## Remove a preview URL from an Intercept

To delete a preview URL and remove all access to the intercepted service,

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
2. Navigate to the desired service Intercepts page
3. Expand the preview URL details
4. Click **Remove Preview**

Alternatively, a preview URL can also be removed by running
`telepresence preview remove <intercept name>` diff --git a/docs/telepresence/2.3/howtos/request.md b/docs/telepresence/2.3/howtos/request.md new file mode 100644 index 000000000..56d598fa7 --- /dev/null +++ b/docs/telepresence/2.3/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert';

# Send requests to an intercepted service

Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.

 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
 2. Navigate to the desired service Intercepts page
 3. Click the **Query** button to open the pop-up menu
 4. Toggle between **CURL**, **Headers** and **Browse**
The pre-built queries and header information should help you get started querying the desired intercepted service and managing header propagation. diff --git a/docs/telepresence/2.3/images/container-inner-dev-loop.png b/docs/telepresence/2.3/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.3/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.3/images/github-login.png b/docs/telepresence/2.3/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.3/images/github-login.png differ diff --git a/docs/telepresence/2.3/images/logo.png b/docs/telepresence/2.3/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.3/images/logo.png differ diff --git a/docs/telepresence/2.3/images/trad-inner-dev-loop.png b/docs/telepresence/2.3/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.3/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.3/install/helm.md b/docs/telepresence/2.3/install/helm.md new file mode 100644 index 000000000..603f49e54 --- /dev/null +++ b/docs/telepresence/2.3/install/helm.md @@ -0,0 +1,165 @@ +# Install with Helm

[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.

## Before you begin

The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.

Start by adding this repo to your Helm client with the following command:

```shell
helm repo add datawire https://app.getambassador.io
helm repo update
```

## Install with Helm

When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.

1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:

   ```shell
   kubectl create namespace ambassador
   ```

2. Install the Telepresence Traffic Manager with the following command:

   ```shell
   helm install traffic-manager --namespace ambassador datawire/telepresence
   ```

For more details on what the Helm chart installs and what can be configured, take a look at the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).

### Install into custom namespace

The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
For example, if you wanted to deploy the traffic manager to the `staging` namespace:

```bash
helm install traffic-manager --namespace staging datawire/telepresence
```

Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

See [the kubeconfig documentation](../reference/config#manager) for more information.
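As a quick sanity check (continuing the `staging` example above, and assuming the chart's default resource names), you can confirm where the Traffic Manager landed and that Telepresence finds it:

```shell
# Verify the Traffic Manager deployment exists in the custom namespace:
kubectl get deploy -n staging traffic-manager

# With the kubeconfig extension above in place, connect as usual:
telepresence connect
```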
## RBAC

### Installing a namespace-scoped traffic manager

You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.

For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
To do this, create a `values.yaml` like the following:

```yaml
managerRbac:
  create: true
  namespaced: true
  namespaces:
  - dev
  - staging
```

This can then be installed via:

```bash
helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
```

**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.

#### Namespace collision detection

The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.

So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:

```bash
helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
```

You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:

```bash
helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
```

This would fail with an error:

```
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
```

To fix this error, resolve the overlap by removing `staging` from either the first install or the second.

#### Namespace scoped user permissions

Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.

Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):

```yaml
clientRbac:
  create: true

  # These are the users or groups to which the user rbac will be bound.
  # This MUST be set.
+#### Namespace scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook uses the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
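+
+For example, an operator who wants to render only the RBAC objects for both the manager and its users might run something like the following (a sketch; the value names are the chart options referenced above):
+
+```shell
+# Creates the RBAC-related objects without the rest of the Traffic Manager
+helm install traffic-manager-rbac datawire/telepresence --namespace ambassador \
+  --set rbac.only=true \
+  --set managerRbac.create=true \
+  --set clientRbac.create=true
+```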
diff --git a/docs/telepresence/2.3/install/index.md b/docs/telepresence/2.3/install/index.md
new file mode 100644
index 000000000..aefbab396
--- /dev/null
+++ b/docs/telepresence/2.3/install/index.md
@@ -0,0 +1,91 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS.
+
+
+
+```shell
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+
+
+```
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
diff --git a/docs/telepresence/2.3/install/migrate-from-legacy.md b/docs/telepresence/2.3/install/migrate-from-legacy.md
new file mode 100644
index 000000000..a54937ee5
--- /dev/null
+++ b/docs/telepresence/2.3/install/migrate-from-legacy.md
@@ -0,0 +1,98 @@
+# Migrate from legacy Telepresence
+
+Telepresence (formerly referred to as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges: losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.
+
+Telepresence introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With Telepresence, a sidecar proxy is injected onto the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used. By using the proxy approach, we can also do selective intercepts, where certain types of traffic get routed to the service while other traffic gets routed to your laptop/workstation.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a local Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today, using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                       |
+|---------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                       | intercept $workload                        |
+| --expose localPort[:remotePort]                   | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell           | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd            | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd     | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                       | connect -- bash                            |
+| --run $cmd                                        | connect -- $cmd                            |
+| --env-file, --env-json                            | --env-file, --env-json (unchanged)         |
+| --context, --namespace                            | --context, --namespace (unchanged)         |
+| --mount, --docker-mount                           | --mount, --docker-mount (unchanged)        |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported. For some well-known flags, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent remains. There's no harm in leaving the agent running alongside your service, but when you
+want to remove it from the cluster, the following Telepresence command will help:
+
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent | --all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
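+
+For example, to remove just the agent left behind by the `myserver` swap shown earlier (a usage sketch built from the flags in the help text above):
+
+```
+$ telepresence uninstall --agent myserver
+```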
diff --git a/docs/telepresence/2.3/install/upgrade.md b/docs/telepresence/2.3/install/upgrade.md
new file mode 100644
index 000000000..7a9c3d60d
--- /dev/null
+++ b/docs/telepresence/2.3/install/upgrade.md
@@ -0,0 +1,39 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Upgrade Process
+
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+
+
+```shell
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+After upgrading your CLI, the Traffic Manager **must be uninstalled** from your cluster. This can be done with `telepresence uninstall --everything` or by running `kubectl delete svc,deploy -n ambassador traffic-manager`. The next time you run a `telepresence` command it will deploy an upgraded Traffic Manager.
diff --git a/docs/telepresence/2.3/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.3/quick-start/TelepresenceQuickStartLanding.js
new file mode 100644
index 000000000..3e87c3ad6
--- /dev/null
+++ b/docs/telepresence/2.3/quick-start/TelepresenceQuickStartLanding.js
@@ -0,0 +1,129 @@
+import React from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../src/components/Icon';
+
+import './telepresence-quickstart-landing.less';
+
+/** @type React.FC<React.SVGProps<SVGSVGElement>> */
+const RightArrow = (props) => (
+  <svg viewBox="0 0 24 24" fill="currentColor" xmlns="http://www.w3.org/2000/svg" {...props}>
+    <path d="M12 4l-1.41 1.41L16.17 11H4v2h12.17l-5.58 5.59L12 20l8-8-8-8z" />
+  </svg>
+);
+
+/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */
+const Box = ({ children, color = 'blue', withConnector = false }) => (
+  <>
+    {withConnector && (
+      <div className="connector-container">
+        <span className="connector" />
+      </div>
+    )}
+    <div className={`box-container ${color}`}>{children}</div>
+  </>
+);
+
+const TelepresenceQuickStartLanding = () => (
+  <div className="telepresence-quickstart-landing">
+    <h1>
+      <Icon name="telepresence-icon" className="telepresence-logo" />
+      Telepresence
+    </h1>
+    <p className="subtitle">
+      Explore the use cases of Telepresence with a free remote Kubernetes
+      cluster, or dive right in using your own.
+    </p>
+
+    <div className="demo-cluster-container">
+      <div className="column">
+        <h2 className="column-title">Use Our Free Demo Cluster</h2>
+        <p>
+          See how Telepresence works without having to mess with your
+          production environments.
+        </p>
+        <Box color="blue">
+          <p className="reading-time">6 minutes</p>
+          <h3>Integration Testing</h3>
+          <p>
+            See how changes to a single service impact your entire application
+            without having to run your entire app locally.
+          </p>
+          <a href="demo-node/" className="get-started blue">
+            GET STARTED{' '}
+            <RightArrow width={20} height={20} />
+          </a>
+        </Box>
+        <Box color="blue" withConnector>
+          <p className="reading-time">5 minutes</p>
+          <h3>Fast code changes</h3>
+          <p>
+            Make changes to your service locally and see the results instantly,
+            without waiting for containers to build.
+          </p>
+          <a href="demo-node/" className="get-started blue">
+            GET STARTED{' '}
+            <RightArrow width={20} height={20} />
+          </a>
+        </Box>
+      </div>
+      <div className="column">
+        <h2 className="column-title">Use Your Cluster</h2>
+        <p>
+          Understand how Telepresence fits in to your Kubernetes development
+          workflow.
+        </p>
+        <Box color="green">
+          <p className="reading-time">10 minutes</p>
+          <h3>Intercept your service in your cluster</h3>
+          <p>
+            Query services only exposed in your cluster's network. Make changes
+            and see them instantly in your K8s environment.
+          </p>
+          <a href="go/" className="get-started green">
+            GET STARTED{' '}
+            <RightArrow width={20} height={20} />
+          </a>
+        </Box>
+      </div>
+    </div>
+
+    <div className="video-wrapper">
+      <div className="video-container">
+        <h2 className="video-title">Watch the Demo</h2>
+        <p>
+          See Telepresence in action in our 3-minute demo video that you can
+          share with your teammates.
+        </p>
+        <ul>
+          <li>Instant feedback loops</li>
+          <li>Infinite-scale development environments</li>
+          <li>Access to your favorite local tools</li>
+          <li>Easy collaborative development with teammates</li>
+        </ul>
+      </div>
+      <div className="video">
+        {/* demo video embed */}
+        <Embed />
+      </div>
+    </div>
+  </div>
+);
+
+export default TelepresenceQuickStartLanding;
diff --git a/docs/telepresence/2.3/quick-start/demo-node.md b/docs/telepresence/2.3/quick-start/demo-node.md
new file mode 100644
index 000000000..3b22b5fe2
--- /dev/null
+++ b/docs/telepresence/2.3/quick-start/demo-node.md
@@ -0,0 +1,148 @@
+---
+description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging."
+---
+
+import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata';
+import {
+  EmojivotoServicesList,
+  DCPLink,
+  Login,
+  LoginCommand,
+  DockerCommand,
+  PreviewUrl,
+  DemoClusterMetadataError,
+  ExternalIp,
+  InterceptsLink,
+} from '../../../../../src/components/Docs/Telepresence';
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start
+
+<div class="docs-article-toc">
+<h3>CONTENTS</h3>
+
+* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
+* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
+* [3. Set up your local development environment](#3-set-up-your-local-development-environment)
+* [4. Testing our fix](#4-testing-our-fix)
+* [5. Preview URLs](#5-preview-urls)
+* [6. Visualize and manage your Preview URLs and intercepts](#6-visualize-and-manage-your-preview-urls-and-intercepts)
+* [7. How/Why does this all work](#7-howwhy-does-this-all-work)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+
+In this guide, we'll give you a hands-on tutorial with Telepresence. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally.
+
+If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker.
+
+
+ While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer.
+
+
+
+ Note: This documentation will dynamically update with values once you authenticate to Ambassador Cloud in step one below. If you need help, please join the #telepresence Slack channel.
+
+
+## 1. Get a free remote cluster
+
+Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster.
+
+
+ Already have a cluster? Switch over to a version of this guide that takes you through the same steps using your own cluster.
+
+
+1. Note where you've downloaded the kubeconfig.yaml file; you'll need the location of this file later in this guide.
+
+
+
+
+ The Service Catalog gives you a consolidated view of all your services across development, staging, and production.
+
+
+
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of three services. Test out the application:
+
+1. Go to the <ExternalIp /> and vote for some emojis.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩!
+
+
+ Congratulations! You've successfully accessed the Emojivoto application on your remote cluster.
+
+
+## 3. Set up your local development environment
+
+We'll set up a development environment locally on your workstation. We'll then use Telepresence to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container.
+
+1. Run the Docker container locally. In the command below, replace the path to the `kubeconfig.yaml` with the actual location of the `kubeconfig.yaml` you previously noted in [step 1](#1-get-a-free-remote-cluster):
+
+
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+
+ Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. First, log in to Telepresence using your API key:
+
+
+2. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   You will be asked for your ingress layer 3 address; specify the front end service: `ambassador.ambassador`
+   Then, when asked for the port, type `80`; for "use TLS", type `n`. The default for the fourth value is correct, so hit enter to accept it.
+
+
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs enable this easy collaboration.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+   Now you're able to share your fix in your local environment with your team!
+
+
+ To get more information regarding Preview URLs and intercepts, visit the Developer Control Plane.
+
+
+## 6. Visualize and manage your Preview URLs and intercepts
+
+1. The Developer Control Plane lets you manage & visualize important information about your intercepts. Visit the Developer Control Plane UI to see who's accessed your preview URL.
+
+## 7. How/Why does this all work
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer's machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
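+
+To make the routing concrete, a propagated request might look like the sketch below. The header name matches the intercept header shown in intercept output elsewhere in these docs; the token value is a placeholder, not a real one:
+
+```shell
+# Requests carrying the intercept header are routed to the intercepting laptop;
+# requests without it continue on to the live service in the cluster.
+curl -H 'x-telepresence-intercept-id: <token>:web' http://<your ingress>/
+```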
+
+## What's Next?
+
+
diff --git a/docs/telepresence/2.3/quick-start/demo-react.md b/docs/telepresence/2.3/quick-start/demo-react.md
new file mode 100644
index 000000000..d8e5c4879
--- /dev/null
+++ b/docs/telepresence/2.3/quick-start/demo-react.md
@@ -0,0 +1,257 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import QSCards from './qs-cards';
+import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start - React
+
+<div class="docs-article-toc">
+<h3>CONTENTS</h3>
+
+* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Set up the sample application](#3-set-up-the-sample-application)
+* [4. Test app](#4-test-app)
+* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+
+In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+
+ While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+
+
+
+## 1. Download the demo cluster archive
+
+1. <DownloadDemo />
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+
+ This step will also install some dependency packages onto your laptop using npm. You can see those packages at `ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json`.
+
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME               STATUS   ROLES                  AGE     VERSION
+   tpdemo-prod-1234   Ready    control-plane,master   5d10h   v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet):
+   `telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+
+ macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+
+
+
+ You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+   `telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+   `curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+ The 401 response is expected. What's important is that you were able to contact the API.
+
+
+
+ Congratulations! You've just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you're able to use any tool that you have locally to connect to any service in the cluster.
+
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we'll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+
+1. Clone the emojivoto app:
+   `git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+   `kubectl apply -k emojivoto/kustomize/deployment`
+
+1. Change the kubectl namespace:
+   `kubectl config set-context --current --namespace=emojivoto`
+
+1. List the Services:
+   `kubectl get svc`
+
+   ```
+   $ kubectl get svc
+
+   NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+   emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
+   voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
+   web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
+   web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
+   ```
+
+1. Since you've already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form of `service.namespace`.
+
+
+ Congratulations, you can now access services running in your cluster by name from your laptop!
+
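+
+Because the connection applies to any local tool, the same namespace-qualified name also works from the command line; for example (a quick check, output omitted):
+
+```shell
+# Resolves via Telepresence while `telepresence connect` is active
+curl -i http://web-app.emojivoto
+```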
+## 4. Test app
+
+1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.
+
+1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console:
+   `GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`
+
+
+ Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C.
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨ Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster.
+
+
+ Victory, your local React server is running a-ok!
+
+
+## 6. Make a code change
+
+We've now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+
+ This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+   `telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+
+ The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+
+
+
+ We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/go.md b/docs/telepresence/2.3/quick-start/go.md new file mode 100644 index 000000000..a04ba23a5 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/go.md @@ -0,0 +1,353 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+<div class="docs-article-toc">
+<h3>CONTENTS</h3>
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Go application](#3-install-a-sample-go-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://github.com/pilu/fresh) to support auto reloading of the Go server, which we'll use later. Install it by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+ Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+   `curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+ Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+   `telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend's request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/index.md b/docs/telepresence/2.3/quick-start/index.md new file mode 100644 index 000000000..efcb65b52 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/index.md @@ -0,0 +1,7 @@ +--- + description: Telepresence Quick Start. 
+--- + +import TelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.3/quick-start/qs-cards.js b/docs/telepresence/2.3/quick-start/qs-cards.js new file mode 100644 index 000000000..31582355b --- /dev/null +++ b/docs/telepresence/2.3/quick-start/qs-cards.js @@ -0,0 +1,70 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+    <div className={classes.root}>
+      <Grid container spacing={1}>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <a href="../../howtos/preview-urls/">Collaborating</a>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Use preview URLs to collaborate with your colleagues and others
+              outside of your organization.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <a href="../../howtos/outbound/">Outbound Sessions</a>
+            </Typography>
+            <Typography variant="body2" component="p">
+              While connected to the cluster, your laptop can interact with
+              services as if it was another pod in the cluster.
+            </Typography>
+          </Paper>
+        </Grid>
+        <Grid item xs={4}>
+          <Paper className={classes.paper}>
+            <Typography variant="h6" component="h2">
+              <a href="../../faqs/">FAQs</a>
+            </Typography>
+            <Typography variant="body2" component="p">
+              Learn more about use cases and the technical implementation of
+              Telepresence.
+            </Typography>
+          </Paper>
+        </Grid>
+      </Grid>
+    </div>
+  );
+}
diff --git a/docs/telepresence/2.3/quick-start/qs-go.md b/docs/telepresence/2.3/quick-start/qs-go.md
new file mode 100644
index 000000000..a04ba23a5
--- /dev/null
+++ b/docs/telepresence/2.3/quick-start/qs-go.md
@@ -0,0 +1,353 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Go**
+
+<div class="docs-article-toc">
+<h3>CONTENTS</h3>
+
+* [Prerequisites](#prerequisites)
+* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+* [2. Test Telepresence](#2-test-telepresence)
+* [3. Install a sample Go application](#3-install-a-sample-go-application)
+* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
+* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
+* [6. Make a code change](#6-make-a-code-change)
+* [7. Create a Preview URL](#7-create-a-preview-url)
+* [What's next?](#img-classos-logo-srcimageslogopng-whats-next)
+
+</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://github.com/pilu/fresh) to support auto reloading of the Go server, which we'll use later. Install it by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+ Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it's set to blue:
+   `curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+ Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+
+Next, we'll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+   `telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend's request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+
+We've now set up a local development environment for the DataProcessingService, and we've created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We've just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/qs-java.md b/docs/telepresence/2.3/quick-start/qs-java.md new file mode 100644 index 000000000..0478503c5 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/qs-java.md @@ -0,0 +1,347 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
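+
+The guide asks you to stop and restart the server in step 1 above because plain `mvn spring-boot:run` does not pick up `application.properties` changes on its own. Concretely, that restart loop is just:
+
+```
+# In the terminal running the server: press Ctrl-C to stop it, then start it again:
+$ mvn spring-boot:run
+```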
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/qs-node.md b/docs/telepresence/2.3/quick-start/qs-node.md new file mode 100644 index 000000000..cbc80a649 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/qs-node.md @@ -0,0 +1,341 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
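+
+You can also confirm the reroute without a browser: with the intercept active, curling the data service through its cluster DNS name returns the response from the Node server on your laptop. A sketch, assuming the `dataprocessingservice` Service exposes port 3000 (as the intercept setup suggests):
+
+```
+$ curl dataprocessingservice.default:3000/color
+
+"orange"
+```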
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/qs-python-fastapi.md b/docs/telepresence/2.3/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..9326bdf82 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/qs-python-fastapi.md @@ -0,0 +1,338 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+
FastAPI requires Python 3: `pip3 install fastapi uvicorn requests && python3 app.py`
+(If `pip` and `python` already point to a Python 3 installation on your system, `pip install fastapi uvicorn requests && python app.py` works as well.)
+
+   ```
+   $ pip install fastapi uvicorn requests && python app.py
+
+   Collecting fastapi
+   ...
+   Application startup complete.
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local service is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.3/quick-start/qs-python.md b/docs/telepresence/2.3/quick-start/qs-python.md new file mode 100644 index 000000000..86fcead2d --- /dev/null +++ b/docs/telepresence/2.3/quick-start/qs-python.md @@ -0,0 +1,349 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+   ```
+   $ pip install flask requests && python app.py
+
+   Collecting flask
+   ...
+   Welcome to the DataServiceProcessingPythonService!
+   ...
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL +Create preview URLs to do selective intercepts, meaning only traffic coming from the preview URL will be intercepted, so you can easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.3/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.3/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/2.3/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 
100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/2.3/redirects.yml b/docs/telepresence/2.3/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.3/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.3/reference/architecture.md b/docs/telepresence/2.3/reference/architecture.md new file mode 100644 index 000000000..47facb0b8 --- /dev/null +++ b/docs/telepresence/2.3/reference/architecture.md @@ -0,0 +1,63 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence Daemon, installs the Traffic Manager
+in your cluster, authenticates against Ambassador Cloud, and configures all of those elements to communicate with one
+another.
+
+## Telepresence Daemon
+
+The Telepresence Daemon runs on a developer's workstation and is its main point of communication with the cluster's
+network. All requests from and to the cluster go through the Daemon, which communicates with the Traffic Manager.
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When
+Telepresence is run with either the `connect`, `intercept`, or `list` commands, the Telepresence CLI first checks the
+cluster for the Traffic Manager deployment and creates it if it is missing.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe
+pod `.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have
+accessed them, and deleting them.
+
+# Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions
+to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.3/reference/client.md b/docs/telepresence/2.3/reference/client.md new file mode 100644 index 000000000..930236632 --- /dev/null +++ b/docs/telepresence/2.3/reference/client.md @@ -0,0 +1,25 @@ +--- +description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
+| `status` | Shows the current connectivity status |
+| `quit` | Quits the local daemon, stopping all intercepts and outbound traffic to the cluster |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it with the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. | diff --git a/docs/telepresence/2.3/reference/client/login.md b/docs/telepresence/2.3/reference/client/login.md new file mode 100644 index 000000000..d1d0d8fad --- /dev/null +++ b/docs/telepresence/2.3/reference/client/login.md @@ -0,0 +1,53 @@ +# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud). Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the interactive `telepresence login` procedure
+as necessary, so you rarely need to run `telepresence login` yourself;
+doing so explicitly is only truly necessary when you require a
+non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. 
+If it is not possible to do this (perhaps you are using a headless
+remote box via SSH, or are using Telepresence in CI), then you may
+instead have Ambassador Cloud issue an API key that you pass to
+`telepresence login` with the `--apikey` flag.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/.
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
diff --git a/docs/telepresence/2.3/reference/client/login/apikey-2.png b/docs/telepresence/2.3/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/2.3/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/2.3/reference/client/login/apikey-3.png b/docs/telepresence/2.3/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/2.3/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/2.3/reference/client/login/apikey-4.png b/docs/telepresence/2.3/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/2.3/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/2.3/reference/cluster-config.md b/docs/telepresence/2.3/reference/cluster-config.md
new file mode 100644
index 000000000..a89d8bea2
--- /dev/null
+++ b/docs/telepresence/2.3/reference/cluster-config.md
@@ -0,0 +1,184 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)).
+
+However, some advanced features do require some configuration in the
+cluster.
+
+## TLS
+
+Suppose other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`) you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+     spec:
++      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
+       containers:
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your code expects to handle
+  mTLS itself instead of letting a service-mesh sidecar handle mTLS
+  for it, or the port definition that Telepresence modified pointed
+  at the service-mesh sidecar instead of at your application).
+
+  If you do set this, you should set it to the same client
+  certificate Secret that you configure the Ambassador Edge Stack to
+  use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod.
+
+The Pod will need to have permission to `get` and `watch` each of
+those Secrets.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air gapped cluster
+
+If your cluster is air gapped (it does not have access to the
+internet and therefore cannot connect to Ambassador Cloud), some additional
+configuration is required to acquire a license to use selective intercepts.
+
+### Create a license
+
+1. Go to the licenses page in Ambassador Cloud.
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:
+
+   ```
+   $ telepresence current-cluster-id
+
+   Cluster ID: 
+   ```
+
+4. Click *Generate API Key* to finish generating the license.
+
+### Add license to cluster
+
+1. On the licenses page, download the license file associated with your cluster.
+
+2. Use this command to generate a Kubernetes Secret config using the license file:
+
+   ```
+   $ telepresence license -f 
+
+   apiVersion: v1
+   data:
+     hostDomain: 
+     license: 
+   kind: Secret
+   metadata:
+     creationTimestamp: null
+     name: systema-license
+     namespace: ambassador
+   ```
+
+3. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+4. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.8.0)
+pulled and in a registry your cluster can pull from.
+
+5. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agent (see the example config below).
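+For reference, a client-side `config.yml` for an air-gapped setup might combine the keys mentioned above. This is a minimal sketch, not a definitive configuration; `registry.example.com` is a placeholder for your own internal registry, not a real default:
+
+```yaml
+images:
+  registry: registry.example.com/telepresence     # assumption: your internal registry
+  agentImage: ambassador-telepresence-agent:1.8.0 # the Smart Agent image from step 4
+cloud:
+  skipLogin: true  # tell the CLI not to attempt a login to Ambassador Cloud
+```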
+
+Users will now be able to use selective intercepts with the
+`--preview-url=false` flag (since use of preview URLs requires a connection to Ambassador Cloud).
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet)
+template to add the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your
+cluster so that it reflects the desired state from an external Git repository, this behavior can cause
+your workload to fall out of sync with that external desired state.
+
+To solve this issue, you can use Telepresence's Mutating Webhook alternative mechanism. Intercepted
+workloads will then stay untouched and only the underlying pods will be modified to inject the Traffic
+Agent sidecar container and update the port definitions.
+
+A current limitation of the Mutating Webhook mechanism is that the targetPort of your intercepted
+Service needs to point to the name of a port on your container, not the port number itself.
+
+Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your
+workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: enabled
+     spec:
+       containers:
+```
+
+### Service Port Annotation
+
+A service port annotation can be added to the workload to make the Mutating Webhook select a specific port
+in the service. This is necessary when the service has multiple ports.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
+         telepresence.getambassador.io/inject-traffic-agent: enabled
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
diff --git a/docs/telepresence/2.3/reference/config.md b/docs/telepresence/2.3/reference/config.md
new file mode 100644
index 000000000..e6b3ccb70
--- /dev/null
+++ b/docs/telepresence/2.3/reference/config.md
@@ -0,0 +1,182 @@
+# Laptop-side configuration
+
+## Global Configuration
+Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+grpc:
+  maxReceiveSize: 10Mi
+```
+
+#### Timeouts
+Values for `timeouts` are all durations, either as a number representing seconds or a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).
+
+These are the valid fields for the `timeouts` key:
+
+|Field|Description|Default|
+|---|---|---|
+|`agentInstall`|Waiting for Traffic Agent to be installed|2 minutes|
+|`apply`|Waiting for a Kubernetes manifest to be applied|1 minute|
+|`clusterConnect`|Waiting for cluster to be connected|20 seconds|
+|`intercept`|Waiting for an intercept to become active|5 seconds|
+|`proxyDial`|Waiting for an outbound connection to be established|5 seconds|
+|`trafficManagerConnect`|Waiting for the Traffic Manager API to connect for port forwards|20 seconds|
+|`trafficManagerAPI`|Waiting for connection to the gRPC API after `trafficManagerConnect` is successful|15 seconds|
+
+#### Log Levels
+Values for `logLevels` are one of the following strings: `trace`, `debug`, `info`, `warning`, `error`, `fatal`, and `panic`.
+These are the valid fields for the `logLevels` key:
+
+|Field|Description|Default|
+|---|---|---|
+|`userDaemon`|Logging level to be used by the User Daemon (logs to connector.log)|debug|
+|`rootDaemon`|Logging level to be used for the Root Daemon (logs to daemon.log)|info|
+
+#### Images
+Values for `images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
+
+These are the valid fields for the `images` key:
+
+|Field|Description|Default|
+|---|---|---|
+|`registry`|Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept.|docker.io/datawire|
+|`agentImage`|$registry/$imageName:$imageTag to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if this value is set.*||
+|`webhookRegistry`|The container $registry that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new traffic-manager is deployed.*||
+|`webhookAgentImage`|The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will use when installing the Traffic Agent in annotated pods. *This value is only used if a new traffic-manager is deployed.*||
+
+#### Cloud
+These fields control how the client interacts with the Cloud service.
+Currently there is only one key and it accepts bools: `1`, `t`, `T`, `TRUE`, `true`, `True`, `0`, `f`, `F`, `FALSE`
+
+|Field|Description|Default|
+|---|---|---|
+|`skipLogin`|Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, you must have a [license](../cluster-config/#air-gapped-cluster) installed in the cluster in order to be able to perform selective intercepts |false|
+
+Telepresence attempts to auto-detect if the cluster is air-gapped; if this detection fails,
+be sure to set the `skipLogin` value to `true`.
+
+Reminder: To use selective intercepts, which normally require a login, you
+must have a license in your cluster and specify which agentImage should be installed,
+by also adding the following to your config.yml:
+   ```
+   images:
+     agentImage: /
+   ```
+
+#### Grpc
+The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+## Per-Cluster Configuration
+Some configuration is not global to Telepresence and is actually specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.
+
+### Values
+The current per-cluster configuration supports `dns`, `alsoProxy`, and `manager` keys.
+To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so:
+
+```
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        dns:
+        also-proxy:
+        manager:
+  name: example-cluster
+```
+#### DNS
+The fields for `dns` are: local-ip, remote-ip, exclude-suffixes, include-suffixes, and lookup-timeout.
+
+|Field|Description|Type|Default|
+|---|---|---|---|
+|`local-ip`|The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved|IP|first line of /etc/resolv.conf|
+|`remote-ip`|The address of the cluster's DNS service|IP|IP of the kube-dns.kube-system or the dns-default.openshift-dns service|
+|`exclude-suffixes`|Suffixes for which the DNS resolver will always fail (or fall back in case of the overriding resolver)|list||
+|`include-suffixes`|Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes.|list||
+|`lookup-timeout`|Maximum time to wait for a cluster-side host lookup|duration||
+
+Here is an example kubeconfig:
+```
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        dns:
+          include-suffixes:
+          - .se
+          exclude-suffixes:
+          - .com
+  name: example-cluster
+```
+
+
+#### AlsoProxy
+When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device. All connections to addresses that the subnet spans will be dispatched to the cluster.
+
+Here is an example kubeconfig for the subnet `1.2.3.4/32`:
+```
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        also-proxy:
+        - 1.2.3.4/32
+  name: example-cluster
+```
+
+#### Manager
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
diff --git a/docs/telepresence/2.3/reference/dns.md b/docs/telepresence/2.3/reference/dns.md
new file mode 100644
index 000000000..e38fbc61d
--- /dev/null
+++ b/docs/telepresence/2.3/reference/dns.md
@@ -0,0 +1,75 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in those namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `.`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot yet reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+
+
+    Emoji Vote
+    ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+
+
+    Emoji Vote
+    ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `.` regardless of intercepts.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.3/reference/docker-run.md b/docs/telepresence/2.3/reference/docker-run.md
new file mode 100644
index 000000000..2262f0a55
--- /dev/null
+++ b/docs/telepresence/2.3/reference/docker-run.md
@@ -0,0 +1,31 @@
+---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept --port --docker-run -- `
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port :`. The container port will default to the local port when using the `--port ` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `--env-file ` Loads the intercepted environment
+- `--name intercept--` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-p ` The local port for the intercept and the container port
+- `-v ` Volume mount specification; see the CLI help for the `--mount` and `--docker-mount` flags for more info
diff --git a/docs/telepresence/2.3/reference/environment.md b/docs/telepresence/2.3/reference/environment.md
new file mode 100644
index 000000000..b5a799cce
--- /dev/null
+++ b/docs/telepresence/2.3/reference/environment.md
@@ -0,0 +1,28 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code for the intercepted service running on your laptop.
+
+There are three options available to do this:
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.
diff --git a/docs/telepresence/2.3/reference/inside-container.md b/docs/telepresence/2.3/reference/inside-container.md
new file mode 100644
index 000000000..f83ef3575
--- /dev/null
+++ b/docs/telepresence/2.3/reference/inside-container.md
@@ -0,0 +1,37 @@
+# Running Telepresence inside a container
+
+It is sometimes desirable to run Telepresence inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.
+
+## Building the container
+
+Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:
+
+```Dockerfile
+# Dockerfile with telepresence and its prerequisites
+FROM alpine:3.13
+
+# Install Telepresence prerequisites
+RUN apk add --no-cache curl iproute2 sshfs
+
+# Download and install the telepresence binary
+RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
+    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
+```
+In order to build the container, do this in the same directory as the `Dockerfile`:
+```
+$ docker build -t tp-in-docker .
+```
+
+## Running the container
+
+Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.
+
+The command to run the container can look like this:
+```bash
+$ docker run \
+  --cap-add=NET_ADMIN \
+  --device /dev/net/tun:/dev/net/tun \
+  --network=host \
+  -v ~/.kube/config:/root/.kube/config \
+  -it --rm tp-in-docker
+```
diff --git a/docs/telepresence/2.3/reference/intercepts.md b/docs/telepresence/2.3/reference/intercepts.md
new file mode 100644
index 000000000..ef4843537
--- /dev/null
+++ b/docs/telepresence/2.3/reference/intercepts.md
@@ -0,0 +1,170 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+## Intercept behavior when logged in to Ambassador Cloud
+
+After logging in to Ambassador Cloud (with [`telepresence
+login`](../client/login/)), Telepresence will default to
+`--preview-url=true`, which will use Ambassador Cloud to create a
+sharable preview URL for this intercept. (Creating an intercept
+without logging in defaults to `--preview-url=false`.)
+
+In order to do this, it will prompt you for four options. For the first, `Ingress`, Telepresence tries to intelligently determine the ingress controller deployment and namespace for you. If they are correct, you can hit `enter` to accept the defaults. Set the next two options, `TLS` and `Port`, appropriately based on your ingress service. The fourth is a hostname for the service, if required by your ingress.
+
+Also because you're logged in, Telepresence will default to `--mechanism=http --http-match=auto` (or just `--http-match=auto`; `--http-match` implies `--mechanism=http`). If you hadn't been logged in it would have defaulted to `--mechanism=tcp`. This tells it to do smart intercepts and only intercept a subset of HTTP requests, rather than intercepting the entirety of all TCP connections. This is important for working in a shared cluster with teammates, and is important for the preview URL functionality. See `telepresence intercept --help` for information on using `--http-match` to customize which requests it intercepts.
+
+## Supported workloads
+
+Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting Deployments, ReplicaSets, and StatefulSets.
+While many of our examples use Deployments, they would also work on ReplicaSets and StatefulSets.
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the `--namespace` option. When this option is used, and `--workload` is not used, then the given name is interpreted as the name of the workload and the name of the intercept will be constructed from that name and the namespace.
+
+```
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is being intercepted; see [this doc](../environment/) for more details, and the short example below.
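+As a quick illustration (a minimal sketch reusing the hypothetical `hello` workload and `myns` namespace from the examples above), the following writes the intercepted pod's environment variables to a file on your laptop while the intercept runs:
+
+```
+telepresence intercept hello --namespace myns --port 9000 --env-file=hello.env
+```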
+
+## Creating an intercept without a local process running
+
+When creating an intercept that is selective (the default if you are
+logged in to Ambassador Cloud), the Traffic Agent sends a `GET /`
+request to your service and to the process running on your local machine
+at the port specified in your intercept, to determine if they support
+HTTP/2. This is required for selective intercepts to behave correctly.
+
+If you do not have a service running locally, the Traffic Agent will use the result
+it gets from the HTTP check against your app in the cluster to configure requests
+from the local process once it has started.
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```
+telepresence intercept --port=
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```
+telepresence intercept --port= --preview-url=false
+```
+
+This will output a header that you can set on your request for that traffic to be intercepted:
+
+```
+$ telepresence intercept --port= --preview-url=false
+Using Deployment
+intercepted
+    Intercept name: 
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp(":")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: @
+```
+
+Finally, run `telepresence leave ` to stop the intercept.
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services).
+
+```
+$ telepresence intercept --port=:
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create a new intercept the same way you did above and it will change which service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept.
+But if you use something like [Argo](https://www.getambassador.io/docs/argo/latest/), it uses two services (that use the same labels) to manage traffic between a canary and a stable service.
+
+Fortunately, if you know which service you want to use when intercepting a workload, you can use the `--service` flag. So in the aforementioned example, if you wanted to use the `echo-stable` service when intercepting your workload, your command would look like this:
+```
+$ telepresence intercept echo-rollout- --port --service echo-stable
+Using ReplicaSet echo-rollout-
+intercepted
+    Intercept name    : echo-rollout-
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application container; they usually provide auxiliary functionality to an application, and can usually be reached at `localhost:${SIDECAR_PORT}`.
+For example, a common use case for a sidecar is to proxy requests to a database -- your application would connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then connect to the database, perhaps augmenting the connection with TLS or authentication.
+
+When intercepting a container that uses sidecars, you might want those sidecars' ports to be available to your local application at `localhost:${SIDECAR_PORT}`, exactly as they would be if running in-cluster.
+Telepresence's `--to-pod ${PORT}` flag implements this behavior, adding port-forwards for the port given.
+
+```
+$ telepresence intercept --port=: --to-pod=
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the flag (`--to-pod= --to-pod=`).
diff --git a/docs/telepresence/2.3/reference/linkerd.md b/docs/telepresence/2.3/reference/linkerd.md
new file mode 100644
index 000000000..9b903fa76
--- /dev/null
+++ b/docs/telepresence/2.3/reference/linkerd.md
@@ -0,0 +1,75 @@
+---
+Description: "How to get Linkerd meshed services working with Telepresence"
+---
+
+# Using Telepresence with Linkerd
+
+## Introduction
+Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:
+
+```yaml
+spec:
+  template:
+    metadata:
+      annotations:
+        config.linkerd.io/skip-outbound-ports: "8081"
+```
+
+The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.
+
+## Prerequisites
+1. [Telepresence binary](../../install)
+2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
+3. Kubectl
+4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)
+
+## Deploy
+Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence/2.3/reference/rbac.md b/docs/telepresence/2.3/reference/rbac.md
new file mode 100644
index 000000000..4facd8b56
--- /dev/null
+++ b/docs/telepresence/2.3/reference/rbac.md
@@ -0,0 +1,211 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, both described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
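+    ## For illustration only (hypothetical values; consult your provider's docs),
+    ## it typically contains fields such as:
+    # server: https://my-cluster.example.com:6443
+    # certificate-authority-data: <base64-encoded CA certificate>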
+contexts:
+- name: my-context
+  context:
+    cluster: my-cluster # Must match the name field in the clusters config
+    user: tp-user
+users:
+- name: tp-user # Must match the name of the Service Account created by the cluster admin
+  user:
+    token: # See note below
+```
+
+The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`.
+
+After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
+
+## Administrating Telepresence
+
+[Telepresence administration](/products/telepresence/) requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator.
+
+There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [Helm chart](../../install/helm/).
+
+By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the Helm chart to install the traffic-manager.
+
+## Cluster-wide telepresence user access
+
+To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telepresence-role
+rules:
+- apiGroups:
+  - ""
+  resources: ["pods"]
+  verbs: ["get", "list", "create", "watch", "delete"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["get", "list", "watch", "update"]
+- apiGroups:
+  - ""
+  resources: ["pods/portforward"]
+  verbs: ["create"]
+- apiGroups:
+  - "apps"
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "update"]
+- apiGroups:
+  - "getambassador.io"
+  resources: ["hosts", "mappings"]
+  verbs: ["*"]
+- apiGroups:
+  - ""
+  resources: ["endpoints"]
+  verbs: ["get", "list", "watch"]
+- apiGroups:
+  - ""
+  resources: ["namespaces"]
+  verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telepresence-rolebinding
+subjects:
+- name: tp-user
+  kind: ServiceAccount
+  namespace: ambassador
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  name: telepresence-role
+  kind: ClusterRole
+```
+
+## Namespace only telepresence user access
+
+The following is an RBAC configuration for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+rules:
+- apiGroups:
+  - ""
+  resources: ["pods"]
+  verbs: ["get", "list", "create", "watch", "delete"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["update"]
+- apiGroups:
+  - ""
+  resources: ["pods/portforward"]
+  verbs: ["create"]
+- apiGroups:
+  - "apps"
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "update"]
+- apiGroups:
+  - "getambassador.io"
+  resources: ["hosts", "mappings"]
+  verbs: ["*"]
+- apiGroups:
+  - ""
+  resources: ["endpoints"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding # RBAC to access ambassador namespace
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: t2-ambassador-binding
+  namespace: ambassador
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: RoleBinding # RoleBinding for the namespace to be intercepted
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-test-binding # Update "test" for appropriate namespace to be intercepted
+  namespace: test # Update "test" for appropriate namespace to be intercepted
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-namespace-role
+rules:
+- apiGroups:
+  - ""
+  resources: ["namespaces"]
+  verbs: ["get", "list", "watch"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-namespace-binding
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-namespace-role
+  apiGroup: rbac.authorization.k8s.io
+```
diff --git a/docs/telepresence/2.3/reference/routing.md b/docs/telepresence/2.3/reference/routing.md
new file mode 100644
index 000000000..75e36f00f
--- /dev/null
+++ b/docs/telepresence/2.3/reference/routing.md
@@ -0,0 +1,48 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently three types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster.
+The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `include-suffixes` option in the
+[local DNS configuration](../config/#dns).
+
+#### Cluster side DNS lookups
+The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)]. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single-label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's Service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Some clusters will expose the pod subnets as `podCIDR` in the `Node`, but some, like Amazon EKS, typically don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster.
+As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this was accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

1: A future version of Telepresence will not allow concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.3/reference/tun-device.md b/docs/telepresence/2.3/reference/tun-device.md
new file mode 100644
index 000000000..4410f6f3c
--- /dev/null
+++ b/docs/telepresence/2.3/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle` but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client or for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a Pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.3/reference/volume.md b/docs/telepresence/2.3/reference/volume.md
new file mode 100644
index 000000000..2e0e8bc5f
--- /dev/null
+++ b/docs/telepresence/2.3/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept --port --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally to `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept --port --mount=true -- /bin/bash
+Using Deployment
+intercepted
+    Intercept name    : 
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend paths with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
diff --git a/docs/telepresence/2.3/release-notes/no-ssh.png b/docs/telepresence/2.3/release-notes/no-ssh.png
new file mode 100644
index 000000000..025f20ab7
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/no-ssh.png differ
diff --git a/docs/telepresence/2.3/release-notes/run-tp-in-docker.png b/docs/telepresence/2.3/release-notes/run-tp-in-docker.png
new file mode 100644
index 000000000..53b66a9b2
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/run-tp-in-docker.png differ
diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.2.png b/docs/telepresence/2.3/release-notes/telepresence-2.2.png
new file mode 100644
index 000000000..43abc7e89
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.2.png differ
diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.0-homebrew.png
new file mode 100644
index 000000000..e203a9750
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.0-homebrew.png differ
diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.0-loglevels.png
new file mode 100644
index 000000000..3d628c54a
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.0-loglevels.png differ
diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-alsoProxy.png
new file mode 100644
index 000000000..4052b927b
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-alsoProxy.png differ
diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-brew.png
new file mode 100644
index 000000000..2af424904
Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-brew.png differ
diff --git
a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.5-agent-config.png 
b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/2.3/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.3/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.3/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.3/release-notes/tunnel.jpg b/docs/telepresence/2.3/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.3/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.3/releaseNotes.yml b/docs/telepresence/2.3/releaseNotes.yml new file mode 100644 index 000000000..700272bfd --- /dev/null +++ b/docs/telepresence/2.3/releaseNotes.yml @@ -0,0 +1,452 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. 
This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website; takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. + +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + + - version: 2.3.7 + date: '2021-07-23' + notes: + + - type: feature + title: Also-proxy in telepresence status + body: >- + An also-proxy entry in the Kubernetes cluster config will + show up in the output of the telepresence status command. + docs: reference/config + + - type: feature + title: Non-interactive telepresence login + body: >- + telepresence login now has an + --apikey=KEY flag that allows for + non-interactive logins. This is useful for headless + environments where launching a web browser is impossible, + such as cloud shells, Docker containers, or CI. + image: telepresence-2.3.7-newkey.png + docs: reference/client/login/ + + - type: bugfix + title: Mutating webhook injector correctly hides named ports for probes + body: >- + The mutating webhook injector has been fixed to correctly hide + named ports for liveness and readiness probes. + docs: reference/cluster-config + + - type: bugfix + title: telepresence current-cluster-id crash fixed + body: >- + Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id` + to crash. + docs: reference/cluster-config + + - type: bugfix + title: Better UX around intercepts with no local process running + body: >- + Previously, requests would hang indefinitely when initiating an intercept + before you had a local process running. This has been fixed; you will now + receive an Empty reply from server response until you start a local process. + docs: reference/intercepts + + - type: bugfix + title: API keys no longer show as "no description" + body: >- + New API keys generated internally for communication with + Ambassador Cloud no longer show up as "no description" in + the Ambassador Cloud web UI. Existing API keys generated by + older versions of Telepresence will still show up this way. + image: telepresence-2.3.7-keydesc.png + + - type: bugfix + title: Fix corruption of user-info.json + body: >- + Fixed a race condition in which rapidly logging in and out + could cause memory corruption or corruption of the + user-info.json cache file used when + authenticating with Ambassador Cloud. + + - type: bugfix + title: Improved DNS resolver for systemd-resolved + body: Telepresence's systemd-resolved-based DNS resolver is now more + stable, and if it fails to initialize, the overriding resolver + that Telepresence falls back to will no longer cause general DNS lookup failures. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Faster telepresence list command + body: The performance of telepresence list has been improved + significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client + + - version: 2.3.6 + date: '2021-07-20' + notes: + + - type: bugfix + title: Fix preview URLs + body: >- + Fixed a regression introduced in 2.3.5 that caused preview + URLs to not work. + + - type: bugfix + title: Fix subnet discovery + body: >- + Fixed a regression introduced in 2.3.5 where the Traffic + Manager's RoleBinding did not correctly reference + the traffic-manager Role, preventing + subnet discovery from working correctly. + docs: reference/rbac/ + + - type: bugfix + title: Fix root-user configuration loading + body: >- + Fixed a regression introduced in 2.3.5 where the root daemon + did not correctly read the configuration file, ignoring the + user's configured log levels and timeouts. + docs: reference/config/ + + - type: bugfix + title: Fix a user daemon crash + body: >- + Fixed an issue that could cause the user daemon to crash + during shutdown because it unconditionally attempted to + close a channel that might already be closed. + + - version: 2.3.5 + date: '2021-07-15' + notes: + - type: feature + title: traffic-manager in multiple namespaces + body: >- + We now support installing multiple traffic managers in the same cluster. + This allows operators to install deployments of Telepresence that are + limited to certain namespaces. + image: ./telepresence-2.3.5-traffic-manager-namespaces.png + docs: install/helm + - type: feature + title: No more dependence on kubectl + body: >- + Telepresence no longer depends on having an external + kubectl binary, which might not be present for + OpenShift users (who have oc instead of + kubectl). + - type: feature + title: Agent image now configurable + body: >- + We now support configuring which agent image and + registry to use in the + config. This enables users whose laptop is in an air-gapped environment to + create selective intercepts without requiring a login. It also makes it easier + for those who are developing on Telepresence to specify which agent image should + be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer + used. + image: ./telepresence-2.3.5-agent-config.png + docs: reference/config/#images + - type: feature + title: Max gRPC receive size now configurable + body: >- + The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured. + image: ./telepresence-2.3.5-grpc-max-receive-size.png + docs: reference/config/#grpc + - type: feature + title: CLI can be used in air-gapped environments + body: >- + While Telepresence will auto-detect if your cluster is in an air-gapped environment, + we've added an option users can add to their config.yml to ensure the CLI acts as if it + is in an air-gapped environment. Air-gapped environments require a manually installed + license. + docs: reference/cluster-config/#air-gapped-cluster + image: ./telepresence-2.3.5-skipLogin.png + - version: 2.3.4 + date: '2021-07-09' + notes: + - type: bugfix + title: Improved IP log statements + body: >- + Some log statements printed incorrect characters where they should have printed IP addresses. + This has been fixed to produce more accurate and useful logging. + docs: reference/config/#log-levels + image: ./telepresence-2.3.4-ip-error.png + - type: bugfix + title: Improved messaging when multiple services match a workload + body: >- + If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which + service the intercept should use. + image: ./telepresence-2.3.4-improved-error.png + docs: reference/intercepts + - type: bugfix + title: Traffic-manager creates services in its own namespace to determine subnet + body: >- + Telepresence will now determine the service subnet by creating a dummy service in its own + namespace, instead of the default namespace, which was causing RBAC permissions issues in + some clusters. + docs: reference/routing/#subnets + - type: bugfix + title: Telepresence connect respects pre-existing clusterrole + body: >- + When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the + cluster, Telepresence will no longer try to update the clusterrole. + docs: reference/rbac + - type: bugfix + title: Helm Chart fixed for clientRbac.namespaced + body: >- + The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true. + docs: install/helm + - version: 2.3.3 + date: '2021-07-07' + notes: + - type: feature + title: Traffic Manager Helm Chart + body: >- + Telepresence now supports installing the Traffic Manager via Helm. + This will make it easy for operators to install and configure the + server-side components of Telepresence separately from the CLI (which + in turn allows for better separation of permissions). + image: ./telepresence-2.3.3-helm.png + docs: install/helm/ + - type: feature + title: Traffic-manager in custom namespace + body: >- + As the traffic-manager can now be installed in any + namespace via Helm, Telepresence can now be configured to look for the + Traffic Manager in a namespace other than ambassador. + This can be configured on a per-cluster basis. + image: ./telepresence-2.3.3-namespace-config.png + docs: reference/config + - type: feature + title: Intercept --to-pod + body: >- + telepresence intercept now supports a + --to-pod flag that can be used to port-forward sidecars' + ports from an intercepted pod. + image: ./telepresence-2.3.3-to-pod.png + docs: reference/intercepts + - type: change + title: Change in migration from edgectl + body: >- + Telepresence no longer automatically shuts down the old + api_version=1 edgectl daemon. If migrating + from such an old version of edgectl, you must now manually + shut down the edgectl daemon before running Telepresence. + This was already the case when migrating from the newer + api_version=2 edgectl. + - type: bugfix + title: Fixed error during shutdown + body: >- + The root daemon no longer terminates when the user daemon disconnects + from its gRPC streams, and instead waits to be terminated by the CLI. + The previous behavior could leave things not being cleaned up correctly. + - type: bugfix + title: Intercepts will survive deletion of intercepted pod + body: >- + An intercept will survive deletion of the intercepted pod provided + that another pod is created (or already exists) that can take over. + - version: 2.3.2 + date: '2021-06-18' + notes: + # Headliners + - type: feature + title: Service Port Annotation + body: >- + The mutator webhook for injecting traffic-agents now + recognizes a + telepresence.getambassador.io/inject-service-port + annotation to specify which port to intercept, bringing the + functionality of the --port flag to users who + use the mutator webhook to control Telepresence via + GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png + docs: reference/cluster-config#service-port-annotation + - type: feature + title: Outbound Connections + body: >- + Outbound connections are now routed through the intercepted + Pods, which means that, from the cluster's perspective, the + connections originate from those Pods. This allows service + meshes to correctly identify the traffic. + docs: reference/routing/#outbound + - type: change + title: Inbound Connections + body: >- + Inbound connections from an intercepted agent are now + tunneled to the manager over the existing gRPC connection, + instead of establishing a new connection to the manager for + each inbound connection. This avoids interference from + certain service mesh configurations. + docs: reference/routing/#inbound + + # RBAC changes + - type: change + title: Traffic Manager needs new RBAC permissions + body: >- + The Traffic Manager requires RBAC + permissions to list Nodes and Pods, and to create a dummy + Service in the manager's namespace. + docs: reference/routing/#subnets + - type: change + title: Reduced developer RBAC requirements + body: >- + The on-laptop client no longer requires RBAC permissions to list the Nodes + in the cluster or to create Services, as that functionality + has been moved to the Traffic Manager. + + # Bugfixes + - type: bugfix + title: Able to detect subnets + body: >- + Telepresence will now detect the Pod CIDR ranges even if + they are not listed in the Nodes. + image: ./telepresence-2.3.2-subnets.png + docs: reference/routing/#subnets + - type: bugfix + title: Dynamic IP ranges + body: >- + The list of cluster subnets that the virtual network + interface will route is now configured dynamically and will + follow changes in the cluster. + - type: bugfix + title: No duplicate subnets + body: >- + Subnets fully covered by other subnets are now pruned + internally and thus never superfluously added to the + laptop's routing table. + docs: reference/routing/#subnets + - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes + title: Change in default timeout + body: >- + The trafficManagerAPI timeout default has + changed from 5 seconds to 15 seconds, in order to facilitate + the extended time it takes for the traffic-manager to do its + initial discovery of cluster info as a result of the above + bugfixes. + - type: bugfix + title: Removal of DNS config files on macOS + body: >- + On macOS, files generated under + /etc/resolver/ as the result of using + include-suffixes in the cluster config are now + properly removed on quit. + docs: reference/routing/#mac-os-resolver + + - type: bugfix + title: Large file transfers + body: >- + Telepresence no longer erroneously terminates connections + early when sending a large HTTP response from an intercepted + service. + - type: bugfix + title: Race condition in shutdown + body: >- + When shutting down the user-daemon or root-daemon on the + laptop, telepresence quit and related commands + no longer return early before everything is fully shut down. + You can now count on all of the side effects on the laptop + having been cleaned up by the time the command returns. + - version: 2.3.1 + date: '2021-06-14' + notes: + - title: DNS Resolver Configuration + body: "Telepresence now supports per-cluster configuration of custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included.
These can be configured on a per-cluster basis." + image: ./telepresence-2.3.1-dns.png + docs: reference/config + type: feature + - title: AlsoProxy Configuration + body: "Telepresence now supports proxying additional user-specified subnets so that, while connected, the workstation can reach external services that are only accessible from the cluster. These subnets can be configured on a per-cluster basis; each subnet is added to the TUN device so that requests to IPs that fall within it are routed to the cluster." + image: ./telepresence-2.3.1-alsoProxy.png + docs: reference/config + type: feature + - title: Mutating Webhook for Injecting Traffic Agents + body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past." + image: ./telepresence-2.3.1-inject.png + docs: reference/rbac + type: feature + - title: Traffic Manager Connect Timeout + body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook." + image: ./telepresence-2.3.1-trafficmanagerconnect.png + docs: reference/config + type: change + - title: Fix for large file transfers + body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely." + image: ./telepresence-2.3.1-large-file-transfer.png + docs: reference/tun-device + type: bugfix + - title: Brew Formula Changed + body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence." + image: ./telepresence-2.3.1-brew.png + docs: install/ + type: change + - version: 2.3.0 + date: '2021-06-01' + notes: + - title: Brew install Telepresence + body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2." + image: ./telepresence-2.3.0-homebrew.png + docs: install/ + type: feature + - title: TCP and UDP routing via Virtual Network Interface + body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP." + image: ./tunnel.jpg + docs: reference/tun-device + type: feature + - title: SSH is no longer used + body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp protocol directly, which means that the traffic agent also runs without sshd.
A desired side effect of this is that the manager and agent containers no longer need a special user configuration." + image: ./no-ssh.png + docs: reference/tun-device/#no-ssh-required + type: change + - title: Running in a Docker container + body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously." + image: ./run-tp-in-docker.png + docs: reference/inside-container + type: feature + - title: Configurable Log Levels + body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log." + image: ./telepresence-2.3.0-loglevels.png + docs: reference/config/#log-levels + type: feature + - version: 2.2.2 + date: '2021-05-17' + notes: + - title: Legacy Telepresence subcommands + body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary. + image: ./telepresence-2.2.png + docs: install/migrate-from-legacy/ + type: feature diff --git a/docs/telepresence/2.3/troubleshooting/index.md b/docs/telepresence/2.3/troubleshooting/index.md new file mode 100644 index 000000000..730cd8660 --- /dev/null +++ b/docs/telepresence/2.3/troubleshooting/index.md @@ -0,0 +1,94 @@ +--- +description: "Troubleshooting issues related to Telepresence." +--- +# Troubleshooting + +## Creating an intercept did not generate a preview URL + +Preview URLs can only be created if Telepresence is [logged in to Ambassador Cloud](../reference/client/login/). When not logged in, it will not even try to create a preview URL (additionally, by default it will intercept all traffic rather than just a subset of the traffic). Remove the intercept with `telepresence leave [deployment name]`, run `telepresence login` to log in to Ambassador Cloud, then recreate the intercept. See the [intercepts how-to doc](../howtos/intercepts) for more details. + +## Error on accessing preview URL: `First record does not look like a TLS handshake` + +The service you are intercepting is likely not using TLS; however, when configuring the intercept, you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings. + +## Error on accessing preview URL: Detected a 301 Redirect Loop + +If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted. + +## Your GitHub organization isn't listed + +Ambassador Cloud needs access granted to your GitHub organization as a third-party OAuth app. If an organization isn't listed during login, then the correct access has not been granted.
+ +The quickest way to resolve this is to go to the **GitHub menu** → **Settings** → **Applications** → **Authorized OAuth Apps** → **Ambassador Labs**. An organization owner will have a **Grant** button; anyone who is not an owner will have a **Request** button, which sends an email to the owner. If an access request has been denied in the past, the user will not see the **Request** button and will have to reach out to the owner. + +Once access is granted, log out of Ambassador Cloud and log back in; you should see the GitHub organization listed. + +The organization owner can go to the **GitHub menu** → **Your organizations** → **[org name]** → **Settings** → **Third-party access** to see if Ambassador Labs already has access or to authorize a request for access (only owners will see **Settings** on the organization page). Clicking the pencil icon will show the permissions that were granted. + +GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). + +### Granting or requesting access on initial login + +When using GitHub as your identity provider, the first time you log in to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to access your organizations and certain user data. + + + +Any listed organization with a green check has already granted access to Ambassador Labs (you still need to authorize to allow Ambassador Labs to read your user data and organization membership). + +Any organization with a red "X" requires access to be granted to Ambassador Labs. Owners of the organization will see a **Grant** button. Anyone who is not an owner will see a **Request** button. This will send an email to the organization owner requesting approval to access the organization. If an access request has been denied in the past, the user will not see the **Request** button and will have to reach out to the owner. + +Once approval is granted, you will have to log out of Ambassador Cloud and back in to select the organization. + +## Volume mounts are not working on macOS + +It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately, there have been issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps: + +1. Remove old sshfs, macfuse, osxfuse using `brew uninstall` +2. `brew install --cask macfuse` +3. `brew install gromgit/fuse/sshfs-mac` +4. `brew link --overwrite sshfs-mac` + +Now `sshfs -V` shows the correct version, e.g.: +``` +$ sshfs -V +SSHFS version 2.10 +FUSE library version: 2.9.9 +fuse: no mount point +``` + +One more thing must be done before it works correctly: + +5. Try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences). +6. Approve the needed permission +7. Reboot your computer.
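After rebooting, it can help to verify the setup before relying on it in a real intercept. The following check is a sketch under two assumptions: that macfuse registers a kernel extension whose identifier contains "fuse", and that `example-service` is a workload in your cluster that you can intercept; substitute your own service and port.

```console
$ kextstat | grep -i fuse        # the macfuse kernel extension should be listed
$ telepresence intercept example-service --port 8080 --mount=true -- /bin/bash
bash-3.2$ ls "$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount"
ca.crt      namespace   token
```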
diff --git a/docs/telepresence/2.3/versions.yml b/docs/telepresence/2.3/versions.yml new file mode 100644 index 000000000..c26bd3e54 --- /dev/null +++ b/docs/telepresence/2.3/versions.yml @@ -0,0 +1,5 @@ +version: "2.3.7" +dlVersion: "2.3.7" +docsVersion: "2.3" +branch: release/v2 +productName: "Telepresence" diff --git a/docs/telepresence/2.4 b/docs/telepresence/2.4 deleted file mode 120000 index 9415d2c7a..000000000 --- a/docs/telepresence/2.4 +++ /dev/null @@ -1 +0,0 @@ -../../../docs/telepresence/v2.4 \ No newline at end of file diff --git a/docs/telepresence/2.4/community.md b/docs/telepresence/2.4/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence/2.4/community.md @@ -0,0 +1,12 @@ +# Community + +## Contributor's guide +Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) on GitHub to learn how you can help make Telepresence better. + +## Changelog +Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) describes new features, bug fixes, and updates to each version of Telepresence. + +## Meetings +Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence/2.4/concepts/context-prop.md b/docs/telepresence/2.4/concepts/context-prop.md new file mode 100644 index 000000000..dc9ee18f3 --- /dev/null +++ b/docs/telepresence/2.4/concepts/context-prop.md @@ -0,0 +1,36 @@ +# Context propagation + +**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination. + +This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them. + +Metadata propagation means that no service or other middleware strips away the headers; this is what allows the injected context to move through all downstream services and processes. + + +## What is distributed tracing? + +Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging. + +In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame. + +An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and on their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation). + + +## What are intercepts and preview URLs?
+ +[Intercepts](../../reference/intercepts) and [preview URLs](../../howtos/preview-urls/) are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence uses custom HTTP headers and header propagation to identify which traffic to intercept, both for plain personal intercepts and for personal intercepts with preview URLs. These techniques are more commonly used for distributed tracing, so this is a somewhat unorthodox application, but the mechanisms are already widely deployed because of the prevalence of tracing. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. diff --git a/docs/telepresence/2.4/concepts/devloop.md b/docs/telepresence/2.4/concepts/devloop.md new file mode 100644 index 000000000..8b1fbf354 --- /dev/null +++ b/docs/telepresence/2.4/concepts/devloop.md @@ -0,0 +1,50 @@ +# The developer experience and the inner dev loop + +## How is the developer experience changing? + +The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + +Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + +The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a progressive delivery strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible. + +Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop. + +Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop.
Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation. + +## What is the inner dev loop? + +The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly. + +Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control. + +In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible. + +![traditional inner dev loop](../../images/trad-inner-dev-loop.png) + +## In search of lost time: How does containerization change the inner dev loop? + +The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again. + +Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps: + +* packaging code in containers +* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container) +* pushing the container to the registry +* deploying containers in Kubernetes + +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development.
And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/2.4/concepts/devworkflow.md b/docs/telepresence/2.4/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/2.4/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. diff --git a/docs/telepresence/2.4/concepts/faster.md b/docs/telepresence/2.4/concepts/faster.md new file mode 100644 index 000000000..b649e4153 --- /dev/null +++ b/docs/telepresence/2.4/concepts/faster.md @@ -0,0 +1,25 @@ +# Making the remote local: Faster feedback, collaboration and debugging + +With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging. + +## How should I set up a Kubernetes development environment? + +[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication. + +While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on. + +A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop.
With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration. + +## What is Telepresence? + +Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies. + +## How can I get fast, efficient local development? + +The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production. + +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables a fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.4/concepts/intercepts.md b/docs/telepresence/2.4/concepts/intercepts.md new file mode 100644 index 000000000..f798b9895 --- /dev/null +++ b/docs/telepresence/2.4/concepts/intercepts.md @@ -0,0 +1,167 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return (
+ <TabContext value={state.curTab}> + <AppBar position="static"> + <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types"> + <Tab value="regular" label="No intercept"/> + <Tab value="global" label="Global intercept"/> + <Tab value="personal" label="Personal intercept"/> + </TabList> + </AppBar> + {children} + </TabContext>
+ ); +}; + +# Types of intercepts + + + + +# No intercept + + + + +This is the normal operation of your cluster without Telepresence. + + + + + +# Global intercept + + + + +**Global intercepts** replace the Kubernetes "Orders" service with the Orders service running on your laptop. The users see no change, but with all the traffic coming to your laptop, you can observe and debug with all your dev tools. + + + +### Creating and using global intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-match=all + ``` + + + + Make sure your current kubectl context points to the target cluster. If your service is running in a different namespace than your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service: + + All requests will be sent to the version of your service that is running in the local development environment. + + + + +# Personal intercept + +**Personal intercepts** allow you to be selective and intercept only some of the traffic to a service while not interfering with the rest of the traffic. This allows you to share a cluster with others on your team without interfering with their work. + + + + +In the illustration above, **Orange** requests are being made by Developer 2 on their laptop and the **green** requests are made by a teammate, Developer 1, on a different laptop. + +Each developer can intercept the Orders service for their requests only, while sharing the rest of the development environment. + + + +### Creating and using personal intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-match=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b + ``` + + We're using `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the header for the sake of the example, but you can use any `key=value` pair you want, or `--http-match=auto` to have it choose something automatically. + + + + Make sure your current kubectl context points to the target cluster. If your service is running in a different namespace than your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service by passing the HTTP header: + + ```http + Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b + ``` + + + + Need a browser extension to modify or remove HTTP request headers? + + Chrome + {' '} + Firefox + + + + 3. Using the intercept: Send requests to your service without the HTTP header: + + Requests without the header will be sent to the version of your service that is running in the cluster. This enables you to share the cluster with a team! See the `curl` sketch below for a concrete example.
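To make steps 2 and 3 concrete, here is a sketch of the two requests using `curl`. The header value is the illustrative one from above, and `orders.example.com` is a hypothetical ingress host; substitute your own endpoint:

```console
$ # Header matches the intercept: routed to the copy running on your laptop
$ curl -H 'Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b' https://orders.example.com/orders/123

$ # No matching header: routed to the version running in the cluster
$ curl https://orders.example.com/orders/123
```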
+ + + diff --git a/docs/telepresence/2.4/doc-links.yml b/docs/telepresence/2.4/doc-links.yml new file mode 100644 index 000000000..afbb7b1a5 --- /dev/null +++ b/docs/telepresence/2.4/doc-links.yml @@ -0,0 +1,76 @@ + - title: Quick start + link: quick-start + - title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ + - title: Create a local Go K8s dev environment + link: install/qs-go-advanced/ + - title: Create a local Java K8s dev environment + link: install/qs-java-advanced/ + - title: User guide + items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Send requests to an intercepted service + link: howtos/request + - title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd + - title: FAQs + link: faqs + - title: Troubleshooting + link: troubleshooting + - title: Community + link: community + - title: Release Notes + link: release-notes + - title: Licenses + link: licenses \ No newline at end of file diff --git a/docs/telepresence/2.4/faqs.md b/docs/telepresence/2.4/faqs.md new file mode 100644 index 000000000..80acd5154 --- /dev/null +++ b/docs/telepresence/2.4/faqs.md @@ -0,0 +1,122 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +**Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism.
This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favorite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entry point will not see the results of your changes. + +**What operating systems does Telepresence work on?** + +Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence, which we consider a Developer Preview. + +**What protocols can be intercepted by Telepresence?** + +All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it. + +**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means that if you `telepresence intercept <service name> -n <namespace>`, you will be able to resolve just the `<service name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name. + +**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
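As a sketch of how this might look, the service name, namespace, and RDS host below are hypothetical, and the database credentials are left to you; while connected with Telepresence, the workstation can reach the external store by its service name:

```console
$ # Make the external database addressable inside the cluster
$ kubectl create service externalname my-db --external-name=mydb.abc123.us-east-1.rds.amazonaws.com

$ # While connected, resolve and reach it by its namespace-qualified service name
$ telepresence connect
$ psql -h my-db.default -p 5432 -U postgres
```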
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+During first use, Telepresence will make its best guess at this ingress configuration and prompt you to confirm or update it.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](https://www.getambassador.io/docs/cloud/latest/service-catalog/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+Telepresence creates and manages a virtual network device (a TUN device) to route outbound traffic to the cluster and perform DNS resolution. That requires elevated access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload are restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application and cluster components are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using the `kubectl port-forward` machinery (though without actually spawning a separate program) to establish a TCP connection to the Telepresence Traffic Manager in the cluster, and running Telepresence's custom VPN protocol over that TCP connection.
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are most important to you and your team so we can prioritize them.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.4/howtos/intercepts.md b/docs/telepresence/2.4/howtos/intercepts.md
new file mode 100644
index 000000000..87bd9f92b
--- /dev/null
+++ b/docs/telepresence/2.4/howtos/intercepts.md
@@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses `kubectl` in all example commands. OpenShift users can substitute the equivalent [`oc` commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+   ```
+
+   <Alert severity="info">
+    The 401 response is expected when you first connect.
+   </Alert>
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service-name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted the service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+<Alert severity="info">
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+</Alert>
diff --git a/docs/telepresence/2.4/howtos/outbound.md b/docs/telepresence/2.4/howtos/outbound.md
new file mode 100644
index 000000000..bd3c2b4c7
--- /dev/null
+++ b/docs/telepresence/2.4/howtos/outbound.md
@@ -0,0 +1,98 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+<Alert severity="info">
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+</Alert>
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```console
+   $ telepresence connect
+   Launching Telepresence Daemon v2.4.10 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Launching Telepresence Root Daemon
+   Launching Telepresence User Daemon
+   Connected to context default (https://<cluster-public-IP>)
+   ```
+
+   <Alert severity="info">
+    Check this [FAQ entry](../../troubleshooting#daemon-service-did-not-start) in case the daemon does not start.
+   </Alert>
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```console
+   $ telepresence status
+   Root Daemon: Running
+     Version           : v2.4.10 (api 3)
+     DNS               :
+     Remote IP         :
+     Exclude suffixes  : [.arpa .com .io .net .org .ru]
+     Include suffixes  : []
+     Timeout           : 4s
+     Also Proxy        : (0 subnets)
+     Never Proxy       : (1 subnets)
+   User Daemon: Running
+     Version           : v2.4.10 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server :
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```console
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+   <meta charset="UTF-8">
+   <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```console
+   $ telepresence quit
+   Telepresence Root Daemon quitting... done
+   Telepresence User Daemon quitting... done
+   ```
+
+<Alert severity="info">
+ When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (<code>&lt;service name&gt;.&lt;namespace&gt;</code>) before you start an intercept. After you start an intercept, only <code>&lt;service name&gt;</code> is required. Read more about these differences in the [DNS resolution reference guide](../../reference/dns).
+</Alert>
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma-separated list of namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
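+
+As a sketch, assuming you only need services in the `dev` and `staging` namespaces (hypothetical names), the connect command might look like this:
+
+```console
+$ telepresence connect --mapped-namespaces dev,staging
+```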
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, so that the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is that to establish the correct origin, the connection must be routed to a `traffic-agent` of an intercepted pod; for local-only intercepts, the outbound connections originate from the `traffic-manager` instead.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```console
+   $ telepresence intercept <name of intercept> --namespace <name of namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name of intercept>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxy) for more details.
\ No newline at end of file
diff --git a/docs/telepresence/2.4/howtos/preview-urls.md b/docs/telepresence/2.4/howtos/preview-urls.md
new file mode 100644
index 000000000..670f72dd3
--- /dev/null
+++ b/docs/telepresence/2.4/howtos/preview-urls.md
@@ -0,0 +1,126 @@
+---
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share development environments with preview URLs
+
+Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual.
+
+Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators.
+
+## Creating a preview URL
+
+1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed.
+Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service.
+
+2. Enter `telepresence login` to launch Ambassador Cloud in your browser.
+
+   If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).
+
+3. Start the intercept with `telepresence intercept <service-name> --port <port> --env-file <path-to-env-file>` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+4. Answer the question prompts.
+   * **What's your ingress' IP address?**: the address at which your cluster's ingress controller can be reached. You may use an IP address or a DNS name (this is usually a "service.namespace" DNS name).
+   * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports.
+   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
+   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+
+   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+$ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' TCP port number?
+
+          [default: -]: 80
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]: y
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+          [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service")
+       Preview URL            : https://<random-subdomain>.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.
+
+   <Alert severity="info">
+    **Didn't work?** It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP header. Read more on context propagation.
+   </Alert>
+
+7. Make a request on the URL you would usually query for that environment. This request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Click on the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove <intercept-name>`
diff --git a/docs/telepresence/2.4/howtos/request.md b/docs/telepresence/2.4/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/2.4/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/)
+ 2. Navigate to the desired service Intercepts page
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
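+
+As a rough sketch of the kind of query this produces (the hostname, path, and intercept ID below are hypothetical placeholders), a request carrying the intercept header might look like:
+
+```console
+$ curl -H "x-telepresence-intercept-id: <intercept-id>:example-service" https://dev-environment.edgestack.me/api/vote
+```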
diff --git a/docs/telepresence/2.4/images/container-inner-dev-loop.png b/docs/telepresence/2.4/images/container-inner-dev-loop.png
new file mode 100644
index 000000000..06586cd6e
Binary files /dev/null and b/docs/telepresence/2.4/images/container-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.4/images/github-login.png b/docs/telepresence/2.4/images/github-login.png
new file mode 100644
index 000000000..cfd4d4bf1
Binary files /dev/null and b/docs/telepresence/2.4/images/github-login.png differ
diff --git a/docs/telepresence/2.4/images/logo.png b/docs/telepresence/2.4/images/logo.png
new file mode 100644
index 000000000..701f63ba8
Binary files /dev/null and b/docs/telepresence/2.4/images/logo.png differ
diff --git a/docs/telepresence/2.4/images/split-tunnel.png b/docs/telepresence/2.4/images/split-tunnel.png
new file mode 100644
index 000000000..5bf30378e
Binary files /dev/null and b/docs/telepresence/2.4/images/split-tunnel.png differ
diff --git a/docs/telepresence/2.4/images/trad-inner-dev-loop.png b/docs/telepresence/2.4/images/trad-inner-dev-loop.png
new file mode 100644
index 000000000..618b674f8
Binary files /dev/null and b/docs/telepresence/2.4/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.4/images/tunnelblick.png b/docs/telepresence/2.4/images/tunnelblick.png
new file mode 100644
index 000000000..8944d445a
Binary files /dev/null and b/docs/telepresence/2.4/images/tunnelblick.png differ
diff --git a/docs/telepresence/2.4/images/vpn-dns.png b/docs/telepresence/2.4/images/vpn-dns.png
new file mode 100644
index 000000000..eed535c45
Binary files /dev/null and b/docs/telepresence/2.4/images/vpn-dns.png differ
diff --git a/docs/telepresence/2.4/install/helm.md b/docs/telepresence/2.4/install/helm.md
new file mode 100644
index 000000000..688d2f20a
--- /dev/null
+++ b/docs/telepresence/2.4/install/helm.md
@@ -0,0 +1,181 @@
+# Install with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+**Note** that installing the Traffic Manager through Helm will prevent `telepresence connect` from ever upgrading it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the steps [below](#upgrading-the-traffic-manager).
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+## Before you begin
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+   ```shell
+   kubectl create namespace ambassador
+   ```
+
+2. Install the Telepresence Traffic Manager with the following command (a verification sketch follows these steps):
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
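+
+As a quick sanity check (the pod name suffix below is an illustrative stand-in), you can verify that the Traffic Manager pod came up:
+
+```console
+$ kubectl get pods -n ambassador
+NAME                               READY   STATUS    RESTARTS   AGE
+traffic-manager-5fc8f9f7c4-xk2pl   1/1     Running   0          30s
+```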
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different value to the `--namespace` argument of `helm install`.
+For example, if you wanted to deploy the Traffic Manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo update
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the Traffic Manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, fix the overlap either by removing `staging` from the first install, or from the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user rbac will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
diff --git a/docs/telepresence/2.4/install/index.md b/docs/telepresence/2.4/install/index.md
new file mode 100644
index 000000000..e103afa86
--- /dev/null
+++ b/docs/telepresence/2.4/install/index.md
@@ -0,0 +1,153 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1.
Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. + + + + +```shell +# Intel Macs +https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence + +# Apple silicon Macs +https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip +``` + + + + +## Installing older versions of Telepresence + +Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the versions you want. 
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
diff --git a/docs/telepresence/2.4/install/migrate-from-legacy.md b/docs/telepresence/2.4/install/migrate-from-legacy.md
new file mode 100644
index 000000000..61701c9a9
--- /dev/null
+++ b/docs/telepresence/2.4/install/migrate-from-legacy.md
@@ -0,0 +1,109 @@
+# Migrate from legacy Telepresence
+
+Telepresence (formerly referred to as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap deployment (equivalent to intercept in
+Telepresence) with a python server, you could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context <your-context>
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it. So you can get started with Telepresence today, using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                       |
+|---------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                       | intercept $workload                        |
+| --expose localPort[:remotePort]                   | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell           | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd            | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd     | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                       | connect -- bash                            |
+| --run $cmd                                        | connect -- $cmd                            |
+| --env-file,--env-json                             | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                             | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                            | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported. For some known popular commands, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent <agents...> | --all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
diff --git a/docs/telepresence/2.4/install/qs-go-advanced.md b/docs/telepresence/2.4/install/qs-go-advanced.md
new file mode 100644
index 000000000..cb3324ba8
--- /dev/null
+++ b/docs/telepresence/2.4/install/qs-go-advanced.md
@@ -0,0 +1,212 @@
+---
+description: "Create your complete Kubernetes development environment and use Telepresence to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Creating a local Go Kubernetes development environment
+
+This tutorial shows you how to use Ambassador Cloud to create an effective Kubernetes development environment to enable fast, local development with the ability to interact with services and dependencies that run in a remote Kubernetes cluster.
+
+For the hands-on part of this guide, you will build upon [this tutorial with the emojivoto application](../../quick-start/go/), which is written in Go.
+
+## Prerequisites
+
+To begin, you need a set of services that you can deploy to a Kubernetes cluster. These services must be:
+
+* [Containerized](https://www.getambassador.io/learn/kubernetes-glossary/container/).
+  - Best practices for [writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
+  - Many modern code editors, such as [VS Code](https://code.visualstudio.com/docs/containers/overview) and [IntelliJ IDEA](https://www.jetbrains.com/help/idea/docker.html), can automatically generate Dockerfiles.
+* Have a Kubernetes manifest that can be used to successfully deploy your application to a Kubernetes cluster, whether that takes the form of raw YAML config files, Helm charts, or another method you prefer.
+  - Many modern code editors, such as VS Code, have [plugins](https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools) that will [automatically generate](https://marketplace.visualstudio.com/items?itemName=GoogleCloudTools.cloudcode) a large amount of the Service and Deployment configuration files.
+  - The kubectl command-line tool includes a number of [config generators](https://v1-22.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators) for creating basic Service and Deployment files.
+  - For Helm users, the [`helm create` command](https://helm.sh/docs/helm/helm_create/) can be used to create the directory and file scaffolding for your chart.
+* Follow cloud native application architecture best practices.
+  - Design services using the [Twelve-Factor Application](https://12factor.net/) approach.
+  - Ensure that your services and ingress gateway include HTTP [header propagation](https://www.getambassador.io/learn/kubernetes-glossary/header-propagation/) for good observability and diagnostics. Many modern language-specific web frameworks support this out-of-the-box, and the [OpenTelemetry documentation](https://opentelemetry.lightstep.com/core-concepts/context-propagation/) also contains good guidance.
+
+The emojivoto example you are exploring in the steps below follows all of these prerequisites.
+
+## Deploy your application to a remote Kubernetes cluster
+
+First, ensure that your entire application is running in a Kubernetes cluster and available for access to either your users or to yourself acting as a user.
+
+Use your existing `kubectl apply`, `helm install`, or continuous deployment system to deploy your entire application to the remote cluster:
+
+1. Ensure that you have set the correct KUBECONFIG in your local command line/shell in order to ensure your local tooling is interacting with the correct Kubernetes cluster. Verify this by executing `kubectl cluster-info` or `kubectl get svc`.
+2. Deploy your application (using kubectl, helm or your CD system), and verify that the services are running with `kubectl get svc`.
+3. Verify that you can access the running application by visiting the Ingress IP or domain name (see the sketch after this list). We’ll refer to this as ${INGRESS_IP} from now on.
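+
+The following is one hypothetical way to capture that address; the service name and namespace are placeholders, so adjust them to match your own ingress controller:
+
+```shell
+# Look up the external address of the ingress (assumes a LoadBalancer-type
+# Service named "ambassador" in the "ambassador" namespace; substitute your own):
+export INGRESS_IP=$(kubectl get svc ambassador -n ambassador \
+  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+
+# Confirm the application responds:
+curl -i "http://${INGRESS_IP}/"
+```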
+
+If you followed the [emojivoto application tutorial](../../quick-start/go/) referenced at the beginning of this guide, you will see that your Kubernetes cluster has all of the necessary services deployed and has the ingress configured to expose your application by way of an IP address.
+
+## Create a local development container to modify a service
+
+After you finish your deployment, you need to configure a copy of a single service and run it locally. This example shows you how to do this in a development container with a sample repository. Unlike a production container, a development container contains the full development toolchain and dependencies required to build and run your application.
+
+1. Clone your code in your repository with `git clone <repository URL>`.
+   For example: `git clone https://github.com/danielbryantuk/emojivoto.git`.
+2. Change your directory to the source directory with `cd <source directory>`.
+   To follow the previous example, enter: `cd emojivoto/emojivoto-voting-svc/api`
+3. Ensure that your development environment is configured to support the automatic reloading of the service when your source code changes.
+   In the example, the Go application source code is being monitored for changes, and the application is rebuilt with [Air's live-reloading utility](https://github.com/cosmtrek/air).
+4. Add a Dockerfile for your development.
+   Alternatively, you can use a Cloud Native Buildpack, such as those provided by Google Cloud. The [Google Go buildpack](https://github.com/GoogleCloudPlatform/buildpacks) has live-reloading configured by default.
+5. Next, test that the container is working properly. In the root directory of your source repo, enter:
+`docker build -t example-dev-container:0.1 -f Dev.Dockerfile .`
+If you ran the [emojivoto application example](../../quick-start/go/), the container has already been built for you and you can skip this step.
+6. Run the development container and mount the current directory as a volume. This way, any code changes you make locally are synchronized into the container. Enter:
+   `docker run -v $(pwd):/opt/emojivoto/emojivoto-voting-svc/api datawire/telepresence-emojivoto-go-demo`
+   Now, code changes you make locally trigger a reload of the application in the container.
+7. Open the current directory with your source code in your IDE. Make a change to the source code and trigger a build/compilation.
+   The container logs show that the application has been reloaded.
+
+If you followed the [emojivoto application tutorial](../../quick-start/go/) referenced at the beginning of this guide, the emojivoto development container is already downloaded. When you examine the `docker run` command you executed, you can see an AMBASSADOR_API_KEY token included as an environment variable. Copy and paste this into the example command below. Clone the emojivoto code repo and run the container with the updated configuration to expose the application's ports locally and volume mount your local copy of the application source code into the container:
+```
+$ git clone git@github.com:danielbryantuk/emojivoto.git
+$ cd emojivoto/emojivoto-voting-svc/api
+$ docker run -d -p8083:8083 -p8081:8081 --name voting-demo --cap-add=NET_ADMIN --device /dev/net/tun:/dev/net/tun --pull always --rm -it -e AMBASSADOR_API_KEY=<your-api-key> -v ~/Library/Application\ Support:/root/.host_config -v $(pwd):/opt/emojivoto/emojivoto-voting-svc/api datawire/telepresence-emojivoto-go-demo
+```
+
+## Connect your local development environment to the remote cluster
+
+Once you have the development container running, you can integrate your local development environment and the remote cluster. This enables you to access your remote app and instantly see any local changes you have made using your development container.
+
+1. First, download the latest [Telepresence binary](../../install/) for your operating system and run `telepresence connect`.
+   Your local service is now able to interact with services and dependencies in your remote cluster.
+   For example, you can run `curl remote-service-name.namespace:port/path` and get an instant response locally in the same way you would in a remote cluster.
+2. Extract the KUBECONFIG from your dev container from the [emojivoto application tutorial](../../quick-start/go/) and then connect your container to the remote cluster with Telepresence:
+   ```
+   $ CONTAINER_ID=$(docker inspect --format="{{.Id}}" "/voting-demo")
+   $ docker cp $CONTAINER_ID:/opt/telepresence-demo-cluster.yaml ./emojivoto_k8s_context.yaml
+   ```
+3. Run `telepresence intercept your-service-name` to reroute traffic for the service you’re working on:
+   ```
+   $ telepresence intercept voting --port 8081:8080
+   ```
+4. Make a small change in your local code that will cause a visible change that you will be able to see when accessing your app. Build your service to trigger a reload within the container.
+5. Now visit your ${INGRESS_IP} and view the change.
+   Notice the instant feedback of a local change combined with being able to access the remote dependencies!
+6. Make another small change in your local code and build the application again.
+Refresh your view of the app at ${INGRESS_IP}.
+   Notice that you didn’t need to re-deploy the container in the remote cluster to view your changes. Any request you make against the remote application that accesses your service will be routed to your local machine, allowing you to instantly see the effects of changes you make to the code.
+7. Now, put all these commands in a simple shell script, setup-dev-env.sh, which can auto-install Telepresence and configure your local development environment in one command. You can commit this script into your application’s source code repository and your colleagues can easily take advantage of this fast development loop you have created. An example script is included below, which follows the “[Do-nothing scripting](https://blog.danslimmon.com/2019/07/15/do-nothing-scripting-the-key-to-gradual-automation/)” format from Dan Slimmon:
+
+   ```
+   #!/bin/bash
+
+   # global vars
+   CONTAINER_ID=''
+
+   check_init_config() {
+     if [[ -z "${AMBASSADOR_API_KEY}" ]]; then
+       # you will need to set the AMBASSADOR_API_KEY via the command line
+       # export AMBASSADOR_API_KEY='NTIyOWExZDktYTc5...'
+       echo 'AMBASSADOR_API_KEY is not currently defined. Please set the environment variable in the shell e.g.'
+       echo 'export AMBASSADOR_API_KEY=NTIyOWExZDktYTc5...'
+       exit
+     fi
+   }
+
+   run_dev_container() {
+     echo 'Running dev container (and downloading if necessary)'
+
+     # check if dev container is already running and kill if so
+     # (discard only stderr so the container ID is actually captured)
+     CONTAINER_ID=$(docker inspect --format="{{.Id}}" "/voting-demo" 2> /dev/null)
+     if [ ! -z "$CONTAINER_ID" ]; then
+       docker kill $CONTAINER_ID
+     fi
+
+     # run the dev container, exposing 8081 gRPC port and volume mounting code directory
+     CONTAINER_ID=$(docker run -d -p8083:8083 -p8081:8081 --name voting-demo --cap-add=NET_ADMIN --device /dev/net/tun:/dev/net/tun --pull always --rm -it -e AMBASSADOR_API_KEY=$AMBASSADOR_API_KEY -v ~/Library/Application\ Support:/root/.host_config -v $(pwd):/opt/emojivoto/emojivoto-voting-svc/api datawire/telepresence-emojivoto-go-demo)
+   }
+
+   connect_to_k8s() {
+     echo 'Extracting KUBECONFIG from container and connecting to cluster'
+     until docker cp $CONTAINER_ID:/opt/telepresence-demo-cluster.yaml ./emojivoto_k8s_context.yaml > /dev/null 2>&1; do
+       echo '.'
+       sleep 1s
+     done
+
+     export KUBECONFIG=./emojivoto_k8s_context.yaml
+
+     echo 'Connected to cluster. Listing services in default namespace'
+     kubectl get svc
+   }
+
+   install_telepresence() {
+     echo 'Configuring Telepresence'
+     if ! command -v telepresence &> /dev/null; then
+       echo "Installing Telepresence"
+       sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence -o /usr/local/bin/telepresence
+       sudo chmod a+x /usr/local/bin/telepresence
+     else
+       echo "Telepresence already installed"
+     fi
+   }
+
+   connect_local_dev_env_to_remote() {
+     export KUBECONFIG=./emojivoto_k8s_context.yaml
+     echo 'Connecting local dev env to remote K8s cluster'
+     telepresence intercept voting --port 8081:8080
+   }
+
+   open_editor() {
+     echo 'Opening editor'
+
+     # replace this line with your editor of choice, e.g. VS Code, IntelliJ
+     code .
+   }
+
+   display_instructions_to_user () {
+     echo ''
+     echo 'INSTRUCTIONS FOR DEVELOPMENT'
+     echo '============================'
+     echo 'To set the correct Kubernetes context on this shell, please execute:'
+     echo 'export KUBECONFIG=./emojivoto_k8s_context.yaml'
+   }
+
+   check_init_config
+   run_dev_container
+   connect_to_k8s
+   install_telepresence
+   connect_local_dev_env_to_remote
+   open_editor
+   display_instructions_to_user
+
+   # happy coding!
+
+   ```
+8. Run the setup-dev-env.sh script locally. Use the $AMBASSADOR_API_KEY you created from Docker in the [emojivoto application tutorial](../../quick-start/go/) or in [Ambassador Cloud](https://app.getambassador.io/cloud/services/).
+   ```
+   export AMBASSADOR_API_KEY=<your-api-key>
+   git clone git@github.com:danielbryantuk/emojivoto.git
+   cd emojivoto/emojivoto-voting-svc/api
+   ./setup_dev_env.sh
+   ```
+
+   <Alert severity="info">
+    If you are not using macOS or VS Code, you will need to update the script to download the correct Telepresence binary for your OS and open the correct editor, respectively.
+   </Alert>
+
+## Share the result of your local changes with others
+
+Once you have your local development environment configured for fast feedback, you can securely share access and the ability to view the changes made in your local service with your teammates and stakeholders.
+
+1. Leave any current Telepresence intercepts you have running:
+   `telepresence leave your-service-name`
+2. Log in to Ambassador Cloud using the GitHub account that is affiliated with your organization. This is important because access to your local changes is restricted to people whose GitHub account belongs to the same organization.
+## Share the result of your local changes with others
+
+Once you have your local development environment configured for fast feedback, you can securely share access and the ability to view the changes made in your local service with your teammates and stakeholders.
+
+1. Leave any current Telepresence intercepts you have running:
+   `telepresence leave your-service-name`
+2. Log in to Ambassador Cloud using the GitHub account that is affiliated with your organization. This is important because access to your local changes is restricted: only people with a GitHub account in the same organization will be able to view them.
+   Run `telepresence login`.
+3. Run `telepresence intercept your-service-name` again to reroute traffic for the service you’re working on. This time you will be required to answer several questions about your ingress configuration.
+4. Once the command completes, take the “previewURL” that was generated as part of the output and share it with your teammates (an illustrative sketch of this output follows the list). Ask them to access the application via this URL (rather than the regular application URL).
+5. Make a small change in your local code that causes a visible change that you can see when accessing your app. Build your service to trigger a reload within the container.
+6. For the `voting` service used in this tutorial, the three commands that generate a shareable link are:
+   ```
+   $ telepresence leave voting
+   $ telepresence login
+   $ telepresence intercept voting --port 8081:8080
+   ```
+7. Ask your teammates to refresh their view of the application and instantly see the local changes you’ve made.
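+
+For reference, the preview URL appears near the end of the `telepresence intercept` output. A trimmed, illustrative sketch (the generated hostname will differ for your session):
+
+```
+$ telepresence intercept voting --port 8081:8080
+...
+Preview URL : https://<generated-hostname>.preview.edgestack.me
+```
+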
+## What's Next?
+
+Learn more about creating intercepts in your Telepresence environment with the [Intercept a service in your own environment](../../howtos/intercepts/) documentation. diff --git a/docs/telepresence/2.4/install/qs-java-advanced.md b/docs/telepresence/2.4/install/qs-java-advanced.md new file mode 100644 index 000000000..2fd4f2a78 --- /dev/null +++ b/docs/telepresence/2.4/install/qs-java-advanced.md @@ -0,0 +1,127 @@
+---
+description: "Create your complete Kubernetes development environment and use Telepresence to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+# Creating a local Kubernetes development environment
+
+This tutorial shows you how to use Ambassador Cloud to create an effective Kubernetes development environment to enable fast, local development with the ability to interact with services and dependencies that run in a remote Kubernetes cluster.
+
+## Prerequisites
+
+To begin, you need a set of services that you can deploy to a Kubernetes cluster. These services must be:
+
+* [Containerized](https://www.getambassador.io/learn/kubernetes-glossary/container/).
+  - Best practices for [writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
+  - Many modern code editors, such as [VS Code](https://code.visualstudio.com/docs/containers/overview) and [IntelliJ IDEA](https://www.jetbrains.com/help/idea/docker.html), can automatically generate Dockerfiles.
+* Have a Kubernetes manifest that can be used to successfully deploy your application to a Kubernetes cluster, whether that is plain YAML config files, Helm charts, or whatever method you prefer.
+  - Many modern code editors, such as VS Code, have [plugins](https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools) that will [automatically generate](https://marketplace.visualstudio.com/items?itemName=GoogleCloudTools.cloudcode) a large amount of the Service and Deployment configuration files.
+  - The kubectl command-line tool includes a number of [config generators](https://v1-22.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators) for creating basic Service and Deployment files.
+  - For Helm users, the [`helm create` command](https://helm.sh/docs/helm/helm_create/) can be used to create the directory and file scaffolding for your chart.
+* Follow cloud native application architecture best practices.
+  - Design services using the [Twelve-Factor Application](https://12factor.net/) approach.
+  - Ensure that your services and ingress gateway include HTTP [header propagation](https://www.getambassador.io/learn/kubernetes-glossary/header-propagation/) for good observability and diagnostics. Many modern language-specific web frameworks support this out-of-the-box, and the [OpenTelemetry documentation](https://opentelemetry.lightstep.com/core-concepts/context-propagation/) also contains good guidance.
+
+## Deploy your application to a remote Kubernetes cluster
+
+First, ensure that your entire application is running in a Kubernetes cluster and available for access to either your users or to yourself acting as a user.
+
+Use your existing `kubectl apply`, `helm install`, or continuous deployment system to deploy your entire application to the remote cluster:
+
+1. Ensure that you have set the correct KUBECONFIG in your local command line/shell so that your local tooling is interacting with the correct Kubernetes cluster. Verify this by executing `kubectl cluster-info` or `kubectl get svc`.
+2. Deploy your application (using kubectl, helm, or your CD system), and verify that the services are running with `kubectl get svc`.
+3. Verify that you can access the running application by visiting the Ingress IP or domain name. We’ll refer to this as ${INGRESS_IP} from now on.
+
+## Create a local development container to modify a service
+
+After you finish your deployment, you need to configure a copy of a single service and run it locally. This example shows you how to do this in a development container with a sample repository. Unlike a production container, a development container contains the full development toolchain and dependencies required to build and run your application.
+
+1. Clone your code in your repository with `git clone <your repository URL>`.
+   For example: `git clone https://github.com/danielbryantuk/gs-spring-boot.git`.
+2. Change your directory to the source directory with `cd <your source directory>`.
+   To follow the previous example, enter: `cd gs-spring-boot/complete`
+3. Ensure that your development environment is configured to support the automatic reloading of the service when your source code changes.
+   In the example Spring Boot app this is as simple as [adding the spring-boot-devtools dependency to the pom.xml file](https://docs.spring.io/spring-boot/docs/1.5.16.RELEASE/reference/html/using-boot-devtools.html).
+4. Add a Dockerfile for your development.
+   To distinguish this from your production Dockerfile, give the development Dockerfile a separate name, like “Dev.Dockerfile”.
+   The following is an example for Java:
+   ```dockerfile
+   FROM openjdk:16-alpine3.13
+
+   WORKDIR /app
+
+   COPY .mvn/ .mvn
+   COPY mvnw pom.xml ./
+   RUN ./mvnw dependency:go-offline
+
+   COPY src ./src
+
+   CMD ["./mvnw", "spring-boot:run"]
+   ```
+5. Next, test that the container is working properly. In the root directory of your source repo, enter:
+   `docker build -t example-dev-container:0.1 -f Dev.Dockerfile .`
+6. Run the development container and mount the current directory as a volume. This way, any code changes you make locally are synchronized into the container (see the sketch after this list for running with a published port). Enter:
+   `docker run -v $(pwd):/app example-dev-container:0.1`
+   Now, code changes you make locally trigger a reload of the application in the container.
+7. Open the current directory with your source code in your IDE. Make a change to the source code and trigger a build/compilation. The container logs show that the application has been reloaded.
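+
+If you also want to reach the application directly from your host while iterating, you can publish the port when running the dev container. A minimal sketch, assuming Spring Boot's default port of 8080:
+
+```
+# Build the development image, then run it with the source mounted
+# and the application port published to the host:
+docker build -t example-dev-container:0.1 -f Dev.Dockerfile .
+docker run --rm -it -p 8080:8080 -v $(pwd):/app example-dev-container:0.1
+```
+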
+
+## Connect your local development environment to the remote cluster
+
+Once you have the development container running, you can integrate your local development environment and the remote cluster. This enables you to access your remote app and instantly see any local changes you have made using your development container.
+
+1. First, download the latest [Telepresence binary](../../install) for your operating system and run `telepresence connect`.
+   Your local service is now able to interact with services and dependencies in your remote cluster.
+   For example, you can run `curl remote-service-name.namespace:port/path` and get an instant response locally, in the same way you would in a remote cluster.
+2. Run `telepresence intercept your-service-name` to reroute traffic for the service you’re working on (a concrete sketch follows this list).
+3. Make a small change in your local code that will produce a visible difference when you access your app. Build your service to trigger a reload within the container.
+4. Now visit your ${INGRESS_IP} and view the change.
+   Notice the instant feedback of a local change combined with being able to access the remote dependencies!
+5. Make another small change in your local code and build the application again.
+   Refresh your view of the app at ${INGRESS_IP}.
+   Notice that you didn’t need to re-deploy the container in the remote cluster to view your changes. Any request you make against the remote application that accesses your service will be routed to your local machine, allowing you to instantly see the effects of the changes you make to the code.
+6. Now, put all these commands in a simple shell script, setup-dev-env.sh, which can auto-install Telepresence and configure your local development environment in one command. You can commit this script into your application’s source code repository so that your colleagues can easily take advantage of the fast development loop you have created. An example script is included below:
+   ```
+   # deploy your services to the remote cluster
+   echo 'Add config to deploy the application to your remote cluster via kubectl or helm etc.'
+
+   # clone the service you want to work on
+   git clone https://github.com/spring-guides/gs-spring-boot.git
+   cd gs-spring-boot/complete
+
+   # build local dev container
+   docker build -t example-dev-container:0.1 -f Dev.Dockerfile .
+
+   # run local dev container
+   # the logs can be viewed with `docker logs -f <container-id>`, and the container id can be found via `docker container ls`
+   docker run -d -v $(pwd):/app example-dev-container:0.1
+
+   # download Telepresence and install (instructions for non-Mac users: https://www.getambassador.io/docs/telepresence/latest/install/)
+   sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence -o /usr/local/bin/telepresence
+   sudo chmod a+x /usr/local/bin/telepresence
+
+   # connect your local dev env to the remote cluster
+   telepresence connect
+
+   # re-route remote traffic to your local service
+   # telepresence intercept your-service-name
+
+   # happy coding!
+   ```
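+
+A concrete version of the intercept from step 2 might look like the following sketch, where `spring-boot-service` is a hypothetical Deployment name and `--port <local-port>:<service-port>` maps the port your app listens on locally to the service's port in the cluster:
+
+```
+# Reroute cluster traffic for the service to localhost:8080
+telepresence intercept spring-boot-service --port 8080:8080
+```
+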
+
+## Share the result of your local changes with others
+
+Once you have your local development environment configured for fast feedback, you can securely share access and the ability to view the changes made in your local service with your teammates and stakeholders.
+
+1. Leave any current Telepresence intercepts you have running:
+   `telepresence leave your-service-name`
+2. Log in to Ambassador Cloud using the GitHub account that is affiliated with your organization. This is important because access to your local changes is restricted: only people with a GitHub account in the same organization will be able to view them.
+   Run `telepresence login`.
+3. Run `telepresence intercept your-service-name` again to reroute traffic for the service you’re working on. This time you will be required to answer several questions about your ingress configuration.
+4. Once the command completes, take the “previewURL” that was generated as part of the output and share it with your teammates. Ask them to access the application via this URL (rather than the regular application URL).
+5. Make a small change in your local code that causes a visible change that you can see when accessing your app. Build your service to trigger a reload within the container.
+6. Ask your teammates to refresh their view of the application and instantly see the local changes you’ve made.
+
+## What's Next?
+
+Now that you've created a complete Kubernetes development environment, learn more about how to [manage your environment in Ambassador Cloud](https://www.getambassador.io/docs/cloud/latest/service-catalog/howtos/cells) or how to [create Preview URLs in Telepresence](https://www.getambassador.io/docs/telepresence/latest/howtos/preview-urls/). diff --git a/docs/telepresence/2.4/install/upgrade.md b/docs/telepresence/2.4/install/upgrade.md new file mode 100644 index 000000000..c0678450d --- /dev/null +++ b/docs/telepresence/2.4/install/upgrade.md @@ -0,0 +1,81 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR Install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+After upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit`, then upgrade the Traffic Manager by running `telepresence connect`.
+
+**Note** that if the Traffic Manager has been installed via Helm, `telepresence connect` will never upgrade it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the [Helm documentation](../helm#upgrading-the-traffic-manager).
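+
+Putting the two steps together, a typical CLI-plus-Traffic-Manager upgrade looks like this sketch (assuming the Traffic Manager was not installed via Helm):
+
+```shell
+# Stop any running Telepresence daemons
+telepresence quit
+
+# Replace the binary using the platform-specific commands above, then
+# reconnect; this also upgrades the Traffic Manager in the cluster:
+telepresence connect
+```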
diff --git a/docs/telepresence/2.4/licenses.md b/docs/telepresence/2.4/licenses.md new file mode 100644 index 000000000..47737aa8a --- /dev/null +++ b/docs/telepresence/2.4/licenses.md @@ -0,0 +1,8 @@
+Telepresence CLI incorporates Free and Open Source software under the following licenses:
+
+* [2-clause BSD license](https://opensource.org/licenses/BSD-2-Clause)
+* [3-clause BSD license](https://opensource.org/licenses/BSD-3-Clause)
+* [Apache License 2.0](https://opensource.org/licenses/Apache-2.0)
+* [ISC license](https://opensource.org/licenses/ISC)
+* [MIT license](https://opensource.org/licenses/MIT)
+* [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0)
diff --git a/docs/telepresence/2.4/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.4/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..3e87c3ad6 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,129 @@
+import React from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../src/components/Icon';
+
+import './telepresence-quickstart-landing.less';
+
+/** @type React.FC> */
+const RightArrow = (props) => (
+
+
+);
+
+/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */
+const Box = ({ children, color = 'blue', withConnector = false }) => (
+  <>
+    {withConnector && (
+ +
+ )} +
{children}
+ +); + +const TelepresenceQuickStartLanding = () => ( +
+

+ Telepresence +

+

+ Explore the use cases of Telepresence with a free remote Kubernetes + cluster, or dive right in using your own. +

+ +
+
+
+

+ Use Our Free Demo Cluster +

+

+ See how Telepresence works without having to mess with your + production environments. +

+
+ +

6 minutes

+

Integration Testing

+

+ See how changes to a single service impact your entire application + without having to run your entire app locally. +

+ + GET STARTED{' '} + + +
+ +

5 minutes

+

Fast code changes

+

+ Make changes to your service locally and see the results instantly, + without waiting for containers to build. +

+ + GET STARTED{' '} + + +
+
+
+
+

+ Use Your Cluster +

+

+ Understand how Telepresence fits in to your Kubernetes development + workflow. +

+
+ +

10 minutes

+

Intercept your service in your cluster

+

+ Query services only exposed in your cluster's network. Make changes + and see them instantly in your K8s environment. +

+ + GET STARTED{' '} + + +
+
+
+ +
+

Watch the Demo

+
+
+

+ See Telepresence in action in our 3-minute demo + video that you can share with your teammates. +

+
    +
  • Instant feedback loops
  • +
  • Infinite-scale development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+
+
+ +
+
+
+
+); + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.4/quick-start/demo-node.md b/docs/telepresence/2.4/quick-start/demo-node.md new file mode 100644 index 000000000..088f7db65 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/demo-node.md @@ -0,0 +1,161 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with Telepresence. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use Telepresence to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+  Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs enable this easy collaboration.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL generated when you created the intercept, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+Now you're able to share your fix in your local environment with your team!
+
+  To get more information regarding Preview URLs and intercepts, visit the Developer Control Plane .
+
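+
+Under the hood, every intercept is keyed by an HTTP header, which is how requests arriving via a preview URL are told apart from regular traffic. You can exercise this manually; a sketch, where the header value comes from your own `telepresence intercept` output and both placeholders are illustrative:
+
+```
+# Requests that carry the intercept header are routed to your laptop;
+# requests without it continue to the stable version in the cluster.
+curl -H "x-telepresence-intercept-id: <id-from-intercept-output>:web" http://<your-ingress-host>/
+```
+
+Section 6 below walks through this mechanism in more detail.
+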

+
+## 6. How/Why does this all work?
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+## What's Next?
+
+Apply what you've learned from this guide and employ the Emojivoto application in your own local development environment. See the Creating a local Kubernetes development environment pages for [Golang](../../install/qs-go-advanced/) and [Java](../../install/qs-java-advanced/) to learn more.
+
+export const metaData = [
+{name: "Emojivoto app", path: "https://github.com/datawire/emojivoto"},
+{name: "Docker container", path: "https://github.com/datawire/demo-containers"},
+{name: "Login component", path: "https://github.com/datawire/getambassador.io/blob/master/src/components/Docs/Telepresence/Login.js"},
+] diff --git a/docs/telepresence/2.4/quick-start/demo-react.md b/docs/telepresence/2.4/quick-start/demo-react.md new file mode 100644 index 000000000..196eaef5a --- /dev/null +++ b/docs/telepresence/2.4/quick-start/demo-react.md @@ -0,0 +1,259 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start - React
+
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes, you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +Check this [FAQ entry](../../troubleshooting#daemon-service-did-not-start) in case the daemon does not start. + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. 
Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +

+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd <path-to-your-clone>/emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨  Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 generates the same error as the application deployed in the cluster.
+
+  Victory, your local React server is running a-ok!
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code, then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you now see an error message instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+  This command must be run in the terminal window where you ran the script, because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+  The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +

+
+## What's Next?
+
+Apply what you've learned from this guide and employ the Emojivoto application in your own local development environment. See the Creating a local Kubernetes development environment pages for [Golang](../../install/qs-go-advanced/) and [Java](../../install/qs-java-advanced/) to learn more. \ No newline at end of file
diff --git a/docs/telepresence/2.4/quick-start/go.md b/docs/telepresence/2.4/quick-start/go.md new file mode 100644 index 000000000..a8eb1b055 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/go.md @@ -0,0 +1,191 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import {
+EmojivotoServicesList,
+DCPLink,
+Login,
+LoginCommand,
+DockerCommand,
+PreviewUrl,
+ExternalIp
+} from '../../../../../src/components/Docs/Telepresence';
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence';
+
+
+# Telepresence Quick Start - **Go**
+
+This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally.
+
+If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker.
+
+## 1. Get a free remote cluster
+
+Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster.
+
+1.
+2. Go to the Service Catalog to see all the services deployed on your cluster.
+
+   The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster.
+
+

+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.
+
+   ```go
+   3  import (
+   4    "context"
+   5    "fmt" // DELETE THIS LINE
+   6
+   7    pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and also replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for macOS, or `Menu -> File -> Save`) and verify that the error is now fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic.
+The `voting-svc` traffic is now routed to the local Dockerized version of the service: the intercept sends *all of the traffic* destined for `voting-svc` to the fixed version running locally.
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by your service, all thanks to the preview URL.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
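+
+When you are finished sharing, you can tear the session down; a short sketch:
+
+```
+$ telepresence leave voting   # stop intercepting the voting service
+$ telepresence quit           # optionally stop the local daemons
+```
+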
+ +## What's Next? + +Apply what you've learned from this guide and employ the Emojivoto application in your own local development environment. See the [Creating a local Kubernetes development environment](../../install/qs-go-advanced/) page to learn more. diff --git a/docs/telepresence/2.4/quick-start/index.md b/docs/telepresence/2.4/quick-start/index.md new file mode 100644 index 000000000..f2305d721 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/index.md @@ -0,0 +1,7 @@ +--- + description: Telepresence Quick Start. +--- + +import TelepresenceQuickStartLanding from './TelepresenceQuickStartLanding'; + + diff --git a/docs/telepresence/2.4/quick-start/qs-cards.js b/docs/telepresence/2.4/quick-start/qs-cards.js new file mode 100644 index 000000000..2e8ff59fa --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+ + + + + + Create a Local K8s Dev Environment + + + + Read the advanced guide on how to create your own complete + Kubernetes development environment. + + + + + + + + Collaborating + + + + Use preview URLS to collaborate with your colleagues and others + outside of your organization. + + + + + + + + Outbound Sessions + + + + While connected to the cluster, your laptop can interact with + services as if it was another pod in the cluster. + + + + +
+ ); +} diff --git a/docs/telepresence/2.4/quick-start/qs-go.md b/docs/telepresence/2.4/quick-start/qs-go.md new file mode 100644 index 000000000..8bf288211 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-go.md @@ -0,0 +1,400 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to support auto-reloading of the Go server, which we'll use later. Confirm it is installed by running:
+   `go get github.com/pilu/fresh`
+   Then start the Go server:
+   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app         | Welcome to the DataProcessingGoService!
+   ```
+
+   Install Go from here and set your GOPATH if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+  Victory, your local Go server is running a-ok!
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## Create a complete development environment using this demo application + +Apply what you've learned from this guide and employ the Emojivoto application in your own local development environment. See the [Creating a local Kubernetes development environment](../../install/qs-go-advanced/) page to learn more. + +## What's Next? 
+ + diff --git a/docs/telepresence/2.4/quick-start/qs-java.md b/docs/telepresence/2.4/quick-start/qs-java.md new file mode 100644 index 000000000..a42558c81 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
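+ Because this quick start restarts the Java server rather than hot-reloading it, it is worth confirming from your other terminal window that the restarted process picked up the new property value:
+
+ ```
+ $ curl localhost:3000/color
+
+ "orange"
+ ```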
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.4/quick-start/qs-node.md b/docs/telepresence/2.4/quick-start/qs-node.md new file mode 100644 index 000000000..ff37ffa29 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
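+ At any point you can confirm that the intercept is still in place with `telepresence list`; the output below is a sketch and its exact format may vary between versions:
+
+ ```
+ $ telepresence list
+
+ dataprocessingservice: intercepted
+ ```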
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.4/quick-start/qs-python-fastapi.md b/docs/telepresence/2.4/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..3fc049314 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+`pip install fastapi uvicorn requests && python app.py` +(FastAPI requires Python 3; if `python` and `pip` on your system still point to Python 2, use `pip3 install fastapi uvicorn requests && python3 app.py` instead.) + + ``` + $ pip install fastapi uvicorn requests && python app.py + + Collecting fastapi + ... + Application startup complete. + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local service is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
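+ If the orange color does not appear, first check that your connection to the cluster is still healthy; `telepresence status` reports the state of the local daemons and the cluster connection (its exact output varies between versions):
+
+ ```
+ $ telepresence status
+ ```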
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.4/quick-start/qs-python.md b/docs/telepresence/2.4/quick-start/qs-python.md new file mode 100644 index 000000000..e4c7b4996 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+<div class="docs-article-toc">
+<h3>Contents</h3>
+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +</div>
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server. +Python 2.x: `pip install flask requests && python app.py` +Python 3.x: `pip3 install flask requests && python3 app.py` + + ``` + $ pip install flask requests && python app.py + + Collecting flask + ... + Welcome to the DataServiceProcessingPythonService! + ... + + ``` + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Python server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
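+ Under the hood, the intercept works by injecting a Traffic Agent sidecar into the service's pod (see the architecture reference for details). You can inspect the sidecar with `kubectl`, substituting the pod name from the earlier `kubectl get pods` output for the illustrative name used here:
+
+ ```
+ $ kubectl describe pod dataprocessingservice-5f6bfdcf7b-qvd27
+ ```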
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.4/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.4/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/2.4/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 
100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/2.4/redirects.yml b/docs/telepresence/2.4/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.4/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.4/reference/architecture.md b/docs/telepresence/2.4/reference/architecture.md new file mode 100644 index 000000000..47facb0b8 --- /dev/null +++ b/docs/telepresence/2.4/reference/architecture.md @@ -0,0 +1,63 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence Daemon, installs the Traffic Manager
+in your cluster, authenticates against Ambassador Cloud, and configures all of those elements to communicate with one
+another.
+
+## Telepresence Daemon
+
+The Telepresence Daemon runs on a developer's workstation and is its main point of communication with the cluster's
+network. All requests from and to the cluster go through the Daemon, which communicates with the Traffic Manager.
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When
+Telepresence is run with the `connect`, `intercept`, or `list` command, the Telepresence CLI first checks the
+cluster for the Traffic Manager deployment and creates it if it is missing.
+
+When an intercept is created with a Preview URL, the Traffic Manager establishes a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified when the Preview URL was created.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe
+pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod that usually handles requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have
+accessed them, and deleting them.
+
+# Changes from Service Preview
+
+With Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required: the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.4/reference/client.md b/docs/telepresence/2.4/reference/client.md new file mode 100644 index 000000000..1fe86a1cf --- /dev/null +++ b/docs/telepresence/2.4/reference/client.md @@ -0,0 +1,32 @@ +--- +description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs you out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service-name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a Docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept-name>` |
+| `loglevel` | Temporarily changes the log level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI + Traffic Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
+| `test-vpn` | Runs a [configuration check](../vpn#the-test-vpn-command) on a VPN setup | diff --git a/docs/telepresence/2.4/reference/client/login.md b/docs/telepresence/2.4/reference/client/login.md new file mode 100644 index 000000000..d1d0d8fad --- /dev/null +++ b/docs/telepresence/2.4/reference/client/login.md @@ -0,0 +1,53 @@ +# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud). Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to run
+`telepresence login` explicitly; in practice, you only need to do so
+when you require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/ .
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or simply "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
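+
+For example, in a CI environment you might store the key in an
+environment variable (the name `TELEPRESENCE_API_KEY` here is just an
+illustration, not a variable that Telepresence itself reads) and log
+in non-interactively:
+
+```console
+$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
+```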
diff --git a/docs/telepresence/2.4/reference/client/login/apikey-2.png b/docs/telepresence/2.4/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.4/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.4/reference/client/login/apikey-3.png b/docs/telepresence/2.4/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.4/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.4/reference/client/login/apikey-4.png b/docs/telepresence/2.4/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.4/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.4/reference/cluster-config.md b/docs/telepresence/2.4/reference/cluster-config.md new file mode 100644 index 000000000..aad5b64b4 --- /dev/null +++ b/docs/telepresence/2.4/reference/cluster-config.md @@ -0,0 +1,312 @@ +import Alert from '@material-ui/lab/Alert'; +import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence'; + +# Cluster-side configuration + +For the most part, Telepresence doesn't require any special +configuration in the cluster and can be used right away in any +cluster (as long as the user has adequate [RBAC permissions](../rbac) +and the cluster's server version is `1.17.0` or higher). + +However, some advanced features do require some configuration in the +cluster. + +## TLS + +In this example, other applications in the cluster expect to speak TLS to your +intercepted application (perhaps you're using a service-mesh that does +mTLS). + +In order to use `--mechanism=http` (or any features that imply +`--mechanism=http`) you need to tell Telepresence about the TLS +certificates in use. + +Tell Telepresence about the certificates in use by adjusting your +[workload's](../intercepts/#supported-workloads) Pod template to set a couple of +annotations on the intercepted Pods: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret" # optional ++ "getambassador.io/inject-originating-tls-secret": "your-originating-secret" # optional + spec: ++ serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets" + containers: +``` + +- The `getambassador.io/inject-terminating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS server + certificate to use for decrypting and responding to incoming + requests. + + When Telepresence modifies the Service and workload port + definitions to point at the Telepresence Agent sidecar's port + instead of your application's actual port, the sidecar will use this + certificate to terminate TLS. + +- The `getambassador.io/inject-originating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS + client certificate to use for communicating with your application. + + You will need to set this if your application expects incoming + requests to speak TLS (for example, your + code expects to handle mTLS itself instead of letting a service-mesh + sidecar handle mTLS for it, or the port definition that Telepresence + modified pointed at the service-mesh sidecar instead of at your + application). 
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod.
+
+The Pod will need to have permission to `get` and `watch` each of
+those Secrets.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air gapped cluster
+
+If your cluster is on an isolated network such that it cannot
+communicate with Ambassador Cloud, then some additional configuration
+is required to acquire a license key in order to use personal
+intercepts.
+
+### Create a license
+
+1.
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:
+
+    ```
+    $ telepresence current-cluster-id
+
+      Cluster ID: <some-cluster-id>
+    ```
+
+4. Click *Generate API Key* to finish generating the license.
+
+5. On the licenses page, download the license file associated with your cluster.
+
+### Add license to cluster
+There are two separate ways you can add the license to your cluster: manually creating and deploying
+the license secret, or having the Helm chart manage the secret.
+
+You only need to do one of the two options.
+
+#### Manual deploy of license secret
+
+1. Use this command to generate a Kubernetes Secret config using the license file:
+
+    ```
+    $ telepresence license -f <downloaded-license-file>
+
+    apiVersion: v1
+    data:
+      hostDomain: <long-base64-string>
+      license: <longer-base64-string>
+    kind: Secret
+    metadata:
+      creationTimestamp: null
+      name: systema-license
+      namespace: ambassador
+    ```
+
+2. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
+the following into a file (for this example, we'll assume it's called `license-values.yaml`):
+
+    ```
+    licenseKey:
+      # This mounts the secret into the traffic-manager
+      create: true
+      secret:
+        # This tells the helm chart not to create the secret since you've created it yourself
+        create: false
+    ```
+
+4. Install the Helm chart into the cluster:
+
+    ```
+    helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml
+    ```
+
+5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
+pulled and in a registry your cluster can pull from.
+
+6. Have users set the `images` [config keys](../config/#images) so Telepresence uses the aforementioned image for their agent.
+
+#### Helm chart manages the secret
+
+1. Get the JWT token from the downloaded license file:
+
+    ```
+    $ cat ~/Downloads/ambassador.License_for_yourcluster
+    eyJhbGnotarealtoken.butanexample
+    ```
+
+2. Create the following values file, substituting your real JWT token for the one used in the example below
+(for this example, we'll assume the following is placed in a file called `license-values.yaml`):
+
+    ```
+    licenseKey:
+      # This mounts the secret into the traffic-manager
+      create: true
+      # This is the value from the license file you downloaded. This value is an example and will not work
+      value: eyJhbGnotarealtoken.butanexample
+      secret:
+        # This tells the helm chart to create the secret
+        create: true
+    ```
+
+3.
Install the Helm chart into the cluster:
+
+    ```
+    helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml
+    ```
+
+Users will now be able to use personal intercepts with the
+`--preview-url=false` flag. Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet)
+template to add the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your
+cluster so that it reflects the desired state from an external Git repository, this behavior can put
+your workload out of sync with that external desired state.
+
+To solve this issue, you can use Telepresence's Mutating Webhook alternative mechanism. Intercepted
+workloads will then stay untouched and only the underlying pods will be modified to inject the Traffic
+Agent sidecar container and update the port definitions.
+
+Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your
+workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: enabled
+     spec:
+       containers:
+```
+
+### Service Port Annotation
+
+A service port annotation can be added to the workload to make the Mutating Webhook select a specific port
+in the service. This is necessary when the service has multiple ports.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
+         telepresence.getambassador.io/inject-traffic-agent: enabled
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Service Name Annotation
+
+A service name annotation can be added to the workload to make the Mutating Webhook select a specific Kubernetes service.
+This is necessary when the workload is exposed by multiple services.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
+         telepresence.getambassador.io/inject-traffic-agent: enabled
++        telepresence.getambassador.io/inject-service-name: my-service
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service is pointing at a port number, in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+<Alert>
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+</Alert>
+
+<Alert>
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
+</Alert>
+
+If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).
+
+For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: your-service
+spec:
+  type: ClusterIP
+  selector:
+    service: your-service
+  ports:
+    - port: 80
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: your-service
+  labels:
+    service: your-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: your-service
+  template:
+    metadata:
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
+      labels:
+        service: your-service
+    spec:
+      containers:
+        - name: your-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+``` diff --git a/docs/telepresence/2.4/reference/config.md b/docs/telepresence/2.4/reference/config.md new file mode 100644 index 000000000..3d42b005b --- /dev/null +++ b/docs/telepresence/2.4/reference/config.md @@ -0,0 +1,298 @@ +# Laptop-side configuration
+
+## Global Configuration
+Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file takes precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `telepresenceAPI`, and `intercept` keys.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**Note: this config shouldn't be used verbatim, since the `privateRepo` registry used doesn't exist.**
+
+```yaml
+timeouts:
+  agentInstall: 1m
+  intercept: 10s
+logLevels:
+  userDaemon: debug
+images:
+  registry: privateRepo # This overrides the default docker.io/datawire repo
+  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
+cloud:
+  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
+grpc:
+  maxReceiveSize: 10Mi
+telepresenceAPI:
+  port: 9980
+intercept:
+  appProtocolStrategy: portName
+  defaultPort: "8088"
+```
+
+#### Timeouts
+
+Values for `timeouts` are all durations either as a number of seconds
+or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
+can be fractional (`1.5h`) or combined (`2h45m`).
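+
+As a quick sketch, the following (illustrative, not recommended) values are all
+valid and show the accepted duration forms:
+
+```yaml
+timeouts:
+  agentInstall: 120   # plain number of seconds
+  apply: 45.5         # fractional number of seconds
+  intercept: 30s      # duration string with a unit suffix
+  helm: 2h45m         # combined duration string
+```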
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+
+For whichever log level you select, you will get logs labeled with that level and of higher severity
+(e.g., if you use `info`, you will also get logs labeled `error`, but NOT logs labeled `debug`).
+
+These are the valid fields for the `logLevels` key:
+
+| Field | Description | Type | Default |
+|--------------|---------------------------------------------------------------------|---------------------------------------------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm) to prevent them
+from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
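+
+For example, a team that mirrors the images into a private registry might share
+a config like the following (the registry name is hypothetical):
+
+```yaml
+images:
+  registry: registry.example.com/telepresence
+  webhookRegistry: registry.example.com/telepresence
+```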
+
+These are the valid fields for the `images` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a Helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
+| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |
+
+#### Cloud
+Values for `cloud` are listed below and their type varies, so please see the table for the expected type for each config value.
+These fields control how the client interacts with the Cloud service.
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false |
+| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not.
You will see each message at most once within the duration given by this config. | [duration][go-duration] [string][yaml-str] | 168h |
+| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
+| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |
+
+Telepresence attempts to auto-detect if the cluster is capable of
+communication with Ambassador Cloud, but may still prompt you to log
+in when only the on-laptop client wishes to communicate with
+Ambassador Cloud. If you want those auto-login points to be disabled
+as well, or would like it to not attempt to communicate with
+Ambassador Cloud at all (even for the auto-detection), then be sure to
+set the `skipLogin` value to `true`.
+
+Reminder: To use personal intercepts, which normally require a login,
+you must have a license key in your cluster and specify which
+`agentImage` should be installed by also adding the following to your
+`config.yml`:
+
+```yaml
+images:
+  agentImage: <image-registry>/<image-name>:<image-tag>
+```
+
+#### Grpc
+The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+The `telepresenceAPI` key controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Intercept
+The `intercept` key controls how Telepresence intercepts communications with the intercepted service.
+
+The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".
+
+The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:
+
+| Value | Resulting action |
+|--------------|--------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence traffic-agent will probe the intercepted container to check whether it supports http2 |
+| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
+| `http` | Telepresence will use http |
+| `http2` | Telepresence will use http2 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`.
The following protocols are recognized:
+
+| Protocol | Meaning |
+|----------|---------------------------------------|
+| `http` | Plaintext HTTP/1.1 traffic |
+| `http2` | Plaintext HTTP/2 traffic |
+| `https` | TLS-encrypted HTTP (1.1 or 2) traffic |
+| `grpc` | Same as http2 |
+
+## Per-Cluster Configuration
+Some configuration is not global to Telepresence and is actually specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.
+
+### Values
+The current per-cluster configuration supports the `dns`, `also-proxy`, `never-proxy`, and `manager` keys.
+To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        dns:
+        also-proxy:
+        manager:
+  name: example-cluster
+```
+#### DNS
+The fields for `dns` are: local-ip, remote-ip, exclude-suffixes, include-suffixes, and lookup-timeout.
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `local-ip` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `remote-ip` | The address of the cluster's DNS service. | IP address [string][yaml-str] | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service |
+| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fall back in case of the overriding resolver) | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookup-timeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+Here is an example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        dns:
+          include-suffixes:
+          - .se
+          exclude-suffixes:
+          - .com
+  name: example-cluster
+```
+
+#### AlsoProxy
+
+When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device.
+All connections to addresses that the subnet spans will be dispatched to the cluster.
+
+Here is an example kubeconfig for the subnet `1.2.3.4/32`:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        also-proxy:
+        - 1.2.3.4/32
+  name: example-cluster
+```
+
+#### NeverProxy
+
+When using `never-proxy`, you provide a list of subnets after the key in your kubeconfig file. These will never be routed via the
+TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+Telepresence connects is the route they will keep.
+
+Here is an example kubeconfig for the subnet `1.2.3.4/32`:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        never-proxy:
+        - 1.2.3.4/32
+  name: example-cluster
+```
+
+##### Using AlsoProxy together with NeverProxy
+
+Never-proxy and also-proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
+Usually this means that the most specific route will win.
+
+So, for example, if an `also-proxy` subnet falls within a broader `never-proxy` subnet:
+
+```yaml
+never-proxy: [10.0.0.0/16]
+also-proxy: [10.0.5.0/24]
+```
+
+Then the specific `also-proxy` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `never-proxy` subnet is inside a larger `also-proxy` subnet:
+
+```yaml
+also-proxy: [10.0.0.0/16]
+never-proxy: [10.0.5.0/24]
+```
+
+Then all of the `also-proxy` subnet `10.0.0.0/16` will be proxied, with the exception of the specific `never-proxy` subnet `10.0.5.0/24`.
+
+#### Manager
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45 diff --git a/docs/telepresence/2.4/reference/dns.md b/docs/telepresence/2.4/reference/dns.md new file mode 100644 index 000000000..e38fbc61d --- /dev/null +++ b/docs/telepresence/2.4/reference/dns.md @@ -0,0 +1,75 @@ +# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in those namespaces by service name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://<cluster-ip>)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  ...
+  Emoji Vote
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+      Intercept name    : web
+      State             : ACTIVE
+      Workload kind     : Deployment
+      Destination       : 127.0.0.1:8080
+      Volume Mount Point: /tmp/telfs-166119801
+      Intercepting      : HTTP requests that match all headers:
+            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  ...
+  Emoji Vote
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups. diff --git a/docs/telepresence/2.4/reference/docker-run.md b/docs/telepresence/2.4/reference/docker-run.md new file mode 100644 index 000000000..2262f0a55 --- /dev/null +++ b/docs/telepresence/2.4/reference/docker-run.md @@ -0,0 +1,31 @@ +---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept <service-name> --port <port> --docker-run -- <image>`
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container ports can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `--env-file <file>` Loads the intercepted environment
+- `--name intercept-<intercept-name>-<port>` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-p <port>:<container-port>` The local port for the intercept and the container port
+- `-v <local-dir>:<container-dir>` Volume mount specification; see the CLI help for the `--mount` and `--docker-mount` flags for more info diff --git a/docs/telepresence/2.4/reference/environment.md b/docs/telepresence/2.4/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.4/reference/environment.md @@ -0,0 +1,46 @@ +---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the local copy of the intercepted service running on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do, instead filter based on a matching `x-intercept-id` header.
This ensures that messages belonging to a certain intercept are always consumed by the intercepting process. diff --git a/docs/telepresence/2.4/reference/inside-container.md b/docs/telepresence/2.4/reference/inside-container.md new file mode 100644 index 000000000..f83ef3575 --- /dev/null +++ b/docs/telepresence/2.4/reference/inside-container.md @@ -0,0 +1,37 @@ +# Running Telepresence inside a container
+
+It is sometimes desirable to run Telepresence inside a container. One reason can be to avoid any side effects on the workstation's network, another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.
+
+## Building the container
+
+Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:
+
+```Dockerfile
+# Dockerfile with telepresence and its prerequisites
+FROM alpine:3.13
+
+# Install Telepresence prerequisites
+RUN apk add --no-cache curl iproute2 sshfs
+
+# Download and install the telepresence binary
+RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
+    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
+```
+In order to build the container, do this in the same directory as the `Dockerfile`:
+```
+$ docker build -t tp-in-docker .
+```
+
+## Running the container
+
+Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.
+
+The command to run the container can look like this:
+```bash
+$ docker run \
+    --cap-add=NET_ADMIN \
+    --device /dev/net/tun:/dev/net/tun \
+    --network=host \
+    -v ~/.kube/config:/root/.kube/config \
+    -it --rm tp-in-docker
+``` diff --git a/docs/telepresence/2.4/reference/intercepts/index.md b/docs/telepresence/2.4/reference/intercepts/index.md new file mode 100644 index 000000000..bd9c5bdce --- /dev/null +++ b/docs/telepresence/2.4/reference/intercepts/index.md @@ -0,0 +1,366 @@ +import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, Telepresence installs a *traffic-agent*
+sidecar into the workload. That traffic-agent supports one or more
+intercept *mechanisms* that it uses to decide which traffic to
+intercept. Telepresence has a simple default traffic-agent; however,
+you can configure a different traffic-agent with more sophisticated
+mechanisms either by setting the [`images.agentImage` field in
+`config.yml`](../config/#images) or by writing an
+[`extensions/${extension}.yml`][extensions] file that tells
+Telepresence about a traffic-agent that it can use, what mechanisms
+that traffic-agent supports, and command-line flags to expose to the
+user to configure that mechanism. You may tell Telepresence which
+known mechanism to use with the `--mechanism=${mechanism}` flag or by
+setting one of the `--${mechanism}-XXX` flags, which implicitly set
+the mechanism; for example, setting `--http-match=auto` implicitly
+sets `--mechanism=http`.
+
+The default open-source traffic-agent only supports the `tcp`
+mechanism, which treats the raw layer 4 TCP streams as opaque and
+sends all of that traffic down to the developer's workstation. This
+means that it is a "global" intercept, affecting all users of the
+cluster.
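+
+As a sketch, explicitly requesting this default mechanism for a
+hypothetical workload named `example-service` would look like:
+
+```console
+$ telepresence intercept example-service --port 8080 --mechanism=tcp
+```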
+
+In addition to the default open-source traffic-agent, Telepresence
+already knows about the Ambassador Cloud
+[traffic-agent][ambassador-agent], which supports the `http`
+mechanism. The `http` mechanism operates at a higher layer, working
+with layer 7 HTTP, and may intercept specific HTTP requests, allowing
+other HTTP requests through to the regular service. This allows for
+"personal" intercepts which only intercept traffic tagged as belonging
+to a given developer.
+
+[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
+[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50
+
+## Intercept behavior when logged in to Ambassador Cloud
+
+Logging in to Ambassador Cloud (with [`telepresence
+login`](../client/login/)) changes the Telepresence defaults in two
+ways.
+
+First, being logged in to Ambassador Cloud causes Telepresence to
+default to `--mechanism=http --http-match=auto` (or just
+`--http-match=auto`, as `--http-match` implies `--mechanism=http`).
+If you hadn't been logged in, it would have defaulted to
+`--mechanism=tcp`. This tells Telepresence to use the Ambassador
+Cloud traffic-agent to do smart "personal" intercepts and only
+intercept a subset of HTTP requests, rather than just intercepting the
+entirety of all TCP connections. This is important for working in a
+shared cluster with teammates, and is important for the preview URL
+functionality below. See `telepresence intercept --help` for
+information on using `--http-match` to customize which requests it
+intercepts.
+
+Secondly, being logged in causes Telepresence to default to
+`--preview-url=true`. If you hadn't been logged in, it would have
+defaulted to `--preview-url=false`. This tells Telepresence to take
+advantage of Ambassador Cloud to create a preview URL for this
+intercept, creating a shareable URL that automatically sets the
+appropriate headers to have requests coming from the preview URL be
+intercepted. In order to create the preview URL, it will prompt you
+for four settings about how your cluster's ingress is configured. For
+each, Telepresence tries to intelligently detect the correct value for
+your cluster; if it detects the value correctly, you may simply press
+"enter" and accept the default, otherwise you must tell Telepresence
+the correct value.
+
+When you create an intercept with the `http` mechanism, Telepresence
+determines whether the application protocol uses HTTP/1.1 or HTTP/2. If the
+service's `ports.appProtocol` field is set, Telepresence uses that. If not,
+then Telepresence uses the configured application protocol strategy to determine
+the protocol. The default behavior (`http2Probe` strategy) sends a
+`GET /telepresence-http2-check` request to your service to determine if it supports
+HTTP/2. This is required for the intercepts to behave correctly.
+
+### TLS
+
+If the intercepted service has been set up for `--mechanism=http`, Telepresence
+needs to terminate the TLS connection for the `http` mechanism to function in your
+intercepts. Additionally, you need to ensure the
+[TLS annotations](../cluster-config/#tls) are properly entered in your workload's
+Pod template to designate that requests leaving your service still speak TLS
+outside of the service as expected.
+
+Use the `--http-plaintext` flag when intercepting a service that uses TLS in the
+cluster but you want to use plaintext for the communication with the
+process on your local workstation.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+<Alert>
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+</Alert>
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option. When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../environment/) for more details.
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port>
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```shell
+telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
+Using Deployment <deployment-name>
+intercepted
+    Intercept name: <full-name>
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:<TCP-port>
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<full-name>")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://<cluster-public-IP>
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: <laptop-user>@<laptop-name>
+```
+
+Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.
+
+| Flag             | Description                                                      | Required |
+| ---------------- | ---------------------------------------------------------------- | -------- |
+| `--ingress-host` | The IP address for the ingress                                   | yes      |
+| `--ingress-port` | The port for the ingress                                         | yes      |
+| `--ingress-tls`  | Whether TLS should be used                                       | no       |
+| `--ingress-l5`   | Whether a different IP address should be used in request headers | no       |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself. To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept --port=:
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above and it will change which
+service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/quick-start/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag. So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout- --port --service echo-stable
+Using ReplicaSet echo-rollout-
+intercepted
+    Intercept name    : echo-rollout-
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept --port=: --to-pod=
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod= --to-pod=`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+
+
+
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
+
+
+
+Intercepting headless services without a selector is not supported.
+
 diff --git a/docs/telepresence/2.4/reference/intercepts/manual-agent.md b/docs/telepresence/2.4/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..e818171ce
--- /dev/null
+++ b/docs/telepresence/2.4/reference/intercepts/manual-agent.md
@@ -0,0 +1,221 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you create a Telepresence intercept, Telepresence automatically edits the workload and services for you, and running `telepresence uninstall --agent ` removes those edits again. In some GitOps workflows, you may need to use the
+[Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) to keep intercepted workloads unmodified
+while you target changes on specific pods.
+
+
+In situations where you don't have access to the proper permissions for numeric ports, as noted in the Note on numeric ports
+section of the documentation, it is possible to manually inject the Traffic Agent. Because this is not the recommended approach
+to making a workload interceptable, try the Mutating Webhook before proceeding.
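+
+If the Mutating Webhook is available to you, the recommended path is a single annotation on
+the workload's pod template rather than the manual procedure below (a sketch; see the linked
+Mutating Webhook documentation for the authoritative annotation name and values):
+
+```yaml
+spec:
+  template:
+    metadata:
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: enabled
+```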
+
+
+## Procedure
+
+You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
+uses the following Deployment:
+
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: "my-service"
+  labels:
+    service: my-service
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      service: my-service
+  template:
+    metadata:
+      labels:
+        service: my-service
+    spec:
+      containers:
+        - name: echo-container
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+The Deployment is exposed by the following service:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: "my-service"
+spec:
+  type: ClusterIP
+  selector:
+    service: my-service
+  ports:
+    - port: 80
+      targetPort: 8080
+```
+
+### 1. Generating the YAML
+
+First, generate the YAML for the traffic-agent container:
+
+```console
+$ telepresence genyaml container --container-name echo-container --port 8080 --output - --input deployment.yaml
+args:
+- agent
+env:
+- name: TELEPRESENCE_CONTAINER
+  value: echo-container
+- name: _TEL_AGENT_LOG_LEVEL
+  value: info
+- name: _TEL_AGENT_NAME
+  value: my-service
+- name: _TEL_AGENT_NAMESPACE
+  valueFrom:
+    fieldRef:
+      fieldPath: metadata.namespace
+- name: _TEL_AGENT_POD_IP
+  valueFrom:
+    fieldRef:
+      fieldPath: status.podIP
+- name: _TEL_AGENT_APP_PORT
+  value: "8080"
+- name: _TEL_AGENT_AGENT_PORT
+  value: "9900"
+- name: _TEL_AGENT_MANAGER_HOST
+  value: traffic-manager.ambassador
+image: docker.io/datawire/tel2:2.4.6
+name: traffic-agent
+ports:
+- containerPort: 9900
+  protocol: TCP
+readinessProbe:
+  exec:
+    command:
+    - /bin/stat
+    - /tmp/agent/ready
+resources: {}
+volumeMounts:
+- mountPath: /tel_pod_info
+  name: traffic-annotations
+```
+
+Next, generate the YAML for the volume:
+
+```console
+$ telepresence genyaml volume --output - --input deployment.yaml
+downwardAPI:
+  items:
+  - fieldRef:
+      fieldPath: metadata.annotations
+    path: annotations
+name: traffic-annotations
+```
+
+
+Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
+
+
+### 2. Injecting the YAML into the Deployment
+
+You need to modify the `Deployment` YAML to include the container and the volume you generated. These are placed as elements of `spec.template.spec.containers` and `spec.template.spec.volumes` respectively.
+You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`.
+These changes should look like the following: + +```diff +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: TELEPRESENCE_CONTAINER ++ value: echo-container ++ - name: _TEL_AGENT_LOG_LEVEL ++ value: info ++ - name: _TEL_AGENT_NAME ++ value: my-service ++ - name: _TEL_AGENT_NAMESPACE ++ valueFrom: ++ fieldRef: ++ fieldPath: metadata.namespace ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ fieldPath: status.podIP ++ - name: _TEL_AGENT_APP_PORT ++ value: "8080" ++ - name: _TEL_AGENT_AGENT_PORT ++ value: "9900" ++ - name: _TEL_AGENT_MANAGER_HOST ++ value: traffic-manager.ambassador ++ image: docker.io/datawire/tel2:2.4.6 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: {} ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations +``` + +### 3. Modifying the service + +Once the modified deployment YAML has been applied to the cluster, you need to modify the Service to route traffic to the Traffic Agent. +You can do this by changing the exposed `targetPort` to `9900`. The resulting service should look like: + +```diff +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 +- targetPort: 8080 ++ targetPort: 9900 +``` diff --git a/docs/telepresence/2.4/reference/linkerd.md b/docs/telepresence/2.4/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.4/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. 
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever you attempt to access the `quote` service.
 diff --git a/docs/telepresence/2.4/reference/rbac.md b/docs/telepresence/2.4/reference/rbac.md
new file mode 100644
index 000000000..2c9af7c1c
--- /dev/null
+++ b/docs/telepresence/2.4/reference/rbac.md
@@ -0,0 +1,291 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories of cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, both described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
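+    ## For example (hypothetical values; use your provider's API endpoint and CA data):
+    # server: https://my-cluster.example.com:6443
+    # certificate-authority-data: <base64-encoded CA certificate>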
+contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: + - "" + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] + - apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list"] + - apiGroups: + - "rbac.authorization.k8s.io" + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: + - "" + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: + - "" + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: + - "admissionregistration.k8s.io" + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: + - "" + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to install the traffic-manager: Using `telepresence connect` and installing the [helm chart](../../install/helm/). + +By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. 
If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate value + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +- apiGroups: + - "" + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete"] +- apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "update"] +- apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] +- apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "patch"] +- apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] +- apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list"] +- apiGroups: + - "rbac.authorization.k8s.io" + resources: ["clusterroles", "clusterrolebindings"] + verbs: ["get", "list", "watch"] +- apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cluster where users are constrained to a specific namespace(s). 
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+rules:
+- apiGroups:
+  - ""
+  resources: ["pods"]
+  verbs: ["get", "list", "create", "watch", "delete"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["update"]
+- apiGroups:
+  - ""
+  resources: ["pods/portforward"]
+  verbs: ["create"]
+- apiGroups:
+  - "apps"
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "update"]
+- apiGroups:
+  - "getambassador.io"
+  resources: ["hosts", "mappings"]
+  verbs: ["*"]
+- apiGroups:
+  - ""
+  resources: ["endpoints"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding # RBAC to access ambassador namespace
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: t2-ambassador-binding
+  namespace: ambassador
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: RoleBinding # RoleBinding for T2 namespace to be intercepted
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-test-binding # Update "test" for appropriate namespace to be intercepted
+  namespace: test # Update "test" for appropriate namespace to be intercepted
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-namespace-role
+rules:
+- apiGroups:
+  - ""
+  resources: ["namespaces"]
+  verbs: ["get", "list", "watch"]
+- apiGroups:
+  - ""
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-namespace-binding
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+  namespace: ambassador
+roleRef:
+  kind: ClusterRole
+  name: telepresence-namespace-role
+  apiGroup: rbac.authorization.k8s.io
+```
 diff --git a/docs/telepresence/2.4/reference/restapi.md b/docs/telepresence/2.4/reference/restapi.md
new file mode 100644
index 000000000..e3934abd4
--- /dev/null
+++ b/docs/telepresence/2.4/reference/restapi.md
@@ -0,0 +1,117 @@
+# Telepresence RESTful API server
+
+Telepresence can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has two endpoints: the standard `healthz` endpoint and the `consume-here` endpoint.
+
+## Enabling the server
+The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.
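+
+As an illustration, the port can be set when installing the traffic-manager with Helm (a
+sketch; the chart and namespace follow the Helm installation docs, and `9980` is an
+arbitrary example port):
+
+```shell
+# Enable the RESTful API server on port 9980 at install time
+helm install traffic-manager datawire/telepresence \
+  --namespace ambassador \
+  --set telepresenceAPI.port=9980
+```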
+
+## Querying the server
+On the cluster side, it is the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed in exactly the same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+### healthz
+The `http://localhost:/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:/consume-here` is intended to be queried with a set of headers, typically obtained from a Kafka message or similar, and will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.
+
+Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: ` header. Telepresence needs this to identify the caller correctly. The value will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
+
+#### test endpoint using curl
+There are three prerequisites to fulfill before testing this endpoint using `curl -v` on the workstation.
+1. An intercept must be active
+2. The "/healthz" endpoint must respond with OK
+3. The ID of the intercept must be known. It will be visible as `x-telepresence-intercept-id` in the output of the `telepresence intercept` and `telepresence list` commands unless the intercept was started with `--http-match` flags. If it was, the `--env-file ` or `--env-json ` flag must also be used so that the environment can be examined. The variable to look for in the file is `TELEPRESENCE_INTERCEPT_ID`.
+
+Assuming that the API server runs on port 9980 and that the intercept was started with `-H 'foo: bar'`, we can now check that "/consume-here" returns "true" for the given headers.
+
+```console
+$ curl -v localhost:9980/consume-here -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'foo: bar'
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+> foo: bar
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: text/plain
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true%
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.
+#### Example code:
+
+Here's an example filter written in Go. It divides the actual URL creation (which only needs to run once) from the filter function to make the filter more performant:
+```go
+// Package filter sketches a message filter that asks the Telepresence
+// RESTful API server whether a message should be consumed.
+package filter
+
+import (
+	"fmt"
+	"io"
+	"net/http"
+	"os"
+	"strconv"
+)
+
+const portEnv = "TELEPRESENCE_API_PORT"
+const interceptIdEnv = "TELEPRESENCE_INTERCEPT_ID"
+
+// apiURL creates the generic URL needed to access the service
+func apiURL() (string, error) {
+	pe := os.Getenv(portEnv)
+	if _, err := strconv.ParseUint(pe, 10, 16); err != nil {
+		return "", fmt.Errorf("value %q of env %s does not represent a valid port number", pe, portEnv)
+	}
+	return "http://localhost:" + pe, nil
+}
+
+// consumeHereURL creates the URL for the "consume-here" endpoint
+func consumeHereURL() (string, error) {
+	apiURL, err := apiURL()
+	if err != nil {
+		return "", err
+	}
+	return apiURL + "/consume-here", nil
+}
+
+// consumeHere expects a URL created using consumeHereURL() and calls the endpoint with the given
+// headers and returns the result
+func consumeHere(url string, hm map[string]string) (bool, error) {
+	rq, err := http.NewRequest("GET", url, nil)
+	if err != nil {
+		return false, err
+	}
+	rq.Header = make(http.Header, len(hm)+1)
+	rq.Header.Set("X-Telepresence-Caller-Intercept-Id", os.Getenv(interceptIdEnv))
+	for k, v := range hm {
+		rq.Header.Set(k, v)
+	}
+	rs, err := http.DefaultClient.Do(rq)
+	if err != nil {
+		return false, err
+	}
+	defer rs.Body.Close()
+	b, err := io.ReadAll(rs.Body)
+	if err != nil {
+		return false, err
+	}
+	return strconv.ParseBool(string(b))
+}
+```
\ No newline at end of file
diff --git a/docs/telepresence/2.4/reference/routing.md b/docs/telepresence/2.4/reference/routing.md
new file mode 100644
index 000000000..671dae5d8
--- /dev/null
+++ b/docs/telepresence/2.4/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `include-suffixes` option in the
+[local DNS configuration](../config/#dns).
+
+#### Cluster-side DNS lookups
+The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it.
If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique set of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Each such file corresponds to a domain and contains the port number of the Telepresence resolver. Telepresence creates one file for each currently mapped namespace and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
+
+#### Linux overriding resolver
+Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.
+
+#### Windows resolver
+This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).
+
+#### DNS caching
+The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. On some systems it is also necessary to kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.
+
+Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (a shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.
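+
+As an example, extra suffixes can be added to the cluster-side lookup selection through the
+`telepresence.io` extension in your kubeconfig (a sketch; `.corp.example.com` is a
+placeholder domain, and the exact layout is described in the [local DNS configuration](../config/#dns)):
+
+```yaml
+- cluster:
+    server: https://127.0.0.1
+  extensions:
+  - name: telepresence.io
+    extension:
+      dns:
+        include-suffixes:
+        - .corp.example.com
+```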
+
+### Routing
+
+#### Subnets
+The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node` while others, like Amazon EKS, don't; Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.
+
+The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, plus the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.
+
+#### Connection origin
+A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.
+
+There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. Example:
+
+```bash
+curl some-host
+```
+results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host's network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries believed to be recursions are detected and answered with an NXNAME record. Telepresence does this to the best of its ability, but may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession; that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.
+
+The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2 this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

1: A future version of Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.4/reference/tun-device.md b/docs/telepresence/2.4/reference/tun-device.md
new file mode 100644
index 000000000..4410f6f3c
--- /dev/null
+++ b/docs/telepresence/2.4/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+## TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle`, but without
+any requirements for extra software, configuration or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a pod is intercepted and its volumes are mounted on the local machine, the mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.4/reference/volume.md b/docs/telepresence/2.4/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/2.4/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept --port --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept --port --mount=true -- /bin/bash
+Using Deployment
+intercepted
+    Intercept name    : 
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend the `$TELEPRESENCE_ROOT` environment variable to paths in order to use the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either the `--env-file` or `--env-json` flag to retrieve the variable.
diff --git a/docs/telepresence/2.4/reference/vpn.md b/docs/telepresence/2.4/reference/vpn.md
new file mode 100644
index 000000000..cb3f8acf2
--- /dev/null
+++ b/docs/telepresence/2.4/reference/vpn.md
@@ -0,0 +1,157 @@
+
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and telepresence. + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + + + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + + + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN +routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +``` + +This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while +telepresence is connected. + +The ideal resolution in this case is to move the pods to a different subnet. This is possible, +for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. +In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you +to reach hosts inside the VPC's `10.0.0.0/19` + +However, it is not always possible to move the pods to a different subnet. +In these cases, you should use the [never-proxy](../config#neverproxy) configuration to prevent certain +hosts from being masked. +This might be particularly important for DNS resolution. In an AWS ClientVPN VPN it is often +customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case): + + + +If this is the case for your VPN, you should place the DNS server in the never-proxy list for your +cluster. In your kubeconfig file, add a `telepresence` extension like so: + +```yaml +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + never-proxy: + - 10.0.0.2/32 +``` + +##### Case 2: Cluster masked by VPN + +In an instance where the Cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR +that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling +``` + +Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is +connected. + +As with the first case, the ideal resolution is to move the pods away, but this may not always +be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR +(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity. +One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) +section for more on split-tunneling). + +Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need +to use never-proxy rules to whitelist hosts in the VPN: + +``` +❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
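+
+Such never-proxy entries use the same kubeconfig pattern shown in Case 1 (a sketch;
+`192.0.2.10/32` stands in for a VPN-routed host you need to reach):
+
+```yaml
+- cluster:
+    server: https://127.0.0.1
+  extensions:
+  - name: telepresence.io
+    extension:
+      never-proxy:
+      - 192.0.2.10/32
+```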
diff --git a/docs/telepresence/2.4/release-notes/no-ssh.png b/docs/telepresence/2.4/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.4/release-notes/run-tp-in-docker.png b/docs/telepresence/2.4/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.2.png b/docs/telepresence/2.4/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.4/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.4/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.4/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.8-health-check.png 
b/docs/telepresence/2.4/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.4/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.4/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.4/release-notes/tunnel.jpg b/docs/telepresence/2.4/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.4/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.4/releaseNotes.yml b/docs/telepresence/2.4/releaseNotes.yml new file mode 100644 index 000000000..b91a78ecd --- /dev/null +++ b/docs/telepresence/2.4/releaseNotes.yml @@ -0,0 +1,1085 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. + +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version + with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using + the intercept.appProtocolStrategy in the config.yml file. 
+ docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts in agents injected by the + mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when + communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting + the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for unit and integration testing. It is + now easier for contributors to run tests on PRs since maintainers can add an + "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + The user will not be asked to log in or add ingress information when creating an intercept until a check has been + made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes + is properly logged as errors. + - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the + traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil + telepresenceAPI value. + docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. + See the VPN docs for more information on how to use it. + docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to + help determine whether messages with a given set of headers should be consumed from a message queue where the + intercept headers are added to the messages. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value + was removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections
+ body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled + connections now behave more like TCP connections, especially when it comes to timeouts. + We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. + This helps disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts. + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags. + docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, + analogous to also-proxy, that defines a set of subnets that + will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches. + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as `--swap-deployment` can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes. + - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the + cluster runs in a Docker container on the client). + docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login + will prompt the user to log in again to get a new token. Previously, + the user had to telepresence quit and telepresence logout + to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+ Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier + for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the + subnet generated from pods/services in the cluster, Telepresence would proxy traffic + to the API server, which would result in hanging or a failed connection. We now ensure + that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the + pod yaml manifest for all kubernetes components you are getting logs for + (traffic-manager and/or pods containing a + traffic-agent container). This flag is set to false + by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will + anonymize your pod names + namespaces in the output file. We replace the + sensitive names with simple names (e.g. pod-1, namespace-2) to maintain + relationships between the objects without exposing the real names of your + objects. This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this + terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. + docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a + headless service on whatever port it exposes and get a response from the + intercept. This leverages the same approach as intercepting numeric ports when + using the mutating webhook injector, and mainly requires the initContainer + to have NET_ADMIN capabilities. + docs: reference/intercepts/#intercepting-headless-services + + - type: change + title: Use one tunnel per connection instead of multiplexing into one tunnel + body: >- + We have changed Telepresence so that it uses one tunnel per connection instead + of multiplexing all connections into one tunnel. This will provide substantial + performance improvements. Clients will still be backwards compatible with older + managers that only support multiplexing.
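+ + # Illustrative only, not part of the generated release notes: the two + # gather-logs flags described above can be combined in a single invocation, + # e.g. telepresence gather-logs --get-pod-yaml --anonymize, which exports a + # zip file whose logs and pod manifests are anonymized.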
+ + - type: bugfix + title: Added checks for Telepresence kubernetes compatibility + body: >- + Telepresence currently works with Kubernetes server versions 1.17.0 + and higher. We have added logs in the connector and traffic-manager + to let users know when they are using Telepresence with a cluster it doesn't support. + docs: reference/cluster-config + + - type: bugfix + title: Traffic Agent security context is now only added when necessary + body: >- + When creating an intercept, Telepresence will now only set the traffic agent's GID + when strictly necessary (i.e. when using headless services or numeric ports). This mitigates + an issue on OpenShift clusters where the traffic agent could fail to be created due to + OpenShift's security policies banning arbitrary GIDs. + + - version: 2.4.4 + date: "2021-09-27" + notes: + - type: feature + title: Numeric ports in agent injector + body: >- + The agent injector now supports injecting Traffic Agents into pods that have unnamed ports. + docs: reference/cluster-config/#note-on-numeric-ports + + - type: feature + title: New subcommand to gather logs and export into zip file + body: >- + Telepresence has logs for various components (the + traffic-manager, traffic-agents, the root and + user daemons), which are integral for understanding and debugging + Telepresence behavior. We have added the telepresence + gather-logs command to make it simple to compile logs for + all Telepresence components and export them in a zip file that can + be shared with others and/or included in a GitHub issue. For more + information on usage, run telepresence gather-logs --help. + docs: reference/client + image: telepresence-2.4.4-gather-logs.png + + - type: feature + title: Pod CIDR strategy is configurable in Helm chart + body: >- + Telepresence now enables you to directly configure how to get + pod CIDRs when deploying Telepresence with the Helm chart. + The default behavior remains the same. We've also introduced + the ability to explicitly set what the pod CIDRs should be. + docs: install/helm + + - type: bugfix + title: Compute pod CIDRs more efficiently + body: >- + When computing subnets using the pod CIDRs, the traffic-manager + now uses fewer CPU cycles. + docs: reference/routing/#subnets + + - type: bugfix + title: Prevent busy loop in traffic-manager + body: >- + In some circumstances, the traffic-manager's CPU + would max out and get pinned at its limit. This required a + shutdown or pod restart to fix. We've added some fixes + to prevent the traffic-manager from getting into this state. + + - type: bugfix + title: Added a fixed buffer size to TUN-device + body: >- + The TUN-device now has a max buffer size of 64K. This prevents the + buffer from growing limitlessly until it receives a PSH, which could + be a blocking operation when receiving lots of TCP-packets. + docs: reference/tun-device + + - type: bugfix + title: Fix hanging user daemon + body: >- + When Telepresence encountered an issue connecting to the cluster or + the root daemon, it could hang indefinitely. It will now error correctly + when it encounters that situation. + + - type: bugfix + title: Improved proprietary agent connectivity + body: >- + To determine whether the cluster environment is air-gapped, the + proprietary agent attempts to connect to the cloud during startup. + To deal with a possible initial failure, the agent backs off + and retries the connection with an increasing backoff duration.
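+ + # A minimal Helm values sketch for the pod CIDR configuration described in + # 2.4.4 above. The key names (podCIDRStrategy, podCIDRs) are assumptions + # based on the linked Helm docs, not quoted from these notes: + # podCIDRStrategy: environment + # podCIDRs: + # - 10.42.0.0/16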
+ + - type: bugfix + title: Telepresence correctly reports intercept port conflict + body: >- + When creating a second intercept targeting the same local port, + Telepresence now gives the user an informative error message. Additionally, + it tells them which intercept is currently using that port, to make + it easier to remedy. + + - version: 2.4.3 + date: "2021-09-15" + notes: + - type: feature + title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment + body: >- + When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment + variable in the environment. + docs: reference/environment/#telepresence-environment-variables + + - type: bugfix + title: Improved daemon stability + body: >- + Fixed a timing bug that sometimes caused a "daemon did not start" failure. + + - type: bugfix + title: Complete logs for Windows + body: >- + Crash stack traces and other errors were incorrectly not written to log files. This has + been fixed, so logs for Windows should be at parity with the ones on macOS and Linux. + + - type: bugfix + title: Log rotation fix for Linux kernel 4.11+ + body: >- + On Linux kernel 4.11 and above, the log file rotation now properly reads the + birth-time of the log file. Older kernels continue to use the old behavior + of using the change-time in place of the birth-time. + + - type: bugfix + title: Improved error messaging + body: >- + When Telepresence encounters an error, it tells the user where they should look for + logs related to the error. We have refined this so that it only tells users to look + for errors in the daemon logs for issues that are logged there. + + - type: bugfix + title: Stop resolving localhost + body: >- + When using the overriding DNS resolver, it will no longer apply search paths when + resolving localhost, since that should be resolved on the user's machine + instead of the cluster. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Variable cluster domain + body: >- + Previously, the cluster domain was hardcoded to cluster.local. While this + is true for many Kubernetes clusters, it is not for all of them. Now this value is + retrieved from the traffic-manager. + + - type: bugfix + title: Improved cleanup of traffic-agents + body: >- + Telepresence now uninstalls traffic-agents installed via the mutating webhook + when using telepresence uninstall --everything. + + - type: bugfix + title: More large file transfer fixes + body: >- + Downloading large files during an intercept will no longer cause timeouts and hanging + traffic-agents. + + - type: bugfix + title: Setting --mount to false when intercepting works as expected + body: >- + When using --mount=false while performing an intercept, the file system + was still mounted. This has been remedied so the intercept behavior respects the + flag. + docs: reference/volume + + - type: bugfix + title: Traffic-manager establishes outbound connections in parallel + body: >- + Previously, the traffic-manager established outbound connections + sequentially. This meant that slow (and failing) Dial calls would + block all outbound traffic from the workstation (for up to 30 seconds). We now + establish these connections in parallel so that won't occur. + docs: reference/routing/#outbound + + - type: bugfix + title: Status command reports correct DNS settings + body: >- + Telepresence status now correctly reports DNS settings for all operating + systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
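+ + # Illustrative only: one way to inspect the TELEPRESENCE_INTERCEPT_ID + # variable from 2.4.3 above is to run a command with the intercept's + # environment (the variable name is quoted from the notes; the workload + # name is made up): + # telepresence intercept example-svc --port 8080 -- env | grep TELEPRESENCE_INTERCEPT_ID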
+ + - version: 2.4.2 + date: "2021-09-01" + notes: + - type: feature + title: New subcommand to temporarily change log-level + body: >- + We have added a new telepresence loglevel subcommand that enables users + to temporarily change the log-level for the local daemons, the traffic-manager and + the traffic-agents. While the logLevels settings from the config will + still be used by default, this can be helpful if you are currently experiencing an issue and + want to have higher-fidelity logs, without doing a telepresence quit and + telepresence connect. You can use telepresence loglevel --help to get + more information on options for the command. + docs: reference/config + + - type: change + title: All components have info as the default log-level + body: >- + All components of Telepresence (traffic-agent, + traffic-manager, local daemons) now use info as the default log-level. + + - type: bugfix + title: Updating RBAC in helm chart to fix cluster-id regression + body: >- + In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID + of the default namespace. The helm chart was not updated to give the traffic-manager + those permissions, which has since been fixed. This impacted users who use licensed features of + the Telepresence extension in an air-gapped environment. + docs: reference/cluster-config/#air-gapped-cluster + + - type: bugfix + title: Timeouts for Helm actions are now respected + body: >- + The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang + indefinitely when failing to install the traffic-manager. + docs: reference/config#timeouts + + - version: 2.4.1 + date: "2021-08-30" + notes: + - type: feature + title: External cloud variables are now configurable + body: >- + We now support configuring the host and port for the cloud in your config.yml. These + are used when logging in to utilize features provided by an extension, and are also passed + along as environment variables when installing the `traffic-manager`. Additionally, we + now run our testsuite with these variables set to localhost to continue to ensure Telepresence + is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT + environment variables are no longer used. + image: telepresence-2.4.1-systema-vars.png + docs: reference/config/#cloud + + - type: feature + title: Helm chart can now regenerate certificate used for mutating webhook on-demand + body: >- + You can now set agentInjector.certificate.regenerate when deploying Telepresence + with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. + docs: install/helm + + - type: change + title: Traffic Manager installed via helm + body: >- + The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. + This change is transparent to the user. + A new configuration flag, timeouts.helm, sets the timeouts for all Helm operations performed by the Telepresence binary. + docs: reference/config#timeouts + + - type: change + title: traffic-manager gets cluster ID itself instead of via environment variable + body: >- + The traffic-manager used to get the cluster ID as an environment variable when running + telepresence connect, or via adding the value in the Helm chart.
This was + clunky, so now the traffic-manager gets the value itself, as long as it has permissions + to "get" and "list" namespaces (this has been updated in the helm chart). + docs: install/helm + + - type: bugfix + title: Telepresence now mounts all directories from /var/run/secrets + body: >- + In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. + We now mount *all* directories in /var/run/secrets, which, for example, includes + directories like eks.amazonaws.com used for IRSA tokens. + docs: reference/volume + + - type: bugfix + title: Max gRPC receive size correctly propagates to all gRPC servers + body: >- + This fixes a bug where the max gRPC receive size was only propagated to some of the + gRPC servers, causing failures when the message size was over the default. + docs: reference/config/#grpc + + - type: bugfix + title: Updated our Homebrew packaging to run manually + body: >- + We made some updates to our script that packages Telepresence for Homebrew so that it + can be run manually. This will enable maintainers of Telepresence to run the script manually + should we ever need to roll back a release and have latest point to an older version. + docs: install/ + + - type: bugfix + title: Telepresence uses namespace from kubeconfig context on each call + body: >- + In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context + for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior + when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change. + Telepresence will now do that and use the namespace designated by the context on each call. + + - type: bugfix + title: Idle outbound TCP connections timeout increased to 7200 seconds + body: >- + Some users were noticing that their intercepts would start failing after 60 seconds. + This was because the timeout for idle outbound TCP connections was set to 60 seconds; we have + now bumped it to 7200 seconds to match Linux's tcp_keepalive_time default. + + - type: bugfix + title: Telepresence will automatically remove a socket upon ungraceful termination + body: >- + Previously, when a Telepresence process terminated ungracefully, it would inform users that "this usually means + that the process has terminated ungracefully" and imply that they should remove the socket. Telepresence + will now automatically attempt to remove the socket upon ungraceful termination. + + - type: bugfix + title: Fixed user daemon deadlock + body: >- + Remedied a situation where the user daemon could hang when a user was logged in. + + - type: bugfix + title: Fixed agentImage config setting + body: >- + The config setting images.agentImages is no longer required to contain the repository, and it + will use the value at images.repository. + docs: reference/config/#images + + - version: 2.4.0 + date: "2021-08-04" + notes: + - type: feature + title: Windows Client Developer Preview + body: >- + There is now a native Windows client for Telepresence that is being released as a Developer Preview. + All the same features supported by the macOS and Linux clients are available on Windows. + image: telepresence-2.4.0-windows.png + docs: install + + - type: feature + title: CLI raises helpful messages from Ambassador Cloud + body: >- + Telepresence can now receive messages from Ambassador Cloud and raise + them to the user when they perform certain commands.
This enables us + to send you messages that may enhance your Telepresence experience when + using certain commands. The frequency of messages can be configured in your + config.yml. + image: telepresence-2.4.0-cloud-messages.png + docs: reference/config#cloud + + - type: bugfix + title: Improved stability of systemd-resolved-based DNS + body: >- + When initializing the systemd-resolved-based DNS, the routing domain + is set to improve stability in non-standard configurations. This also enables the + overriding resolver to do a proper takeover once the DNS service ends. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Fixed an edge case when intercepting a container with multiple ports + body: >- + When specifying a port of a container to intercept, if there was a container in the + pod without ports, it was automatically selected. This has been fixed so we'll only + choose the container with "no ports" if there's no container that explicitly matches + the port used in your intercept. + docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports + + - type: bugfix + title: $(NAME) references in the agent's environment are now interpolated correctly + body: >- + If you had an environment variable $(NAME) in your workload that referenced another, intercepts + would not correctly interpolate $(NAME). This has been fixed and works automatically. + + - type: bugfix + title: Telepresence no longer prints INFO message when there is no config.yml + body: >- + Fixed a regression that printed an INFO message to the terminal when there wasn't a + config.yml present. The config is optional, so this message has been + removed. + docs: reference/config + + - type: bugfix + title: Telepresence no longer panics when using --http-match + body: >- + Telepresence would panic if the value passed to --http-match + didn't contain an equal sign. This has been fixed. The correct syntax is shown in the --help + string and looks like --http-match=HTTP2_HEADER=REGEX. + docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud + + - type: bugfix + title: Improved subnet updates + body: >- + The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if + the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the + client and the `traffic-manager`. This has been fixed so we only send updates when the subnets + themselves actually change. + docs: reference/routing/#subnets + + - version: 2.3.7 + date: "2021-07-23" + notes: + - type: feature + title: Also-proxy in telepresence status + body: >- + An also-proxy entry in the Kubernetes cluster config will + show up in the output of the telepresence status command. + docs: reference/config + + - type: feature + title: Non-interactive telepresence login + body: >- + telepresence login now has an + --apikey=KEY flag that allows for + non-interactive logins. This is useful for headless + environments where launching a web-browser is impossible, + such as cloud shells, Docker containers, or CI. + image: telepresence-2.3.7-newkey.png + docs: reference/client/login/ + + - type: bugfix + title: Mutating webhook injector correctly hides named ports for probes
+ body: >- + The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes. + docs: reference/cluster-config + + - type: bugfix + title: telepresence current-cluster-id crash fixed + body: >- + Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id` + to crash. + docs: reference/cluster-config + + - type: bugfix + title: Better UX around intercepts with no local process running + body: >- + Requests would hang indefinitely when initiating an intercept before you + had a local process running. This has been fixed and will result in an + Empty reply from server until you start a local process. + docs: reference/intercepts + + - type: bugfix + title: API keys no longer show as "no description" + body: >- + New API keys generated internally for communication with + Ambassador Cloud no longer show up as "no description" in + the Ambassador Cloud web UI. Existing API keys generated by + older versions of Telepresence will still show up this way. + image: telepresence-2.3.7-keydesc.png + + - type: bugfix + title: Fix corruption of user-info.json + body: >- + Fixed a race condition in which logging in and logging out + rapidly could cause memory corruption or corruption of the + user-info.json cache file used when + authenticating with Ambassador Cloud. + + - type: bugfix + title: Improved DNS resolver for systemd-resolved + body: + Telepresence's systemd-resolved-based DNS resolver is now more + stable, and in case it fails to initialize, the overriding resolver + will no longer cause general DNS lookup failures when Telepresence defaults to + using it. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Faster telepresence list command + body: + The performance of telepresence list has been increased + significantly by reducing the number of calls the command makes to the cluster. + docs: reference/client + + - version: 2.3.6 + date: "2021-07-20" + notes: + - type: bugfix + title: Fix preview URLs + body: >- + Fixed a regression introduced in 2.3.5 that caused preview + URLs to not work. + + - type: bugfix + title: Fix subnet discovery + body: >- + Fixed a regression introduced in 2.3.5 where the Traffic + Manager's RoleBinding did not correctly bind to + the traffic-manager Role, preventing + subnet discovery from working correctly. + docs: reference/rbac/ + + - type: bugfix + title: Fix root-user configuration loading + body: >- + Fixed a regression introduced in 2.3.5 where the root daemon + did not correctly read the configuration file, ignoring the + user's configured log levels and timeouts. + docs: reference/config/ + + - type: bugfix + title: Fix a user daemon crash + body: >- + Fixed an issue that could cause the user daemon to crash + during shutdown, because during shutdown it unconditionally + attempted to close a channel even though the channel might + already be closed. + + - version: 2.3.5 + date: "2021-07-15" + notes: + - type: feature + title: traffic-manager in multiple namespaces + body: >- + We now support installing multiple traffic managers in the same cluster. + This allows operators to install deployments of Telepresence that are + limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png + docs: install/helm + - type: feature + title: No more dependence on kubectl + body: >- + Telepresence no longer depends on having an external + kubectl binary, which might not be present for + OpenShift users (who have oc instead of + kubectl). + - type: feature + title: Agent image now configurable + body: >- + We now support configuring which agent image + registry to use in the + config. This enables users whose laptop is in an air-gapped environment to + create personal intercepts without requiring a login. It also makes it easier + for those who are developing on Telepresence to specify which agent image should + be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer + used. + image: ./telepresence-2.3.5-agent-config.png + docs: reference/config/#images + - type: feature + title: Max gRPC receive size now configurable + body: >- + The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured. + image: ./telepresence-2.3.5-grpc-max-receive-size.png + docs: reference/config/#grpc + - type: feature + title: CLI can be used in air-gapped environments + body: >- + While Telepresence will auto-detect if your cluster is in an air-gapped environment, + we've added an option users can add to their config.yml to ensure the CLI acts as if it + is in an air-gapped environment. Air-gapped environments require a manually installed + license. + docs: reference/cluster-config/#air-gapped-cluster + image: ./telepresence-2.3.5-skipLogin.png + - version: 2.3.4 + date: "2021-07-09" + notes: + - type: bugfix + title: Improved IP log statements + body: >- + Some log statements were printing incorrect characters where they should have printed IP addresses. + This has been resolved so the logs are more accurate and useful. + docs: reference/config/#log-levels + image: ./telepresence-2.3.4-ip-error.png + - type: bugfix + title: Improved messaging when multiple services match a workload + body: >- + If multiple services matched a workload when performing an intercept, Telepresence would crash. + It now gives the correct error message, instructing the user on how to specify which + service the intercept should use. + image: ./telepresence-2.3.4-improved-error.png + docs: reference/intercepts + - type: bugfix + title: Traffic-manager creates services in its own namespace to determine subnet + body: >- + Telepresence will now determine the service subnet by creating a dummy-service in its own + namespace, instead of the default namespace, which was causing RBAC permissions issues in + some clusters. + docs: reference/routing/#subnets + - type: bugfix + title: Telepresence connect respects pre-existing clusterrole + body: >- + When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the + cluster, Telepresence will no longer try to update the clusterrole. + docs: reference/rbac + - type: bugfix + title: Helm Chart fixed for clientRbac.namespaced + body: >- + The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true. + docs: install/helm + - version: 2.3.3 + date: "2021-07-07" + notes: + - type: feature + title: Traffic Manager Helm Chart + body: >- + Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the + server-side components of Telepresence separately from the CLI (which + in turn allows for better separation of permissions). + image: ./telepresence-2.3.3-helm.png + docs: install/helm/ + - type: feature + title: Traffic-manager in custom namespace + body: >- + As the traffic-manager can now be installed in any + namespace via Helm, Telepresence can now be configured to look for the + Traffic Manager in a namespace other than ambassador. + This can be configured on a per-cluster basis. + image: ./telepresence-2.3.3-namespace-config.png + docs: reference/config + - type: feature + title: Intercept --to-pod + body: >- + telepresence intercept now supports a + --to-pod flag that can be used to port-forward sidecars' + ports from an intercepted pod. + image: ./telepresence-2.3.3-to-pod.png + docs: reference/intercepts + - type: change + title: Change in migration from edgectl + body: >- + Telepresence no longer automatically shuts down the old + api_version=1 edgectl daemon. If migrating + from such an old version of edgectl, you must now manually + shut down the edgectl daemon before running Telepresence. + This was already the case when migrating from the newer + api_version=2 edgectl. + - type: bugfix + title: Fixed error during shutdown + body: >- + The root daemon no longer terminates when the user daemon disconnects + from its gRPC streams, and instead waits to be terminated by the CLI. + Previously, this could cause problems with things not being cleaned up correctly. + - type: bugfix + title: Intercepts will survive deletion of intercepted pod + body: >- + An intercept will survive deletion of the intercepted pod provided + that another pod is created (or already exists) that can take over. + - version: 2.3.2 + date: "2021-06-18" + notes: + # Headliners + - type: feature + title: Service Port Annotation + body: >- + The mutator webhook for injecting traffic-agents now + recognizes a + telepresence.getambassador.io/inject-service-port + annotation to specify which port to intercept, bringing the + functionality of the --port flag to users who + use the mutator webhook in order to control Telepresence via + GitOps. + image: ./telepresence-2.3.2-svcport-annotation.png + docs: reference/cluster-config#service-port-annotation + - type: feature + title: Outbound Connections + body: >- + Outbound connections are now routed through the intercepted + Pods, which means that the connections originate from that + Pod from the cluster's perspective. This allows service + meshes to correctly identify the traffic. + docs: reference/routing/#outbound + - type: change + title: Inbound Connections + body: >- + Inbound connections from an intercepted agent are now + tunneled to the manager over the existing gRPC connection, + instead of establishing a new connection to the manager for + each inbound connection. This avoids interference from + certain service mesh configurations. + docs: reference/routing/#inbound + + # RBAC changes + - type: change + title: Traffic Manager needs new RBAC permissions + body: >- + The Traffic Manager requires RBAC + permissions to list Nodes and Pods, and to create a dummy + Service in the manager's namespace. + docs: reference/routing/#subnets + - type: change + title: Reduced developer RBAC requirements + body: >- + The on-laptop client no longer requires RBAC permissions to list the Nodes + in the cluster or to create Services, as that functionality + has been moved to the Traffic Manager.
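+ + # Sketch: the 2.3.2 service-port annotation above, applied to a workload's + # pod template (the annotation key is quoted from the note; the port value + # and surrounding fields are only an illustration): + # spec: + # template: + # metadata: + # annotations: + # telepresence.getambassador.io/inject-service-port: "8080"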
+ + # Bugfixes + - type: bugfix + title: Able to detect subnets + body: >- + Telepresence will now detect the Pod CIDR ranges even if + they are not listed in the Nodes. + image: ./telepresence-2.3.2-subnets.png + docs: reference/routing/#subnets + - type: bugfix + title: Dynamic IP ranges + body: >- + The list of cluster subnets that the virtual network + interface will route is now configured dynamically and will + follow changes in the cluster. + - type: bugfix + title: No duplicate subnets + body: >- + Subnets fully covered by other subnets are now pruned + internally and thus never superfluously added to the + laptop's routing table. + docs: reference/routing/#subnets + - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes + title: Change in default timeout + body: >- + The trafficManagerAPI timeout default has + changed from 5 seconds to 15 seconds, in order to facilitate + the extended time it takes for the traffic-manager to do its + initial discovery of cluster info as a result of the above + bugfixes. + - type: bugfix + title: Removal of DNS config files on macOS + body: >- + On macOS, files generated under + /etc/resolver/ as the result of using + include-suffixes in the cluster config are now + properly removed on quit. + docs: reference/routing/#macos-resolver + + - type: bugfix + title: Large file transfers + body: >- + Telepresence no longer erroneously terminates connections + early when sending a large HTTP response from an intercepted + service. + - type: bugfix + title: Race condition in shutdown + body: >- + When shutting down the user-daemon or root-daemon on the + laptop, telepresence quit and related commands + no longer return early before everything is fully shut down. + Now it can be counted on that, by the time the command has + returned, all of the side-effects on the laptop have + been cleaned up. + - version: 2.3.1 + date: "2021-06-14" + notes: + - title: DNS Resolver Configuration + body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included." + image: ./telepresence-2.3.1-dns.png + docs: reference/config + type: feature + - title: AlsoProxy Configuration + body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet." + image: ./telepresence-2.3.1-alsoProxy.png + docs: reference/config + type: feature + - title: Mutating Webhook for Injecting Traffic Agents + body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git.
For workloads without the annotation, Telepresence will add the agent the way it has in the past." + image: ./telepresence-2.3.1-inject.png + docs: reference/rbac + type: feature + - title: Traffic Manager Connect Timeout + body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook." + image: ./telepresence-2.3.1-trafficmanagerconnect.png + docs: reference/config + type: change + - title: Fix for large file transfers + body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely." + image: ./telepresence-2.3.1-large-file-transfer.png + docs: reference/tun-device + type: bugfix + - title: Brew Formula Changed + body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence." + image: ./telepresence-2.3.1-brew.png + docs: install/ + type: change + - version: 2.3.0 + date: "2021-06-01" + notes: + - title: Brew install Telepresence + body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2." + image: ./telepresence-2.3.0-homebrew.png + docs: install/ + type: feature + - title: TCP and UDP routing via Virtual Network Interface + body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP." + image: ./tunnel.jpg + docs: reference/tun-device + type: feature + - title: SSH is no longer used + body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration." + image: ./no-ssh.png + docs: reference/tun-device/#no-ssh-required + type: change + - title: Running in a Docker container + body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously." + image: ./run-tp-in-docker.png + docs: reference/inside-container + type: feature + - title: Configurable Log Levels + body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png + docs: reference/config/#log-levels + type: feature + - version: 2.2.2 + date: "2021-05-17" + notes: + - title: Legacy Telepresence subcommands + body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary. + image: ./telepresence-2.2.png + docs: install/migrate-from-legacy/ + type: feature diff --git a/docs/telepresence/2.4/troubleshooting/index.md b/docs/telepresence/2.4/troubleshooting/index.md new file mode 100644 index 000000000..d3a72dc62 --- /dev/null +++ b/docs/telepresence/2.4/troubleshooting/index.md @@ -0,0 +1,118 @@ +--- +description: "Troubleshooting issues related to Telepresence." +--- +# Troubleshooting + +## Creating an intercept did not generate a preview URL + +Preview URLs can only be created if Telepresence is [logged in to +Ambassador Cloud](../reference/client/login/). When not logged in, it +will not even try to create a preview URL (additionally, by default it +will intercept all traffic rather than just a subset of the traffic). +Remove the intercept with `telepresence leave [deployment name]`, run +`telepresence login` to log in to Ambassador Cloud, then recreate the +intercept. See the [intercepts how-to doc](../howtos/intercepts) for +more details. + +## Error on accessing preview URL: `First record does not look like a TLS handshake` + +The service you are intercepting is likely not using TLS; however, when configuring the intercept, you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings. + +## Error on accessing preview URL: Detected a 301 Redirect Loop + +If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted. + +## Connecting to a cluster via VPN doesn't work + +There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn how to fix these. + +## Your GitHub organization isn't listed + +Ambassador Cloud needs access granted to your GitHub organization as a +third-party OAuth app. If an organization isn't listed during login +then the correct access has not been granted. + +The quickest way to resolve this is to go to the **GitHub menu** → +**Settings** → **Applications** → **Authorized OAuth Apps** → +**Ambassador Labs**. An organization owner will have a **Grant** +button; anyone who is not an owner will have a **Request** button, which sends an email +to the owner. If an access request has been denied in the past, the +user will not see the **Request** button; they will have to reach out +to the owner. + +Once access is granted, log out of Ambassador Cloud and log back in; +you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button and will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+### Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+However, one more thing must be done before it works correctly:
+5. Try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer.
+
+### Daemon service did not start
+
+An attempt to do `telepresence connect` results in the error message `daemon service did not start: timeout while waiting for daemon to start` and
+the logs show no helpful error.
+
+The likely cause of this is that the user lacks permission to run `sudo --preserve-env`. Here is a workaround for this problem. Edit the
+sudoers file with:
+
+```command
+$ sudo visudo
+```
+
+and add the following line:
+
+```
+<username> ALL=(ALL) NOPASSWD: SETENV: /usr/local/bin/telepresence
+```
+
+DO NOT fix this by making the telepresence binary SUID root. It must only run as root when invoked with `--daemon-foreground`.
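+
+To verify that the new rule has taken effect, you can ask `sudo` to run the binary non-interactively. This is a minimal sketch; it assumes the binary lives at `/usr/local/bin/telepresence`, as in the sudoers line above:
+
+```
+$ # -n makes sudo fail instead of prompting, so success proves the NOPASSWD rule matches
+$ sudo -n --preserve-env /usr/local/bin/telepresence version
+```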
\ No newline at end of file
diff --git a/docs/telepresence/2.4/versions.yml b/docs/telepresence/2.4/versions.yml
new file mode 100644
index 000000000..6f16c9bdc
--- /dev/null
+++ b/docs/telepresence/2.4/versions.yml
@@ -0,0 +1,5 @@
+version: "2.4.11"
+dlVersion: "2.4.11"
+docsVersion: "2.4"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.5 b/docs/telepresence/2.5
deleted file mode 120000
index 86c44f853..000000000
--- a/docs/telepresence/2.5
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.5
\ No newline at end of file
diff --git a/docs/telepresence/2.5/community.md b/docs/telepresence/2.5/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.5/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.5/concepts/context-prop.md b/docs/telepresence/2.5/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.5/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Propagation means that services and other middleware pass the injected headers along rather than stripping them away, allowing the context to move between downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component of debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and on performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so applying them here is a
+little unorthodox, but the mechanisms are already widely deployed
+because of the prevalence of tracing. The headers facilitate the smart
+routing of requests either to live services in the cluster or to
+services running locally on a developer’s machine. The intercepted
+traffic can be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.5/concepts/devloop.md b/docs/telepresence/2.5/concepts/devloop.md
new file mode 100644
index 000000000..bcb924c91
--- /dev/null
+++ b/docs/telepresence/2.5/concepts/devloop.md
@@ -0,0 +1,50 @@
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle.
The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 minutes coding, 1 minute building, i.e. compiling/deploying/reloading, 1 minute testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day (360 minutes ÷ ~5 minutes per iteration). Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build step grows to 5 minutes — not atypical with a standard container build, registry upload, and deploy — the full loop approaches 9 minutes and the number of possible development iterations per day drops to ~40 (360 ÷ 9). At the extreme, that’s a ~40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.5/concepts/devworkflow.md b/docs/telepresence/2.5/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.5/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.5/concepts/faster.md b/docs/telepresence/2.5/concepts/faster.md
new file mode 100644
index 000000000..b649e4153
--- /dev/null
+++ b/docs/telepresence/2.5/concepts/faster.md
@@ -0,0 +1,25 @@
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
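+
+As a minimal sketch of that workflow (the service name and port here are hypothetical; see the quick start for a full walkthrough):
+
+```
+$ # Connect the laptop to the cluster, then swap one service for a local copy
+$ telepresence connect
+$ telepresence intercept example-service --port 8080
+$ # Requests to example-service in the cluster are now served by localhost:8080
+```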
diff --git a/docs/telepresence/2.5/concepts/intercepts.md b/docs/telepresence/2.5/concepts/intercepts.md new file mode 100644 index 000000000..dea68338d --- /dev/null +++ b/docs/telepresence/2.5/concepts/intercepts.md @@ -0,0 +1,205 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+        <div class="TabGroup">
+            <TabContext value={state.curTab}>
+                <AppBar class="TabBar" elevation={0} position="static">
+                    <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+                        <Tab class="TabHead" value="regular" label="No intercept"/>
+                        <Tab class="TabHead" value="global" label="Global intercept"/>
+                        <Tab class="TabHead" value="personal" label="Personal intercept"/>
+                    </TabList>
+                </AppBar>
+                {children}
+            </TabContext>
+        </div>
+    );
+};
+
+# Types of intercepts
+
+
+
+# No intercept
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+
+
+# Global intercept
+
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+
+
+
+In the illustration above, **Orange**
+requests are being made by Developer 2 on their laptop and the
+**green** are made by a teammate,
+Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-header=auto` to have it
+    choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
+intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
+flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept,
+and unlike the `--http-header` flag, it cannot be repeated.
+
+The following flags are available:
+
+| Flag                          | Meaning                                                          |
+|-------------------------------|------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                  |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix             |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implied when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
+
+
diff --git a/docs/telepresence/2.5/doc-links.yml b/docs/telepresence/2.5/doc-links.yml
new file mode 100644
index 000000000..d63d89302
--- /dev/null
+++ b/docs/telepresence/2.5/doc-links.yml
@@ -0,0 +1,92 @@
+  - title: Quick start
+    link: quick-start
+  - title: Install Telepresence
+    items:
+      - title: Install
+        link: install/
+      - title: Upgrade
+        link: install/upgrade/
+      - title: Install Traffic Manager with Helm
+        link: install/helm/
+      - title: Migrate from legacy Telepresence
+        link: install/migrate-from-legacy/
+  - title: Core concepts
+    items:
+      - title: The changing development workflow
+        link: concepts/devworkflow
+      - title: The developer experience and the inner dev loop
+        link: concepts/devloop
+      - title: 'Making the remote local: Faster feedback, collaboration and debugging'
+        link: concepts/faster
+      - title: Context propagation
+        link: concepts/context-prop
+      - title: Types of intercepts
+        link: concepts/intercepts
+  - title: How do I...
+    items:
+      - title: Intercept a service in your own environment
+        link: howtos/intercepts
+      - title: Share dev environments with preview URLs
+        link: howtos/preview-urls
+      - title: Proxy outbound traffic to my cluster
+        link: howtos/outbound
+      - title: Send requests to an intercepted service
+        link: howtos/request
+  - title: Telepresence for Docker
+    items:
+      - title: What is Telepresence for Docker
+        link: extension/intro
+      - title: Install into Docker-Desktop
+        link: extension/install
+      - title: Intercept into a Docker Container
+        link: extension/intercept
+  - title: Technical reference
+    items:
+      - title: Architecture
+        link: reference/architecture
+      - title: Client reference
+        link: reference/client
+        items:
+          - title: login
+            link: reference/client/login
+      - title: Laptop-side configuration
+        link: reference/config
+      - title: Cluster-side configuration
+        link: reference/cluster-config
+      - title: Using Docker for intercepts
+        link: reference/docker-run
+      - title: Running Telepresence in a Docker container
+        link: reference/inside-container
+      - title: Environment variables
+        link: reference/environment
+      - title: Intercepts
+        link: reference/intercepts/
+        items:
+          - title: Manually injecting the Traffic Agent
+            link: reference/intercepts/manual-agent
+      - title: Volume mounts
+        link: reference/volume
+      - title: RESTful API service
+        link: reference/restapi
+      - title: DNS resolution
+        link: reference/dns
+      - title: RBAC
+        link: reference/rbac
+      - title: Telepresence and VPNs
+        link: reference/vpn
+      - title: Networking through Virtual Network Interface
+        link: reference/tun-device
+      - title: Connection Routing
+        link: reference/routing
+      - title: Using Telepresence with Linkerd
+        link: reference/linkerd
+  - title: FAQs
+    link: faqs
+  - title: Troubleshooting
+    link: troubleshooting
+  - title: Community
+    link: community
+  - title: Release Notes
+    link: release-notes
+  - title: Licenses
+    link: licenses
diff --git a/docs/telepresence/2.5/extension/install.md b/docs/telepresence/2.5/extension/install.md
new file mode 100644
index 000000000..471752775
--- /dev/null
+++ b/docs/telepresence/2.5/extension/install.md
@@ -0,0 +1,39 @@
+---
+title: "Telepresence for Docker installation and connection guide"
+description: "Learn how to install and update Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Install and connect the Telepresence Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install Telepresence for Docker
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+3. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
+4. Click **Install**.
+
+## Connect to Ambassador Cloud through the Telepresence extension
+
+ After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window.
+
+ 3.
Sign in with your Google, GitHub, or GitLab account.
+    Ambassador Cloud opens to your profile and displays the API key.
+
+ 4. Copy the API key and paste it into the API key field in the Docker Dashboard.
+
+## Connect to your cluster in Docker Desktop
+
+ 1. Select the desired cluster from the dropdown menu and click **Next**.
+    This cluster is now set as kubectl's current context.
+
+ 2. Click **Connect to [your cluster]**.
+    Your cluster is connected and you can now create [intercepts](../intercept/).
\ No newline at end of file
diff --git a/docs/telepresence/2.5/extension/intercept.md b/docs/telepresence/2.5/extension/intercept.md
new file mode 100644
index 000000000..3868407a8
--- /dev/null
+++ b/docs/telepresence/2.5/extension/intercept.md
@@ -0,0 +1,48 @@
+---
+title: "Create an intercept with Telepresence for Docker"
+description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug, "
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
+
+This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
+
+## Copy the service you want to intercept
+
+Once you have [installed and connected](../install/) the Telepresence extension, you need to copy the service. To do this, use the `docker run` command with the following flags:
+
+   ```console
+   $ docker run --rm -it --network host <your-image>
+   ```
+
+The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster.
+
+## Intercept a service
+
+In Docker Desktop, the Telepresence extension shows all the services in the namespace.
+
+ 1. Choose a service to intercept and click the **Intercept** button.
+
+ 2. Select the service port for the intercept from the dropdown.
+
+ 3. Enter the target port of the service you previously copied in the Docker container.
+
+ 4. Click **Submit** to create the intercept.
+
+The intercept now shows up in the Docker Telepresence extension.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container.
+
+Click the globe icon next to your intercept to get the preview URL. From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.5/extension/intro.md b/docs/telepresence/2.5/extension/intro.md
new file mode 100644
index 000000000..6a653ae06
--- /dev/null
+++ b/docs/telepresence/2.5/extension/intro.md
@@ -0,0 +1,29 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and let you test your code faster. Traditionally, you need to build a container within Docker with your code changes, push it to a registry, wait for it to upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This makes for a slow and cumbersome process when you need to test changes continually.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate it entirely within the Docker runtime, you can make changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux).
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.5/faqs.md b/docs/telepresence/2.5/faqs.md
new file mode 100644
index 000000000..3c37f1cc5
--- /dev/null
+++ b/docs/telepresence/2.5/faqs.md
@@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+** Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism.
This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access with additional developers or stakeholders to the application via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+** What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we consider a Developer Preview.
+
+** What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+** When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+** When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+** When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+** When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database.
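+
+As a hedged illustration of the ExternalName case (the service name and database hostname below are made up), such a data store can be given an in-cluster DNS name that Telepresence then resolves like any other cluster service:
+
+```
+$ # Hypothetical: make an external database addressable as "orders-db" in the cluster
+$ kubectl create service externalname orders-db \
+    --external-name=orders.abc123.us-east-1.rds.amazonaws.com
+$ # With telepresence connected, the laptop resolves it too
+$ nslookup orders-db.default
+```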
+
+** What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+Telepresence will prompt for this information during first use, make its best guess at the correct values, and ask you to confirm or update them.
+
+** Why are my intercepts still reporting as active when they've been disconnected?**
+
+ In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not; they will be garbage collected after a period of time.
+
+** Why is my intercept associated with an "Unreported" cluster?**
+
+ Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+** Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+** Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN network device) for DNS routing. That also requires root access.
+
+** What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+** How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+** What language is Telepresence written in?**
+
+All of the Telepresence application and cluster components are written in Go.
+
+** How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+
+
+** What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon.
Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+** Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+** How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.5/howtos/intercepts.md b/docs/telepresence/2.5/howtos/intercepts.md
new file mode 100644
index 000000000..87bd9f92b
--- /dev/null
+++ b/docs/telepresence/2.5/howtos/intercepts.md
@@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can [substitute oc commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+   The 401 response is expected when you first connect.
+
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3.
Get the name of the port you want to intercept on your service:
+   `kubectl get service <service name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that now reach the service on port `http` in the cluster get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+     Using Deployment example-service
+     intercepted
+         Intercept name: example-service
+         State         : ACTIVE
+         Workload kind : Deployment
+         Destination   : 127.0.0.1:8080
+         Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process (see the sketch after this list for a concrete Docker invocation):
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+
+
+   **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
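+
+As a concrete sketch of the Docker option from step 5 (the image name `example-service:dev` and the port mapping are hypothetical):
+
+```console
+$ # Run the local copy with the pod's environment and expose the intercepted port
+$ docker run --rm -it \
+    --env-file ~/example-service-intercept.env \
+    -p 8080:8080 \
+    example-service:dev
+```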
diff --git a/docs/telepresence/2.5/howtos/outbound.md b/docs/telepresence/2.5/howtos/outbound.md
new file mode 100644
index 000000000..e148023e0
--- /dev/null
+++ b/docs/telepresence/2.5/howtos/outbound.md
@@ -0,0 +1,89 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.3.7 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home//.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://)
+   ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.3.7 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.3.7 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop were actually running in the cluster.
+
+   ```
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+   <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop on isolated apps or in a virtualized container, you don't need an outbound connection. However, when developing services that aren't yet deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed. This is because such services may need to reach other services in that namespace that aren't exposed through ingress controllers. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace.
The reason for this is to establish correct origin; the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside of the cluster, see [AlsoProxy](../../reference/config/#alsoproxy) for more details.
\ No newline at end of file
diff --git a/docs/telepresence/2.5/howtos/preview-urls.md b/docs/telepresence/2.5/howtos/preview-urls.md
new file mode 100644
index 000000000..670f72dd3
--- /dev/null
+++ b/docs/telepresence/2.5/howtos/preview-urls.md
@@ -0,0 +1,126 @@
+---
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share development environments with preview URLs
+
+Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual.
+
+Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators.
+
+## Creating a preview URL
+
+1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed.
+Telepresence only supports Deployments, ReplicaSets, and StatefulSets with a label that matches a Service.
+
+2. Enter `telepresence login` to launch Ambassador Cloud in your browser.
+
+   If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).
+
+3. Start the intercept with `telepresence intercept <service-name> --port <port> --env-file <env-file>` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+4. Answer the question prompts.
+   * **What's your ingress' IP address?**: the IP address or DNS name of your ingress; this is usually a `service.namespace` DNS name.
+   * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports.
+   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
+   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+
+   The example below shows a preview URL for `example-service`, which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+$ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+        [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' TCP port number?
+
+        [default: -]: 80
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+        [default: n]: y
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+        [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp(":example-service")
+       Preview URL            : https://.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument (see the sketch after this list). For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.
+
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+
+7. Make a request on the URL you would usually query for that environment. This request is not routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
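+As an example of the Docker option in step 5, the environment file written by the intercept can be passed straight to `docker run` (a sketch: `example-service:latest` is a placeholder image name, and the published port should match the port you intercepted):
+
+```console
+$ docker run --rm --env-file ~/ex-svc.env -p 8080:8080 example-service:latest
+```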
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization, log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL, either from the dashboard or by running `telepresence preview remove `, also removes all access to the preview URL.
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Click on the service whose preview URL you want to remove and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove `
diff --git a/docs/telepresence/2.5/howtos/request.md b/docs/telepresence/2.5/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/2.5/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+ 2. Navigate to the desired service Intercepts page.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
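+For example, the **CURL** option produces a command you can paste into a terminal. A request to an intercepted service looks roughly like the following sketch, which assumes a personal intercept like the `example-service` one from the preview URL guide; the header value is a placeholder for the real intercept ID:
+
+```console
+$ curl -i -H 'x-telepresence-intercept-id: <intercept-id>:example-service' https://dev-environment.edgestack.me/
+```
+
+Requests without that header keep going to the version of the service running in the cluster.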
diff --git a/docs/telepresence/2.5/images/apple.png b/docs/telepresence/2.5/images/apple.png new file mode 100644 index 000000000..8b8277f16 Binary files /dev/null and b/docs/telepresence/2.5/images/apple.png differ diff --git a/docs/telepresence/2.5/images/container-inner-dev-loop.png b/docs/telepresence/2.5/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.5/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.5/images/docker-header-containers.png b/docs/telepresence/2.5/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.5/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.5/images/github-login.png b/docs/telepresence/2.5/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.5/images/github-login.png differ diff --git a/docs/telepresence/2.5/images/linux.png b/docs/telepresence/2.5/images/linux.png new file mode 100644 index 000000000..1832c5940 Binary files /dev/null and b/docs/telepresence/2.5/images/linux.png differ diff --git a/docs/telepresence/2.5/images/logo.png b/docs/telepresence/2.5/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.5/images/logo.png differ diff --git a/docs/telepresence/2.5/images/split-tunnel.png b/docs/telepresence/2.5/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.5/images/split-tunnel.png differ diff --git a/docs/telepresence/2.5/images/tp-tutorial-1.png b/docs/telepresence/2.5/images/tp-tutorial-1.png new file mode 100644 index 000000000..ee68dc7db Binary files /dev/null and b/docs/telepresence/2.5/images/tp-tutorial-1.png differ diff --git a/docs/telepresence/2.5/images/tp-tutorial-2.png b/docs/telepresence/2.5/images/tp-tutorial-2.png new file mode 100644 index 000000000..129dc6ee3 Binary files /dev/null and b/docs/telepresence/2.5/images/tp-tutorial-2.png differ diff --git a/docs/telepresence/2.5/images/tp-tutorial-3.png b/docs/telepresence/2.5/images/tp-tutorial-3.png new file mode 100644 index 000000000..946629fc3 Binary files /dev/null and b/docs/telepresence/2.5/images/tp-tutorial-3.png differ diff --git a/docs/telepresence/2.5/images/tp-tutorial-4.png b/docs/telepresence/2.5/images/tp-tutorial-4.png new file mode 100644 index 000000000..cb6e7a9d2 Binary files /dev/null and b/docs/telepresence/2.5/images/tp-tutorial-4.png differ diff --git a/docs/telepresence/2.5/images/trad-inner-dev-loop.png b/docs/telepresence/2.5/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.5/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.5/images/tunnelblick.png b/docs/telepresence/2.5/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.5/images/tunnelblick.png differ diff --git a/docs/telepresence/2.5/images/vpn-dns.png b/docs/telepresence/2.5/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.5/images/vpn-dns.png differ diff --git a/docs/telepresence/2.5/install/helm.md b/docs/telepresence/2.5/install/helm.md new file mode 100644 index 000000000..688d2f20a --- /dev/null +++ b/docs/telepresence/2.5/install/helm.md @@ -0,0 +1,181 @@ 
+# Install with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+**Note** that installing the Traffic Manager through Helm will prevent `telepresence connect` from ever upgrading it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the steps [below](#upgrading-the-traffic-manager).
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+## Before you begin
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+   ```shell
+   kubectl create namespace ambassador
+   ```
+
+2. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`.
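+Putting it together, a complete upgrade pinned to a specific chart version might look like the following (a sketch; adjust the namespace and version to match your installation):
+
+```shell
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador --version v2.4.1
+```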
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To resolve this error, remove the overlap by dropping `staging` from either the first install or the second.
+
+#### Namespace scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform Telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
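+To double-check that an existing namespace carries the expected label, you can ask `kubectl` to print it as a column (a quick sanity check, not required by the chart):
+
+```shell
+kubectl get namespace staging -L app.kubernetes.io/name
+```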
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
diff --git a/docs/telepresence/2.5/install/index.md b/docs/telepresence/2.5/install/index.md
new file mode 100644
index 000000000..624cb33d6
--- /dev/null
+++ b/docs/telepresence/2.5/install/index.md
@@ -0,0 +1,153 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
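+Once installed, a quick sanity check before connecting to a cluster is to run `telepresence status`; on a fresh install, with no daemons started yet, the output looks like this:
+
+```
+$ telepresence status
+
+Root Daemon: Not running
+User Daemon: Not running
+```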
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
+diff --git a/docs/telepresence/2.5/install/migrate-from-legacy.md b/docs/telepresence/2.5/install/migrate-from-legacy.md
new file mode 100644
index 000000000..b00f84294
--- /dev/null
+++ b/docs/telepresence/2.5/install/migrate-from-legacy.md
@@ -0,0 +1,109 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (the current major version, formerly referred to as Telepresence 2) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges: losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.
+
+Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With the new Telepresence, a sidecar proxy ("traffic agent") is injected into the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is used.
By using the proxy approach, we can also do personal intercepts: rather than re-routing all traffic to the laptop/workstation, only the traffic designated as belonging to a given user is re-routed, so that multiple developers can intercept the same service at the same time without disrupting normal operation or each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the legacy equivalent of an intercept) with a Python server; you could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name    : myserver
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:9090
+    Intercepting      : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it. So you can get started with Telepresence today, using the commands you are used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                      |
+|------------------------------------------------|-------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                       |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
+| --run-shell                                    | connect -- bash                           |
+| --run $cmd                                     | connect -- $cmd                           |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
+| --context,--namespace                          | --context, --namespace (haven't changed)  |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |
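+As a concrete example of the mapping, combining the `--swap-deployment`, `--expose`, and `--run-shell` rows gives the following pair of equivalent commands (a sketch reusing the `myserver` deployment from the example above):
+
+```
+# Legacy Telepresence
+$ telepresence --swap-deployment myserver --expose 9090 --run-shell
+
+# Equivalent Telepresence command
+$ telepresence intercept myserver --port 9090 -- bash
+```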
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`), and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent remains. There's no harm in leaving the agent running alongside your service, but when you
+want to remove the agents and the manager from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at
+our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
diff --git a/docs/telepresence/2.5/install/upgrade.md b/docs/telepresence/2.5/install/upgrade.md
new file mode 100644
index 000000000..c0678450d
--- /dev/null
+++ b/docs/telepresence/2.5/install/upgrade.md
@@ -0,0 +1,81 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+After upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit`, then upgrade the Traffic Manager by running `telepresence connect`.
+
+**Note** that if the Traffic Manager has been installed via Helm, `telepresence connect` will never upgrade it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see [the Helm documentation](../helm#upgrading-the-traffic-manager).
diff --git a/docs/telepresence/2.5/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.5/quick-start/TelepresenceQuickStartLanding.js
new file mode 100644
index 000000000..bd375dee0
--- /dev/null
+++ b/docs/telepresence/2.5/quick-start/TelepresenceQuickStartLanding.js
@@ -0,0 +1,118 @@
+import queryString from 'query-string';
+import React, { useEffect, useState } from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../src/components/Icon';
+import Link from '../../../../src/components/Link';
+
+import './telepresence-quickstart-landing.less';
+
+/** @type React.FC> */
+const RightArrow = (props) => (
+
+
+);
+
+const TelepresenceQuickStartLanding = () => {
+  const [getStartedUrl, setGetStartedUrl] = useState(
+    'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start',
+  );
+
+  const getUrlFromQueryParams = () => {
+    const { docs_source, docs_campaign } = queryString.parse(
+      window.location.search,
+    );
+
+    if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') {
+      setGetStartedUrl(
+        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops',
+      );
+    } else if (
+      docs_source === 'cloud-quickstart-ad' &&
+      docs_campaign === 'environments'
+    ) {
+      setGetStartedUrl(
+        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments',
+      );
+    }
+  };
+
+  useEffect(() => {
+    getUrlFromQueryParams();
+  }, []);
+
+  return (
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds. + Accelerate your inner development loop with hot reload using your + existing IDE, and workflow. +

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.5/quick-start/demo-node.md b/docs/telepresence/2.5/quick-start/demo-node.md new file mode 100644 index 000000000..bfa485d27 --- /dev/null +++ b/docs/telepresence/2.5/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+
+  Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+
+
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs make this kind of collaboration easy.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
+Now you're able to share the fix running in your local environment with your team!
+
+
+  To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
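+When you're finished testing, you can end the intercept from the same Docker container (`web` is the intercept created in section 4):
+
+```
+$ telepresence leave web
+```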
+
+## 6. How/Why does this all work?
+
+[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+## What's Next?
+
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
diff --git a/docs/telepresence/2.5/quick-start/demo-react.md b/docs/telepresence/2.5/quick-start/demo-react.md
new file mode 100644
index 000000000..28bf2a895
--- /dev/null
+++ b/docs/telepresence/2.5/quick-start/demo-react.md
@@ -0,0 +1,257 @@
+---
+description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import QSCards25 from './qs-cards';
+import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start - React
+
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+
+In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+
+  While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+
+
+## 1. Download the demo cluster archive
+
+1.
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+
+  This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
+
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME               STATUS   ROLES                  AGE     VERSION
+   tpdemo-prod-1234   Ready    control-plane,master   5d10h   v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
+`telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+
+  macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+
+
+
+  You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+
+
+## 2. Test Telepresence
+
+[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+`telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+  The 401 response is expected. What's important is that you were able to contact the API.
+
+
+
+  Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+
+1. Clone the emojivoto app:
+`git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+`kubectl apply -k emojivoto/kustomize/deployment`
+
+1. Change the kubectl namespace:
+`kubectl config set-context --current --namespace=emojivoto`
+
+1.
List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨  Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 generates the same error as the application deployed in the cluster.
+
+
+  Victory, your local React server is running a-ok!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code, then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you now see an error message instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+
+  This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.5/quick-start/go.md b/docs/telepresence/2.5/quick-start/go.md new file mode 100644 index 000000000..0a4d0d871 --- /dev/null +++ b/docs/telepresence/2.5/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container that runs this service and includes everything you need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+
+
+
+2. The application is failing due to a small bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.
+
+   ```go
+   3  import (
+   4   "context"
+   5   "fmt" // DELETE THIS LINE
+   6
+   7   pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   then replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac, or `Menu -> File -> Save`) and verify that the error is fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic.
The `voting-svc` traffic is now destined for the local Dockerized version of the service: the intercept redirects *all* traffic for `voting-svc` to your fixed local copy.
+
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by your service, all thanks to the preview URL.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.
+
+
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
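+When you're done sharing, you can end the intercept and shut down the local daemons (a sketch; `voting` is the intercept name used above):
+
+```
+$ telepresence leave voting
+$ telepresence quit
+```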
+

## What's Next?

You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.5/quick-start/index.md b/docs/telepresence/2.5/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.5/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.5/quick-start/qs-cards.js b/docs/telepresence/2.5/quick-start/qs-cards.js new file mode 100644 index 000000000..515d7200c --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+// Shared styles for the quick-start cards.
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+// Renders the grid of "what's next" cards shown at the end of each quick start.
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+

      
        Collaborating
      
      
        Use preview URLs to collaborate with your colleagues and others
        outside of your organization.
      

      
        Outbound Sessions
      
      
        While connected to the cluster, your laptop can interact with
        services as if it were another pod in the cluster.
      

      
        FAQs
      
      
        Learn more about use cases and the technical implementation of
        Telepresence.
      

+ ); +} diff --git a/docs/telepresence/2.5/quick-start/qs-go.md b/docs/telepresence/2.5/quick-start/qs-go.md new file mode 100644 index 000000000..92de34cee --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
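If you prefer the command line, a common alternative on macOS is to clear the quarantine attribute that Gatekeeper sets on downloaded files. This assumes the binary was installed to `/usr/local/bin/telepresence` as in step 1:

```
# Remove the Gatekeeper quarantine flag so the binary runs without the warning
$ sudo xattr -d com.apple.quarantine /usr/local/bin/telepresence
```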
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server as you edit code later in this guide. Install it by running:
    `go get github.com/pilu/fresh`
    Then start the Go server:
    `$GOPATH/bin/fresh`

    ```
    $ go get github.com/pilu/fresh

    $ $GOPATH/bin/fresh

    ...
    10:23:41 app | Welcome to the DataProcessingGoService!
    ```


    Install Go from here and set your GOPATH if needed.


4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```


    Victory, your local Go server is running a-ok!


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
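You can also verify the rerouting from the command line. The sketch below assumes the DataProcessingService's Kubernetes Service exposes the same port 3000 and `/color` endpoint used throughout this guide; because `telepresence connect` provides cluster DNS on your laptop, curling the in-cluster name while the intercept is active should return the locally served value:

```
# The local Go server answers directly
$ curl localhost:3000/color
"orange"

# The in-cluster service name is also answered by your laptop, via the intercept
$ curl dataprocessingservice.default:3000/color
"orange"
```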
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.5/quick-start/qs-java.md b/docs/telepresence/2.5/quick-start/qs-java.md new file mode 100644 index 000000000..4a4f437d3 --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Install Telepresence Quick Start Java and learn to use it to intercept services running in your Kubernetes cluster. Speeding up local development and debugging" +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
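Unlike the Go and Node.js variants of this guide, the Spring Boot server does not reload automatically, so each change only takes effect after a restart. A minimal edit-test loop, assuming the server was started with Maven as in step 3:

```
# Stop the running server with Ctrl+C, then start it again
$ mvn spring-boot:run

# In another terminal, confirm the new color is served locally
$ curl localhost:3000/color
"orange"
```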
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.5/quick-start/qs-node.md b/docs/telepresence/2.5/quick-start/qs-node.md new file mode 100644 index 000000000..9d7c32a0b --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Install Telepresence Quick Start - Node.js and learn to use it to intercept services running in your Kubernetes cluster. Speeding up local development and..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
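Under the hood, Telepresence injects a `traffic-agent` sidecar container into the pods of the intercepted workload, and that agent is what forwards requests to your laptop. If you are curious, you can see it with plain `kubectl`; the pod name below is a placeholder, so use whatever `kubectl get pods` reports:

```
# Find the intercepted pod, then look for the injected traffic-agent container
$ kubectl get pods
$ kubectl describe pod dataprocessingservice-<pod-suffix> | grep traffic-agent
```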
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.5/quick-start/qs-python-fastapi.md b/docs/telepresence/2.5/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dadce5ed0 --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Install Telepresence Quick Start - Python (FastAPI) and learn to use it to intercept services running in your Kubernetes cluster. Speeding up local development" +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+
FastAPI requires Python 3, so make sure the commands resolve to a Python 3 installation.
If `python` is Python 3 on your system: `pip install fastapi uvicorn requests && python app.py`
Otherwise: `pip3 install fastapi uvicorn requests && python3 app.py`

    ```
    $ pip install fastapi uvicorn requests

    Collecting fastapi
    ...
    Application startup complete.

    ```

    Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```


    Victory, your local service is running a-ok!


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
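If you plan to iterate on the Python service for a while, you may want its dependencies isolated from your system Python. A standard, framework-agnostic sketch using the built-in `venv` module (the directory name `venv` is just a convention):

```
# Create and activate a virtual environment, then install and run as before
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install fastapi uvicorn requests
(venv) $ python app.py
```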
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.5/quick-start/qs-python.md b/docs/telepresence/2.5/quick-start/qs-python.md new file mode 100644 index 000000000..deae6c1d6 --- /dev/null +++ b/docs/telepresence/2.5/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Install Telepresence Quick Start - Python (Flask) and learn to use it to intercept services running in your Kubernetes cluster. Speeding up local development..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

<div class="docs-article-toc">
<h3>Contents</h3>

* [Prerequisites](#prerequisites)
* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
* [2. Test Telepresence](#2-test-telepresence)
* [3. Install a sample Python application](#3-install-a-sample-python-application)
* [4. Set up a local development environment](#4-set-up-a-local-development-environment)
* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service)
* [6. Make a code change](#6-make-a-code-change)
* [7. Create a Preview URL](#7-create-a-preview-url)
* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next)

</div>
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed
+and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+ [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+ [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster, preferably an empty test cluster. This
+document uses `kubectl` in all example commands, but OpenShift
+users should have no problem substituting in the `oc` command instead.
+
+
+ Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here.
+
+
+If you have used Telepresence previously, please first reset your Telepresence deployment with:
+`telepresence uninstall --everything`.
+
+## 1. Install the Telepresence CLI
+
+
+
+
```shell
# Intel Macs

# Install via brew:
brew install datawire/blackbird/telepresence

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

# Apple silicon Macs

# Install via brew:
brew install datawire/blackbird/telepresence-arm64

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```
+
+
+
```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```
+
+
+
```powershell
# To install Telepresence, run the following commands
# from PowerShell as Administrator.

# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip

# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
Remove-Item 'telepresence.zip'
cd telepresenceInstaller/telepresence

# 3. Run the install-telepresence.ps1 script to install telepresence's dependencies. It will install telepresence to
# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"

# 4. Remove the unzipped directory:
cd ../..
Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force

# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
```
+
+
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster:
+`telepresence connect`

    ```
    $ telepresence connect

    Launching Telepresence Daemon
    ...
    Connected to context default (https://)
    ```

+
+ macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`

    ```
    $ pip install flask requests && python app.py

    Collecting flask
    ...
    Welcome to the DataServiceProcessingPythonService!
    ...

    ```

+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```

+
+ Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
        Intercept name: dataprocessingservice
        State         : ACTIVE
        Workload kind : Deployment
        Destination   : 127.0.0.1:3000
        Intercepting  : all TCP connections
    ```

+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+ The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file, and the Python server will auto-reload. (A simplified sketch of this endpoint appears after this section.)
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
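+
+For reference, here is a minimal sketch of what the color endpoint you just edited might look like. This is a hypothetical simplification for illustration only (the real `app.py` in the `edgey-corp-python` repo differs), but it shows the `DEFAULT_COLOR` idea and why saving the file is enough for the change to show up:
+
```python
from flask import Flask

app = Flask(__name__)

# In step 6 this value was changed from "blue" to "orange".
DEFAULT_COLOR = "orange"

@app.route("/color")
def color():
    # The frontend calls this endpoint to decide which color to render.
    return f'"{DEFAULT_COLOR}"'

if __name__ == "__main__":
    # Listen on port 3000 to match the `--port 3000` intercept;
    # debug=True enables Flask's auto-reload when the file is saved.
    app.run(host="0.0.0.0", port=3000, debug=True)
```
+
+Because the intercept forwards the cluster's DataProcessingService traffic to `localhost:3000`, the auto-reloaded code is what the cluster's frontend sees next.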
+
+## 7. Create a Preview URL
+
+Create a personal intercept with a preview URL, meaning that only
+traffic coming from the preview URL will be intercepted, so you can
+easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and
+   sharing preview URLs:

    ```console
    $ telepresence login
    Launching browser authentication flow...

    Login successful.
    ```

+   If you are in an environment where Telepresence cannot launch a
+   local browser for you to interact with, you will need to pass the
+   [`--apikey` flag to `telepresence
+   login`](../../reference/client/login/).
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default`
+   Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname.

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    To create a preview URL, telepresence needs to know how requests enter
    your cluster. Please select the ingress to use.

    1/4: What's your ingress' IP address?
         You may use an IP address or a DNS name (this is usually a
         "service.namespace" DNS name).

           [default: dataprocessingservice.default]: verylargejavaservice.default

    2/4: What's your ingress' TCP port number?

           [default: 80]: 8080

    3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

           [default: n]:

    4/4: If required by your ingress, specify a different hostname
         (TLS-SNI, HTTP "Host" header) to be used in requests.

           [default: verylargejavaservice.default]:

    Using Deployment dataprocessingservice
    intercepted
        Intercept name  : dataprocessingservice
        State           : ACTIVE
        Workload kind   : Deployment
        Destination     : 127.0.0.1:3000
        Intercepting    : HTTP requests that match all of:
          header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
        Preview URL     : https://.preview.edgestack.me
        Layer 5 Hostname: verylargejavaservice.default
    ```

+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; you will see the orange version of the app.
+
+5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+ The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+ + diff --git a/docs/telepresence/2.5/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.5/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.5/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.5/redirects.yml b/docs/telepresence/2.5/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.5/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.5/reference/architecture.md b/docs/telepresence/2.5/reference/architecture.md new file mode 100644 index 000000000..38e7aff86 --- /dev/null +++ b/docs/telepresence/2.5/reference/architecture.md @@ -0,0 +1,95 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence User-Daemon, installs the Traffic Manager
+in your cluster, authenticates against Ambassador Cloud, and configures all those elements to communicate with one
+another.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the cluster's
+network, handling intercepted traffic.
+
+### User-Daemon
+The User-Daemon installs the Traffic Manager in your cluster and coordinates the creation and deletion of intercepts
+by communicating with the [Traffic Manager](#traffic-manager) once it is running.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a TUN device.
+For a detailed description of how the TUN device manages traffic and why it is necessary, please refer to this blog post: [Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When
+Telepresence is run with either the `connect`, `intercept`, or `list` commands, the Telepresence CLI first checks the
+cluster for the Traffic Manager deployment, and creates it if it is missing.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe
+pod `.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either forward the incoming request to the
+Traffic Manager so that it can be routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container.
This allows users to create intercepts completely
+within the cluster with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon will take arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should be run on and to provide similar configuration that would be provided when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts with development images built
+by CI, so that changes from a pull request can be visualized in a live cluster before they land. The resulting Preview URL
+link is posted to the associated GitHub pull request.
+
+See the [Deployment Previews quick-start](../../../../cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews
+or for a reference on how Pod-Daemon can be manually deployed to the cluster.
+
+
+# Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions
+to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.5/reference/client.md b/docs/telepresence/2.5/reference/client.md new file mode 100644 index 000000000..491dbbb8e --- /dev/null +++ b/docs/telepresence/2.5/reference/client.md @@ -0,0 +1,31 @@ +---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/)
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment|
+| `status` | Shows the current connectivity status |
+| `quit` | Tells Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the service name to be intercepted and the port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Create or remove [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
+| `loglevel` | Temporarily change the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gather logs from traffic-manager, traffic-agents, user, and root daemons, and export them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Show version of Telepresence CLI + Traffic-Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager.
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring license](../cluster-config#add-license-to-cluster) in an air-gapped environment | diff --git a/docs/telepresence/2.5/reference/client/login.md b/docs/telepresence/2.5/reference/client/login.md new file mode 100644 index 000000000..cab123256 --- /dev/null +++ b/docs/telepresence/2.5/reference/client/login.md @@ -0,0 +1,62 @@ +# Telepresence Login
+
```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud).
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so you rarely need to run `telepresence login`
+explicitly; it is only truly necessary when you require a
+non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local [Telepresence](/products/telepresence/) process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs
+the Telepresence binary. The Telepresence enhanced free client [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts from
+Ambassador Cloud.
+
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/.
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface. diff --git a/docs/telepresence/2.5/reference/client/login/apikey-2.png b/docs/telepresence/2.5/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.5/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.5/reference/client/login/apikey-3.png b/docs/telepresence/2.5/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.5/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.5/reference/client/login/apikey-4.png b/docs/telepresence/2.5/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.5/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.5/reference/cluster-config.md b/docs/telepresence/2.5/reference/cluster-config.md new file mode 100644 index 000000000..aad5b64b4 --- /dev/null +++ b/docs/telepresence/2.5/reference/cluster-config.md @@ -0,0 +1,312 @@ +import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.17.0` or higher).
+
+However, some advanced features do require some configuration in the
+cluster.
+
+## TLS
+
+In this example, other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`), you need to tell Telepresence about the TLS
+certificates in use.
+
+Tell Telepresence about the certificates in use by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
     spec:
+      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
       containers:
```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application).
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod.
+
+The Pod will need to have permission to `get` and `watch` each of
+those Secrets.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air gapped cluster
+
+If your cluster is on an isolated network such that it cannot
+communicate with Ambassador Cloud, then some additional configuration
+is required to acquire a license key in order to use personal
+intercepts.
+
+### Create a license
+
+1.
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:

    ```
    $ telepresence current-cluster-id

      Cluster ID: 
    ```

+4. Click *Generate API Key* to finish generating the license.
+
+5. On the licenses page, download the license file associated with your cluster.
+
+### Add license to cluster
+There are two separate ways you can add the license to your cluster: manually creating and deploying
+the license secret, or having the Helm chart manage the secret.
+
+You only need to do one of the two options.
+
+#### Manual deploy of license secret
+
+1. 
Use this command to generate a Kubernetes Secret config using the license file:

    ```
    $ telepresence license -f 

    apiVersion: v1
    data:
      hostDomain: 
      license: 
    kind: Secret
    metadata:
      creationTimestamp: null
      name: systema-license
      namespace: ambassador
    ```

+2. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
+the following into a file (for this example, we'll assume it's called `license-values.yaml`):

    ```
    licenseKey:
      # This mounts the secret into the traffic-manager
      create: true
      secret:
        # This tells the helm chart not to create the secret since you've created it yourself
        create: false
    ```

+4. Install the Helm chart into the cluster:

    ```
    helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml
    ```

+5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
+pulled and in a registry your cluster can pull from.
+
+6. Have users set the `images` [config key](../config/#images) so telepresence uses the aforementioned image for their agent.
+
+#### Helm chart manages the secret
+
+1. Get the JWT token from the downloaded license file:

    ```
    $ cat ~/Downloads/ambassador.License_for_yourcluster
    eyJhbGnotarealtoken.butanexample
    ```

+2. Create the following values file, substituting your real JWT token for the one used in the example below
+(for this example, we'll assume the following is placed in a file called `license-values.yaml`):

    ```
    licenseKey:
      # This mounts the secret into the traffic-manager
      create: true
      # This is the value from the license file you downloaded. This value is an example and will not work
      value: eyJhbGnotarealtoken.butanexample
      secret:
        # This tells the helm chart to create the secret
        create: true
    ```

+3. Install the Helm chart into the cluster:

    ```
    helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml
    ```

+Users will now be able to use preview intercepts with the
+`--preview-url=false` flag. Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet)
+template to add the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your
+cluster so that it reflects the desired state from an external Git repository, this behavior can make
+your workload out of sync with that external desired state.
+
+To solve this issue, you can use Telepresence's Mutating Webhook alternative mechanism. Intercepted
+workloads will then stay untouched and only the underlying pods will be modified to inject the Traffic
+Agent sidecar container and update the port definitions.
+ +Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your +workload template's annotations: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ telepresence.getambassador.io/inject-traffic-agent: enabled + spec: + containers: +``` + +### Service Port Annotation + +A service port annotation can be added to the workload to make the Mutating Webhook select a specific port +in the service. This is necessary when the service has multiple ports. + +```diff + spec: + template: + metadata: + labels: + service: your-service + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled ++ telepresence.getambassador.io/inject-service-port: https + spec: + containers: +``` + +### Service Name Annotation + +A service name annotation can be added to the workload to make the Mutating Webhook select a specific Kubernetes service. +This is necessary when the workload is exposed by multiple services. + +```diff + spec: + template: + metadata: + labels: + service: your-service + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled ++ telepresence.getambassador.io/inject-service-name: my-service + spec: + containers: +``` + +### Note on Numeric Ports + +If the targetPort of your intercepted service is pointing at a port number, in addition to +injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will +reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent. + + +Note that this initContainer requires `NET_ADMIN` capabilities. +If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. + + + +This requires the Traffic Agent to run as GID 7777. By default, this is disabled on openshift clusters. +To enable running as GID 7777 on a specific openshift namespace, run: +oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE + + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.5/reference/config.md b/docs/telepresence/2.5/reference/config.md new file mode 100644 index 000000000..6722bc936 --- /dev/null +++ b/docs/telepresence/2.5/reference/config.md @@ -0,0 +1,285 @@ +# Laptop-side configuration + +## Global Configuration +Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. 
The location of this file varies based on your OS:
+
+* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
+* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
+* Windows: `%APPDATA%\telepresence\config.yml`
+
+For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.
+
+### Values
+
+The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `telepresenceAPI`, and `daemons` keys.
+
+Here is an example configuration to show you the conventions of how Telepresence is configured:
+**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**
+
```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```
+
+#### Timeouts
+
+Values for `timeouts` are all durations either as a number of seconds
+or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
+can be fractional (`1.5h`) or combined (`2h45m`).
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `logLevels` fields are one of the following strings,
+case insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+ - `fatal`
+ - `panic`
+
+For whichever log level you select, you will get logs labeled with that level and of higher severity
+(e.g. if you use `info`, you will also get logs labeled `error`, but NOT logs labeled `debug`).
+
+These are the valid fields for the `logLevels` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`.
+
+These are the valid fields for the `images` key:
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
+| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image.
*The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|---------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h | +| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io | +| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 | + +Telepresence attempts to auto-detect if the cluster is capable of +communication with Ambassador Cloud, but may still prompt you to log +in in cases where only the on-laptop client wishes to communicate with +Ambassador Cloud. If you want those auto-login points to be disabled +as well, or would like it to not attempt to communicate with +Ambassador Cloud at all (even for the auto-detection), then be sure to +set the `skipLogin` value to `true`. + +Reminder: To use personal intercepts, which normally require a login, +you must have a license key in your cluster and specify which +`agentImage` should be installed by also adding the following to your +`config.yml`: + +```yaml +images: + agentImage: / +``` + +#### Grpc +The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC. + +The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value: +``` +128974848, 129e6, 129M, 123Mi +``` + +#### RESTful API server +The `telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. 
When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Daemons
+
+`daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `userDaemonBinary` | The path to the binary you want to use for the User Daemon. | [string][yaml-str] | The path to the Telepresence executable |
+
+
+## Per-Cluster Configuration
+Some configuration is not global to Telepresence and is actually specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.
+
+### Values
+The current per-cluster configuration supports the `dns`, `also-proxy`, `never-proxy`, and `manager` keys.
+To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so:
+
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
        also-proxy:
        never-proxy:
        manager:
  name: example-cluster
```
+#### DNS
+The fields for `dns` are: `local-ip`, `remote-ip`, `exclude-suffixes`, `include-suffixes`, and `lookup-timeout`.
+
+| Field | Description | Type | Default |
+|-------|-------------|------|---------|
+| `local-ip` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `remote-ip` | The address of the cluster's DNS service. | IP address [string][yaml-str] | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service |
+| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver) | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookup-timeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+Here is an example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
          include-suffixes:
          - .se
          exclude-suffixes:
          - .com
  name: example-cluster
```
+
+
+#### AlsoProxy
+
+When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device.
+All connections to addresses that the subnet spans will be dispatched to the cluster.
+
+Here is an example kubeconfig for the subnet `1.2.3.4/32`:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        also-proxy:
        - 1.2.3.4/32
  name: example-cluster
```
+
+#### NeverProxy
+
+When using `never-proxy`, you provide a list of subnets after the key in your kubeconfig file. These will never be routed via the
+TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+telepresence connects is the route they will keep.
+
+Here is an example kubeconfig for the subnet `1.2.3.4/32`:
+
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        never-proxy:
        - 1.2.3.4/32
  name: example-cluster
```
+
+##### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
+Usually this means that the most specific route will win.
+
+So, for example, if an `also-proxy` subnet falls within a broader `never-proxy` subnet:
+
```yaml
never-proxy: [10.0.0.0/16]
also-proxy: [10.0.5.0/24]
```
+
+Then the specific `also-proxy` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `never-proxy` subnet is inside a larger `also-proxy` subnet:
+
```yaml
also-proxy: [10.0.0.0/16]
never-proxy: [10.0.5.0/24]
```
+
+Then all of the `also-proxy` subnet `10.0.0.0/16` will be proxied, with the exception of the specific `never-proxy` subnet `10.0.5.0/24`.
+
+#### Manager
+
+The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct telepresence to connect to a manager in namespace `staging`:
+
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45 diff --git a/docs/telepresence/2.5/reference/dns.md b/docs/telepresence/2.5/reference/dns.md new file mode 100644 index 000000000..e38fbc61d --- /dev/null +++ b/docs/telepresence/2.5/reference/dns.md @@ -0,0 +1,75 @@ +# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts.
Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster public IP>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)
  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected: without an active intercept in that namespace, Telepresence cannot yet reach the service by its short name.

```
$ curl web-app.emojivoto:80

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Emoji Vote</title>
    ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Emoji Vote</title>
    ...
```

Now curling that service by its short name works, and will continue to work for as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>`, regardless of intercepts.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.

diff --git a/docs/telepresence/2.5/reference/docker-run.md b/docs/telepresence/2.5/reference/docker-run.md
new file mode 100644
index 000000000..2262f0a55
--- /dev/null
+++ b/docs/telepresence/2.5/reference/docker-run.md
@@ -0,0 +1,31 @@
---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept <service_name> --port <port> --docker-run -- <docker run arguments> <image>`

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`.
To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container. + +`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2` + +## Ports + +The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port :`. The container port will default to the local port when using the `--port ` syntax. + +## Flags + +Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line. + +- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces +- `--env-file ` Loads the intercepted environment +- `--name intercept--` Names the Docker container, this flag is omitted if explicitly given on the command line +- `-p ` The local port for the intercept and the container port +- `-v ` Volume mount specification, see CLI help for `--mount` and `--docker-mount` flags for more info diff --git a/docs/telepresence/2.5/reference/environment.md b/docs/telepresence/2.5/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.5/reference/environment.md @@ -0,0 +1,46 @@ +--- +description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop." +--- + +# Environment variables + +Telepresence can import environment variables from the cluster pod when running an intercept. +You can then use these variables with the code running on your laptop of the service being intercepted. + +There are three options available to do this: + +1. `telepresence intercept [service] --port [port] --env-file=FILENAME` + + This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information. + +2. `telepresence intercept [service] --port [port] --env-json=FILENAME` + + This will write the environment variables to a JSON file. This file can be injected into other build processes. + +3. `telepresence intercept [service] --port [port] -- [COMMAND]` + + This will run a command locally with the pod's environment variables set on your laptop. Once the command quits the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]` to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means. + + Another use would be running a subshell, Bash for example: + + `telepresence intercept [service] --port [port] -- /bin/bash` + + This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod. + +## Telepresence Environment Variables + +Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod: + +### TELEPRESENCE_ROOT +Directory where all remote volumes mounts are rooted. See [Volume Mounts](../volume/) for more info. + +### TELEPRESENCE_MOUNTS +Colon separated list of remotely mounted directories. 
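For example, a script run in the intercept's environment could enumerate the remote mounts like this (a minimal sketch; the variables are set by Telepresence, and the loop body is hypothetical):

```bash
# TELEPRESENCE_MOUNTS is colon separated, so split it on ':'.
# Each entry is a remote directory, available locally under TELEPRESENCE_ROOT.
IFS=':' read -ra MOUNTS <<< "$TELEPRESENCE_MOUNTS"
for dir in "${MOUNTS[@]}"; do
  ls "${TELEPRESENCE_ROOT}${dir}"
done
```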
+ +### TELEPRESENCE_CONTAINER +The name of the intercepted container. Useful when a pod has several containers, and you want to know which one that was intercepted by Telepresence. + +### TELEPRESENCE_INTERCEPT_ID +ID of the intercept (same as the "x-intercept-id" http header). + +Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do, instead filter based on a matching `x-intercept-id` header. This is to assure that messages belonging to a certain intercept always are consumed by the intercepting process. diff --git a/docs/telepresence/2.5/reference/inside-container.md b/docs/telepresence/2.5/reference/inside-container.md new file mode 100644 index 000000000..637e0cdfd --- /dev/null +++ b/docs/telepresence/2.5/reference/inside-container.md @@ -0,0 +1,37 @@ +# Running Telepresence inside a container + +It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid any side effects on the workstation's network, another can be to establish multiple sessions with the traffic manager, or even work with different clusters simultaneously. + +## Building the container + +Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`: + +```Dockerfile +# Dockerfile with telepresence and its prerequisites +FROM alpine:3.13 + +# Install Telepresence prerequisites +RUN apk add --no-cache curl iproute2 sshfs + +# Download and install the telepresence binary +RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \ + install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence +``` +In order to build the container, do this in the same directory as the `Dockerfile`: +``` +$ docker build -t tp-in-docker . +``` + +## Running the container + +Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface. + +The command to run the container can look like this: +```bash +$ docker run \ + --cap-add=NET_ADMIN \ + --device /dev/net/tun:/dev/net/tun \ + --network=host \ + -v ~/.kube/config:/root/.kube/config \ + -it --rm tp-in-docker +``` diff --git a/docs/telepresence/2.5/reference/intercepts/index.md b/docs/telepresence/2.5/reference/intercepts/index.md new file mode 100644 index 000000000..1d573d9bd --- /dev/null +++ b/docs/telepresence/2.5/reference/intercepts/index.md @@ -0,0 +1,354 @@ +import Alert from '@material-ui/lab/Alert'; + +# Intercepts + +When intercepting a service, Telepresence installs a *traffic-agent* +sidecar in to the workload. That traffic-agent supports one or more +intercept *mechanisms* that it uses to decide which traffic to +intercept. 
Telepresence has a simple default traffic-agent, however
you can configure a different traffic-agent with more sophisticated
mechanisms either by setting the [`images.agentImage` field in
`config.yml`](../config/#images) or by writing an
[`extensions/${extension}.yml`][extensions] file that tells
Telepresence about a traffic-agent that it can use, what mechanisms
that traffic-agent supports, and command-line flags to expose to the
user to configure that mechanism. You may tell Telepresence which
known mechanism to use with the `--mechanism=${mechanism}` flag or by
setting one of the `--${mechanism}-XXX` flags, which implicitly set
the mechanism; for example, setting `--http-header=auto` implicitly
sets `--mechanism=http`.

The default open-source traffic-agent only supports the `tcp`
mechanism, which treats the raw layer 4 TCP streams as opaque and
sends all of that traffic down to the developer's workstation. This
means that it is a "global" intercept, affecting all users of the
cluster.

In addition to the default open-source traffic-agent, Telepresence
already knows about the Ambassador Cloud
[traffic-agent][ambassador-agent], which supports the `http`
mechanism. The `http` mechanism operates at a higher layer, working
with layer 7 HTTP, and may intercept specific HTTP requests, allowing
other HTTP requests through to the regular service. This allows for
"personal" intercepts which only intercept traffic tagged as belonging
to a given developer.

[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50

## Intercept behavior when logged in to Ambassador Cloud

Logging in to Ambassador Cloud (with [`telepresence
login`](../client/login/)) changes the Telepresence defaults in two
ways.

First, being logged in to Ambassador Cloud causes Telepresence to
default to `--mechanism=http --http-header=auto --http-path-prefix=/`
(`--mechanism=http` is redundant here, as it is implied by the other
`--http-xxx` flags). If you hadn't been logged in, it would have
defaulted to `--mechanism=tcp`. This tells Telepresence to use the
Ambassador Cloud traffic-agent to do smart "personal" intercepts and
only intercept a subset of HTTP requests, rather than intercepting the
entirety of all TCP connections. This is important for working in a
shared cluster with teammates, and is important for the preview URL
functionality below. See `telepresence intercept --help` for
information on using the `--http-header` and `--http-path-xxx` flags to
customize which requests are intercepted.

Secondly, being logged in causes Telepresence to default to
`--preview-url=true`. If you hadn't been logged in, it would have
defaulted to `--preview-url=false`. This tells Telepresence to take
advantage of Ambassador Cloud to create a preview URL for this
intercept, creating a shareable URL that automatically sets the
appropriate headers to have requests coming from the preview URL be
intercepted. In order to create the preview URL, it will prompt you
for four settings about how your cluster's ingress is configured. For
each, Telepresence tries to intelligently detect the correct value for
your cluster; if it detects it correctly, you may simply press "enter" and
accept the default, otherwise you must tell Telepresence the correct
value.
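For example, a personal intercept that only captures requests carrying a specific header could be created like this (a sketch; the workload name and header value are hypothetical):

```console
$ telepresence intercept example-api --port 8080 --http-header=x-dev-user=jane
```

Requests without a matching `x-dev-user: jane` header pass through to the workload running in the cluster.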
+ +When creating an intercept with the `http` mechanism, the +traffic-agent sends a `GET /telepresence-http2-check` request to your +service and to the process running on your local machine at the port +specified in your intercept, in order to determine if they support +HTTP/2. This is required for the intercepts to behave correctly. If +you do not have a service running locally when the intercept is +created, the traffic-agent will use the result it got from checking +the in-cluster service. + +## Supported workloads + +Kubernetes has various +[workloads](https://kubernetes.io/docs/concepts/workloads/). +Currently, Telepresence supports intercepting (installing a +traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`. + + + +While many of our examples use Deployments, they would also work on +ReplicaSets and StatefulSets + + + +## Specifying a namespace for an intercept + +The namespace of the intercepted workload is specified using the +`--namespace` option. When this option is used, and `--workload` is +not used, then the given name is interpreted as the name of the +workload and the name of the intercept will be constructed from that +name and the namespace. + +```shell +telepresence intercept hello --namespace myns --port 9000 +``` + +This will intercept a workload named `hello` and name the intercept +`hello-myns`. In order to remove the intercept, you will need to run +`telepresence leave hello-mydns` instead of just `telepresence leave +hello`. + +The name of the intercept will be left unchanged if the workload is specified. + +```shell +telepresence intercept myhello --namespace myns --workload hello --port 9000 +``` + +This will intercept a workload named `hello` and name the intercept `myhello`. + +## Importing environment variables + +Telepresence can import the environment variables from the pod that is +being intercepted, see [this doc](../environment/) for more details. + +## Creating an intercept without a preview URL + +If you *are not* logged in to Ambassador Cloud, the following command +will intercept all traffic bound to the service and proxy it to your +laptop. This includes traffic coming through your ingress controller, +so use this option carefully as to not disrupt production +environments. + +```shell +telepresence intercept --port= +``` + +If you *are* logged in to Ambassador Cloud, setting the +`--preview-url` flag to `false` is necessary. + +```shell +telepresence intercept --port= --preview-url=false +``` + +This will output an HTTP header that you can set on your request for +that traffic to be intercepted: + +```console +$ telepresence intercept --port= --preview-url=false +Using Deployment +intercepted + Intercept name: + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp(":") +``` + +Run `telepresence status` to see the list of active intercepts. + +```console +$ telepresence status +Root Daemon: Running + Version : v2.1.4 (api 3) + Primary DNS : "" + Fallback DNS: "" +User Daemon: Running + Version : v2.1.4 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https:// + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 1 total + dataprocessingnodeservice: @ +``` + +Finally, run `telepresence leave ` to stop the intercept. 
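For the example above, that would be (the name matches the intercept listed by `telepresence status`):

```console
$ telepresence leave dataprocessingnodeservice
```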
+ +## Skipping the ingress dialogue + +You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown. + +| Flag | Description | Required | +|------------------|------------------------------------------------------------------|------------| +| `--ingress-host` | The ip address for the ingress | yes | +| `--ingress-port` | The port for the ingress | yes | +| `--ingress-tls` | Whether tls should be used | no | +| `--ingress-l5` | Whether a different ip address should be used in request headers | no | + +## Creating an intercept when a service has multiple ports + +If you are trying to intercept a service that has multiple ports, you +need to tell Telepresence which service port you are trying to +intercept. To specify, you can either use the name of the service +port or the port number itself. To see which options might be +available to you and your service, use kubectl to describe your +service or look in the object's YAML. For more information on multiple +ports, see the [Kubernetes documentation][kube-multi-port-services]. + +[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services + +```console +$ telepresence intercept --port=: +Using Deployment +intercepted + Intercept name : + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Service Port Identifier: + Intercepting : all TCP connections +``` + +When intercepting a service that has multiple ports, the name of the +service port that has been intercepted is also listed. + +If you want to change which port has been intercepted, you can create +a new intercept the same way you did above and it will change which +service port is being intercepted. + +## Creating an intercept When multiple services match your workload + +Oftentimes, there's a 1-to-1 relationship between a service and a +workload, so telepresence is able to auto-detect which service it +should intercept based on the workload you are trying to intercept. +But if you use something like +[Argo](https://www.getambassador.io/docs/argo/latest/), there may be +two services (that use the same labels) to manage traffic between a +canary and a stable service. + +Fortunately, if you know which service you want to use when +intercepting a workload, you can use the `--service` flag. So in the +aforementioned example, if you wanted to use the `echo-stable` service +when intercepting your workload, your command would look like this: + +```console +$ telepresence intercept echo-rollout- --port --service echo-stable +Using ReplicaSet echo-rollout- +intercepted + Intercept name : echo-rollout- + State : ACTIVE + Workload kind : ReplicaSet + Destination : 127.0.0.1:3000 + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036 + Intercepting : all TCP connections +``` + +## Port-forwarding an intercepted container's sidecars + +Sidecars are containers that sit in the same pod as an application +container; they usually provide auxiliary functionality to an +application, and can usually be reached at +`localhost:${SIDECAR_PORT}`. For example, a common use case for a +sidecar is to proxy requests to a database, your application would +connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then +connect to the database, perhaps augmenting the connection with TLS or +authentication. 
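For instance, an application in such a pod might reach its database through the sidecar like this (a sketch; the port and database name are hypothetical):

```bash
# The application never dials the database directly; it connects to the
# proxy sidecar on localhost, which forwards the connection (adding TLS
# or authentication) to the real database.
psql "host=localhost port=5432 dbname=exampledb"
```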
+ +When intercepting a container that uses sidecars, you might want those +sidecars' ports to be available to your local application at +`localhost:${SIDECAR_PORT}`, exactly as they would be if running +in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this +behavior, adding port-forwards for the port given. + +```console +$ telepresence intercept --port=: --to-pod= +Using Deployment +intercepted + Intercept name : + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Service Port Identifier: + Intercepting : all TCP connections +``` + +If there are multiple ports that you need forwarded, simply repeat the +flag (`--to-pod= --to-pod=`). + +## Intercepting headless services + +Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services), +which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods. +Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP. +So, for example, if you have the following service: + +```yaml +--- +apiVersion: v1 +kind: Service +metadata: + name: my-headless +spec: + type: ClusterIP + clusterIP: None + selector: + service: my-headless + ports: + - port: 8080 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: my-headless + labels: + service: my-headless +spec: + replicas: 1 + serviceName: my-headless + selector: + matchLabels: + service: my-headless + template: + metadata: + labels: + service: my-headless + spec: + containers: + - name: my-headless + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +``` + +You can intercept it like any other: + +```console +$ telepresence intercept my-headless --port 8080 +Using StatefulSet my-headless +intercepted + Intercept name : my-headless + State : ACTIVE + Workload kind : StatefulSet + Destination : 127.0.0.1:8080 + Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712 + Intercepting : all TCP connections +``` + + +This utilizes an initContainer that requires `NET_ADMIN` capabilities. +If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. + + + +This requires the Traffic Agent to run as GID 7777. By default, this is disabled on openshift clusters. +To enable running as GID 7777 on a specific openshift namespace, run: +oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE + + + +Intercepting headless services without a selector is not supported. + diff --git a/docs/telepresence/2.5/reference/intercepts/manual-agent.md b/docs/telepresence/2.5/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..e818171ce --- /dev/null +++ b/docs/telepresence/2.5/reference/intercepts/manual-agent.md @@ -0,0 +1,221 @@ +import Alert from '@material-ui/lab/Alert'; + +# Manually injecting the Traffic Agent + +You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted. + +When you use a Telepresence intercept, Telepresence automatically edits the workload and services when you use +`telepresence uninstall --agent `. In some GitOps workflows, you may need to use the +[Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) to keep intercepted workloads unmodified +while you target changes on specific pods. 
+ + +In situations where you don't have access to the proper permissions for numeric ports, as noted in the Note on numeric ports +section of the documentation, it is possible to manually inject the Traffic Agent. Because this is not the recommended approach +to making a workload interceptable, try the Mutating Webhook before proceeding." + + +## Procedure + +You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page +uses the following Deployment: + + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +``` + +The deployment is being exposed by the following service: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 + targetPort: 8080 +``` + +### 1. Generating the YAML + +First, generate the YAML for the traffic-agent container: + +```console +$ telepresence genyaml container --container-name echo-container --port 8080 --output - --input deployment.yaml +args: +- agent +env: +- name: TELEPRESENCE_CONTAINER + value: echo-container +- name: _TEL_AGENT_LOG_LEVEL + value: info +- name: _TEL_AGENT_NAME + value: my-service +- name: _TEL_AGENT_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace +- name: _TEL_AGENT_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP +- name: _TEL_AGENT_APP_PORT + value: "8080" +- name: _TEL_AGENT_AGENT_PORT + value: "9900" +- name: _TEL_AGENT_MANAGER_HOST + value: traffic-manager.ambassador +image: docker.io/datawire/tel2:2.4.6 +name: traffic-agent +ports: +- containerPort: 9900 + protocol: TCP +readinessProbe: + exec: + command: + - /bin/stat + - /tmp/agent/ready +resources: {} +volumeMounts: +- mountPath: /tel_pod_info + name: traffic-annotations +``` + +Next, generate the YAML for the volume: + +```console +$ telepresence genyaml volume --output - --input deployment.yaml +downwardAPI: + items: + - fieldRef: + fieldPath: metadata.annotations + path: annotations +name: traffic-annotations +``` + + +Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags. + + +### 2. Injecting the YAML into the Deployment + +You need to add the `Deployment` YAML you genereated to include the container and the volume. These are placed as elements of `spec.template.spec.containers` and `spec.template.spec.volumes` respectively. +You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`. 
+These changes should look like the following: + +```diff +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: TELEPRESENCE_CONTAINER ++ value: echo-container ++ - name: _TEL_AGENT_LOG_LEVEL ++ value: info ++ - name: _TEL_AGENT_NAME ++ value: my-service ++ - name: _TEL_AGENT_NAMESPACE ++ valueFrom: ++ fieldRef: ++ fieldPath: metadata.namespace ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ fieldPath: status.podIP ++ - name: _TEL_AGENT_APP_PORT ++ value: "8080" ++ - name: _TEL_AGENT_AGENT_PORT ++ value: "9900" ++ - name: _TEL_AGENT_MANAGER_HOST ++ value: traffic-manager.ambassador ++ image: docker.io/datawire/tel2:2.4.6 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: {} ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations +``` + +### 3. Modifying the service + +Once the modified deployment YAML has been applied to the cluster, you need to modify the Service to route traffic to the Traffic Agent. +You can do this by changing the exposed `targetPort` to `9900`. The resulting service should look like: + +```diff +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 +- targetPort: 8080 ++ targetPort: 9900 +``` diff --git a/docs/telepresence/2.5/reference/linkerd.md b/docs/telepresence/2.5/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.5/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. 
+ +```yaml +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: quote +spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + annotations: + linkerd.io/inject: "enabled" + config.linkerd.io/skip-outbound-ports: "8081,8022,6001" + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:0.4.1 + ports: + - name: http + containerPort: 8000 + env: + - name: PORT + value: "8000" + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +## Connect to Telepresence +Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`: + +``` +$ telepresence list + + quote: ready to intercept (traffic-agent not yet installed) +``` + +## Run the intercept +Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service. diff --git a/docs/telepresence/2.5/reference/rbac.md b/docs/telepresence/2.5/reference/rbac.md new file mode 100644 index 000000000..6c39739e9 --- /dev/null +++ b/docs/telepresence/2.5/reference/rbac.md @@ -0,0 +1,291 @@ +import Alert from '@material-ui/lab/Alert'; + +# Telepresence RBAC +The intention of this document is to provide a template for securing and limiting the permissions of Telepresence. +This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster. + +There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator described above. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources. + +In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes which is outside of the scope of the document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page. + +## Requirements + +- Kubernetes version 1.16+ +- Cluster admin privileges to apply RBAC + +## Editing your kubeconfig + +This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (i.e. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in your current directory that looks like the following:​ + +```yaml +apiVersion: v1 +kind: Config +clusters: +- name: my-cluster # Must match the cluster value in the contexts config + cluster: + ## The cluster field is highly cloud dependent. 
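    ## For example, a generic entry might contain (hypothetical values):
    # server: https://my-cluster.example.com:6443
    # certificate-authority-data: <base64-encoded CA certificate>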
+contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: + - "" + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] + - apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list"] + - apiGroups: + - "rbac.authorization.k8s.io" + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: + - "" + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: + - "" + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: + - "admissionregistration.k8s.io" + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: + - "" + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to install the traffic-manager: Using `telepresence connect` and installing the [helm chart](../../install/helm/). + +By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. 
If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate value + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +- apiGroups: + - "" + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete"] +- apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "update", "watch"] +- apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] +- apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "patch", "watch"] +- apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] +- apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list"] +- apiGroups: + - "rbac.authorization.k8s.io" + resources: ["clusterroles", "clusterrolebindings"] + verbs: ["get", "list", "watch"] +- apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cluster where users are constrained to a specific namespace(s). 
+ +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role +rules: +- apiGroups: + - "" + resources: ["pods"] + verbs: ["get", "list", "create", "watch", "delete"] +- apiGroups: + - "" + resources: ["services"] + verbs: ["update"] +- apiGroups: + - "" + resources: ["pods/portforward"] + verbs: ["create"] +- apiGroups: + - "apps" + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "watch"] +- apiGroups: + - "getambassador.io" + resources: ["hosts", "mappings"] + verbs: ["*"] +- apiGroups: + - "" + resources: ["endpoints"] + verbs: ["get", "list", "watch"] +--- +kind: RoleBinding # RBAC to access ambassador namespace +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: t2-ambassador-binding + namespace: ambassador +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount + namespace: ambassador +roleRef: + kind: ClusterRole + name: telepresence-role + apiGroup: rbac.authorization.k8s.io +--- +kind: RoleBinding # RoleBinding T2 namespace to be intecpeted +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-test-binding # Update "test" for appropriate namespace to be intercepted + namespace: test # Update "test" for appropriate namespace to be intercepted +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount + namespace: ambassador +roleRef: + kind: ClusterRole + name: telepresence-role + apiGroup: rbac.authorization.k8s.io +​ +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-namespace-role +rules: +- apiGroups: + - "" + resources: ["namespaces"] + verbs: ["get", "list", "watch"] +- apiGroups: + - "" + resources: ["services"] + verbs: ["get", "list", "watch"] +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-namespace-binding +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount + namespace: ambassador +roleRef: + kind: ClusterRole + name: telepresence-namespace-role + apiGroup: rbac.authorization.k8s.io +``` diff --git a/docs/telepresence/2.5/reference/restapi.md b/docs/telepresence/2.5/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.5/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server + +[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has two endpoints. The standard `healthz` endpoint and the `consume-here` endpoint. + +## Enabling the server +The server is enabled by setting the `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install. 
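For example, when installing the traffic-manager with Helm, the port could be set like this (a sketch; `9980` matches the port used in the examples below, and the chart location and namespace follow the Telepresence Helm installation docs):

```console
$ helm install traffic-manager datawire/telepresence \
    --namespace ambassador \
    --set telepresenceAPI.port=9980
```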
+ +## Querying the server +On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process. + +## Endpoints + +The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in a `x-telepresence-caller-intercept-id: = ` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code. + +There are three prerequisites to fulfill before testing The `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation: +1. An intercept must be active +2. The "/healthz" endpoint must respond with OK +3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`. + +### healthz +The `http://localhost:/healthz` endpoint should respond with status code 200 OK. If it doesn't then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation. + +#### test endpoint using curl +A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980. +```console +$ curl -v localhost:9980/healthz +* Trying ::1:9980... +* Connected to localhost (::1) port 9980 (#0) +> GET /healthz HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Date: Fri, 26 Nov 2021 07:06:18 GMT +< Content-Length: 0 +< +* Connection #0 to host localhost left intact +``` + +### consume-here +`http://localhost:/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted. Matching headers means that the message should be consumed. + +#### test endpoint using curl +Assuming that the API-server runs on port 9980, that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that the "/consume-here" returns "true" for the path "/api" and given headers. +```console +$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980... 
+* Connected to localhost (::1) port 9980 (#0) +> GET /consume-here?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Fri, 26 Nov 2021 06:43:28 GMT +< Content-Length: 4 +< +* Connection #0 to host localhost left intact +true +``` + +If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod. + +### intercept-info +`http://localhost:/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted in case `intercepted` is `false`. + +#### test endpoint using curl +Assuming that the API-server runs on port 9980, that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that the "/intercept-info" returns information for the given path and headers. +```console +$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980...* Connected to localhost (127.0.0.1) port 9980 (#0) +> GET /intercept-info?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.79.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Tue, 01 Feb 2022 11:39:55 GMT +< Content-Length: 68 +< +{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}} +* Connection #0 to host localhost left intact +``` diff --git a/docs/telepresence/2.5/reference/routing.md b/docs/telepresence/2.5/reference/routing.md new file mode 100644 index 000000000..061ba8fa9 --- /dev/null +++ b/docs/telepresence/2.5/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing + +## Outbound + +### DNS resolution +When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers but only one of them will be used on a workstation at any given time. Common for all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace but more entries can be added to the list using the `include-suffixes` option in the +[local DNS configuration](../config/#dns) + +#### Cluster side DNS lookups +The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case, the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. 
It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)]. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each currently mapped namespace and each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts, so that single-label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to override that IP, so that requests to it instead end up in the overriding resolver, which will use the _fallback_ unless it can resolve the query on its own.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (a shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't.
Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic, and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, plus the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach: Telepresence only knows that the request originated from the workstation. It cannot know that the request is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. If correct connection origin is required, the intercept must be made on a workload that has been deployed in the cluster.

There are multiple reasons for choosing the connection origin this way. One is that it is important that the request originates from the correct namespace. For example:

```bash
curl some-host
```

results in an HTTP request with the header `Host: some-host`. If a service mesh like Istio performs header-based routing, it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules; if the request originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common for clusters used in development, such as Minikube, Minishift, or k3s, to run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subject to recursion.

#### DNS recursion
When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver may be asked to resolve a query that was issued from the cluster. Telepresence must check whether such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its behavior so that queries believed to be recursions are detected and answered with an NXNAME record.
Telepresence applies this heuristic on a best-effort basis; it may not be completely accurate in all situations. There is a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions would occur if the VIF simply dispatched such attempts on to the cluster.

The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, the trapped attempts are allowed to proceed. If the first attempt failed, the trapped attempts fail as well.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this was accomplished by the traffic-manager dynamically creating a port that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

As of 2.3.2, the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection, and the traffic-manager forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.

##### Footnotes:

1: A future version of Telepresence will not allow concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
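As a sketch of that trick (the IP is a deliberately invalid hypothetical value, and the exact error wording varies with the Kubernetes version), asking for an out-of-range ClusterIP makes Kubernetes reveal the service subnet:

```console
$ kubectl create service clusterip dummy --clusterip=1.2.3.4 --tcp=80
The Service "dummy" is invalid: spec.clusterIPs: Invalid value: []string{"1.2.3.4"}:
failed to allocate IP 1.2.3.4: provided IP is not in the valid range.
The range of valid IPs is 10.96.0.0/12
```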

diff --git a/docs/telepresence/2.5/reference/tun-device.md b/docs/telepresence/2.5/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.5/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface + +The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself. + +### TUN-Device +The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3). + +## Gains when using the VIF + +### Both TCP and UDP +The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`). + +### No SSH required + +The VIF approach is somewhat similar to using `sshuttle`, but without any requirements for extra software, configuration, or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user. + +#### sshfs without ssh encryption +When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user. + +### No Firewall rules +With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
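To see this in action, the kernel routing table shows the VIF's routes while connected. A sketch on Linux, where Telepresence names the device `tel0` (the subnets shown are hypothetical, and the exact output format varies by system):

```console
$ ip route | grep tel0
10.96.0.0/12 dev tel0 scope link
10.244.0.0/16 dev tel0 scope link
```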
diff --git a/docs/telepresence/2.5/reference/volume.md b/docs/telepresence/2.5/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.5/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts + +import Alert from '@material-ui/lab/Alert'; + +Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node. + +``` +telepresence intercept <intercept-name> --port <port> --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept <intercept-name> --port <port> --mount=true -- /bin/bash +Using Deployment <deployment-name> +intercepted + Intercept name : <intercept-name> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<port> + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, the code you run locally, either from the subshell or from the intercept command, will need to prepend the `$TELEPRESENCE_ROOT` environment variable to file paths in order to utilize the mounted volumes. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
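As a concrete sketch of that path rewrite, run inside the subshell started above (the service-account token is a standard Kubernetes mount, so the path below is illustrative rather than specific to any one Pod):

```bash
# Inside `telepresence intercept ... -- /bin/bash`, the intercepted Pod's
# volumes appear under $TELEPRESENCE_ROOT rather than at their in-cluster paths.
cat "$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount/token"
```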
diff --git a/docs/telepresence/2.5/reference/vpn.md b/docs/telepresence/2.5/reference/vpn.md new file mode 100644 index 000000000..f02aafc81 --- /dev/null +++ b/docs/telepresence/2.5/reference/vpn.md @@ -0,0 +1,157 @@ + +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues with your VPN setup. This guides you through a series of steps to figure out if there are conflicts between your VPN configuration and [telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is in split-tunnel mode. This means that only traffic that _must_ pass through the VPN is directed through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. Client-side, taking the Tunnelblick client as an example, you must ensure that the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable split-tunnel mode: + +![Modify client VPN Endpoint](../../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected, then press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration?
It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +``` + +This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while telepresence is connected. + +The ideal resolution in this case is to move the pods to a different subnet. This is possible, for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you to reach hosts inside the VPC's `10.0.0.0/19`. + +However, it is not always possible to move the pods to a different subnet. In these cases, you should use the [never-proxy](../config#neverproxy) configuration to prevent certain hosts from being masked. This might be particularly important for DNS resolution. In an AWS ClientVPN setup, it is often customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case): + +Modify Client VPN Endpoint + +If this is the case for your VPN, you should place the DNS server in the never-proxy list for your cluster. In your kubeconfig file, add a `telepresence` extension like so: + +```yaml +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + never-proxy: + - 10.0.0.2/32 +``` + +##### Case 2: Cluster masked by VPN + +In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling +``` + +Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is connected. + +As with the first case, the ideal resolution is to move the pods away, but this may not always be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR mask (that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity. One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) section for more on split-tunneling). + +Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need to use never-proxy rules to whitelist hosts in the VPN: + +``` +❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
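If you do land back in Case 1, the never-proxy entries go in the same kubeconfig extension shown earlier. A sketch with hypothetical VPN hosts:

```yaml
# Hypothetical hosts reachable only over the VPN; adjust to your network.
never-proxy:
  - 10.0.0.2/32   # the VPN's DNS server
  - 10.0.5.10/32  # an internal tool behind the VPN
```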
diff --git a/docs/telepresence/2.5/release-notes/no-ssh.png b/docs/telepresence/2.5/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.5/release-notes/run-tp-in-docker.png b/docs/telepresence/2.5/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.2.png b/docs/telepresence/2.5/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.5/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.5/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.5/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.5/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.5/release-notes/telepresence-2.5.0-pro-daemon.png 
b/docs/telepresence/2.5/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.5/release-notes/tunnel.jpg b/docs/telepresence/2.5/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.5/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.5/releaseNotes.yml b/docs/telepresence/2.5/releaseNotes.yml new file mode 100644 index 000000000..9063edea1 --- /dev/null +++ b/docs/telepresence/2.5/releaseNotes.yml @@ -0,0 +1,1275 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. + +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.5.8 + date: "2022-04-27" + notes: + - type: bugfix + title: Folder creation on `telepresence login` + body: >- + Fixed a bug where the telepresence config folder would not be created if the user ran `telepresence login` before other commands. + - version: 2.5.7 + date: "2022-04-25" + notes: + - type: change + title: RBAC requirements + body: >- + A namespaced traffic-manager will no longer require cluster wide RBAC. Only Roles and RoleBindings are now used. + - type: bugfix + title: Windows DNS + body: >- + The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times. + - type: bugfix + title: Session TTL and Reconnect + body: >- + A telepresence session will now last for 24 hours after the user's last connectivity. 
If a session expires, the connector will automatically try to reconnect. + - version: 2.5.6 + date: "2022-04-18" + notes: + - type: change + title: Fewer Watchers + body: >- + The Telepresence agents watcher will now only watch namespaces that the user has accessed since the last `connect`. + - type: bugfix + title: More Efficient `gather-logs` + body: >- + The `gather-logs` command will no longer send any logs through `gRPC`. + - version: 2.5.5 + date: "2022-04-08" + notes: + - type: change + title: Traffic Manager Permissions + body: >- + The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions. + - type: bugfix + title: Linux DNS Cache + body: >- + The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes. + - type: bugfix + title: Automatic Connect Sync + body: >- + The `telepresence list` command will produce a correct listing even when not preceded by a `telepresence connect`. + - type: bugfix + title: Disconnect Reconnect Stability + body: >- + The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect. + - type: bugfix + title: Limit Watched Namespaces + body: >- + The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag. + - type: bugfix + title: Limit Namespaces used in `gather-logs` + body: >- + The `gather-logs` command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag. + - version: 2.5.4 + date: "2022-03-29" + notes: + - type: bugfix + title: Linux DNS Concurrency + body: >- + The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out. + - type: bugfix + title: Non-Functional Flag + body: >- + The ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag. + - type: bugfix + title: Automatically Remove Failed Intercepts + body: >- + Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around. + - type: bugfix + title: Agent UID + body: >- + The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext. + - type: bugfix + title: Gather-Logs Output Filepath + body: >- + Removed a bad concatenation that corrupted the output path of `telepresence gather-logs`. + - type: change + title: Remove Unnecessary Error Advice + body: >- + The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command. + - type: bugfix + title: Garbage Collection + body: >- + Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+ - type: bugfix + title: Limit Gathered Logs + body: >- + The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize. + - type: change + title: In-Cluster Checks + body: >- + The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster. + - type: change + title: Expanded Status Command + body: >- + The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON. + - type: change + title: List Command Shows All Intercepts + body: >- + The list command, when used with the `--intercepts` flag, will list the user's intercepts from all namespaces. + - version: 2.5.3 + date: "2022-02-25" + notes: + - type: bugfix + title: TCP Connectivity + body: >- + Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address. + - type: feature + title: Linux Binaries + body: >- + Client-side binaries for the arm64 architecture are now available for Linux. + - version: 2.5.2 + date: "2022-02-23" + notes: + - type: bugfix + title: DNS server bugfix + body: >- + Fixed a bug where Telepresence would use the last server in resolv.conf. + - version: 2.5.1 + date: "2022-02-19" + notes: + - type: bugfix + title: Fix GKE auth issue + body: >- + Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp". + - version: 2.5.0 + date: "2022-02-18" + notes: + - type: feature + title: Intercept specific endpoints + body: >- + The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in addition to the --http-match flag to filter personal intercepts by the request URL path. + docs: concepts/intercepts#intercepting-a-specific-endpoint + - type: feature + title: Intercept metadata + body: >- + The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST API endpoint /intercept-info. + docs: reference/restapi#intercept-info + - type: change + title: --http-match renamed to --http-header + body: >- + The `telepresence intercept` command line flag --http-match was renamed to --http-header. The old flag still works, but it is deprecated and doesn't show up in the help. + docs: concepts/intercepts#creating-and-using-personal-intercepts + - type: change + title: Client RBAC watch + body: >- + The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC ClusterRole. + docs: reference/rbac + - type: change + title: Dropped backward compatibility with versions <=2.4.4 + body: >- + Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel functionality was removed. + - type: change + title: No global networking flags + body: >- + The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the command. The subcommands that support networking flags are connect, current-cluster-id, and genyaml. + - type: bugfix + title: Output of status command + body: >- + The also-proxy and never-proxy subnets are now displayed correctly when using the telepresence status command. + - type: bugfix + title: SETENV sudo privilege no longer needed + body: >- + Telepresence no longer requires SETENV privileges when starting the root daemon.
+ - type: bugfix + title: Network device names containing dash + body: >- + Telepresence will now parse device names containing dashes correctly when determining routes that it should never block. + - type: bugfix + title: Linux uses cluster.local as domain instead of search + body: >- + The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using systemd-resolved. Instead, it is added as a domain so that names ending with it are routed to the DNS server. + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using the intercept.appProtocolStrategy in the config.yml file. + docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy for selecting the application protocol for personal intercepts in agents injected by the mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for doing unit and integration testing. It is now easier for contributors to run tests on PRs since maintainers can add an "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + The user will not be asked to log in or add ingress information when creating an intercept until a check has been made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid: "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes is now properly logged as errors. + - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil telepresenceAPI value.
+ docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. See the VPN docs for more information on how to use it. + docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to help determine whether messages with a given set of headers should be consumed from a message queue where the intercept headers are added to the messages. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value was removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections. + body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled connections now behave more like TCP connections, especially when it comes to timeouts. We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts. + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags. + docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, analogous to also-proxy, that defines a set of subnets that will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches. + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as `--swap-deployment` can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes. + - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the cluster runs in a docker-container on the client).
+ docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new traffic-agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login will prompt the user to log in again to get a new token. Previously, the user had to telepresence quit and telepresence logout to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the subnet generated from pods/services, Telepresence would proxy traffic to the API server, which would result in hanging or a failed connection. We now ensure that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the pod yaml manifest for all Kubernetes components you are getting logs for (traffic-manager and/or pods containing a traffic-agent container). This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will anonymize your pod names + namespaces in the output file. We replace the sensitive names with simple names (e.g. pod-1, namespace-2) to maintain relationships between the objects without exposing the real names of your objects. This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+ docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a headless service on whatever port it exposes and get a response from the intercept. This leverages the same approach as intercepting numeric ports when using the mutating webhook injector, and mainly requires the initContainer to have NET_ADMIN capabilities. + docs: reference/intercepts/#intercepting-headless-services + + - type: change + title: Use one tunnel per connection instead of multiplexing into one tunnel + body: >- + We have changed Telepresence so that it uses one tunnel per connection instead of multiplexing all connections into one tunnel. This will provide substantial performance improvements. Clients will still be backwards compatible with older managers that only support multiplexing. + + - type: bugfix + title: Added checks for Telepresence Kubernetes compatibility + body: >- + Telepresence currently works with Kubernetes server versions 1.17.0 and higher. We have added logs in the connector and traffic-manager to let users know when they are using Telepresence with a cluster it doesn't support. + docs: reference/cluster-config + + - type: bugfix + title: Traffic Agent security context is now only added when necessary + body: >- + When creating an intercept, Telepresence will now only set the traffic agent's GID when strictly necessary (i.e. when using headless services or numeric ports). This mitigates an issue on OpenShift clusters where the traffic agent can fail to be created due to OpenShift's security policies banning arbitrary GIDs. + + - version: 2.4.4 + date: "2021-09-27" + notes: + - type: feature + title: Numeric ports in agent injector + body: >- + The agent injector now supports injecting Traffic Agents into pods that have unnamed ports. + docs: reference/cluster-config/#note-on-numeric-ports + + - type: feature + title: New subcommand to gather logs and export into zip file + body: >- + Telepresence has logs for various components (the traffic-manager, traffic-agents, the root and user daemons), which are integral for understanding and debugging Telepresence behavior. We have added the telepresence gather-logs command to make it simple to compile logs for all Telepresence components and export them in a zip file that can be shared with others and/or included in a GitHub issue. For more information on usage, run telepresence gather-logs --help. + docs: reference/client + image: telepresence-2.4.4-gather-logs.png + + - type: feature + title: Pod CIDR strategy is configurable in Helm chart + body: >- + Telepresence now enables you to directly configure how to get pod CIDRs when deploying Telepresence with the Helm chart. The default behavior remains the same. We've also introduced the ability to explicitly set what the pod CIDRs should be. + docs: install/helm + + - type: bugfix + title: Compute pod CIDRs more efficiently + body: >- + When computing subnets using the pod CIDRs, the traffic-manager now uses fewer CPU cycles. + docs: reference/routing/#subnets + + - type: bugfix + title: Prevent busy loop in traffic-manager + body: >- + In some circumstances, the traffic-manager's CPU would max out and get pinned at its limit. This required a shutdown or pod restart to fix. We've added some fixes to prevent the traffic-manager from getting into this state.
+ + - type: bugfix + title: Added a fixed buffer size to TUN-device + body: >- + The TUN-device now has a max buffer size of 64K. This prevents the buffer from growing limitlessly until it receives a PSH, which could be a blocking operation when receiving lots of TCP-packets. + docs: reference/tun-device + + - type: bugfix + title: Fix hanging user daemon + body: >- + When Telepresence encountered an issue connecting to the cluster or the root daemon, it could hang indefinitely. It will now error correctly when it encounters that situation. + + - type: bugfix + title: Improved proprietary agent connectivity + body: >- + To determine whether the cluster's environment is air-gapped, the proprietary agent attempts to connect to the cloud during startup. To deal with a possible initial failure, the agent backs off and retries the connection with an increasing backoff duration. + + - type: bugfix + title: Telepresence correctly reports intercept port conflict + body: >- + When creating a second intercept targeting the same local port, it now gives the user an informative error message. Additionally, it tells them which intercept is currently using that port to make it easier to remedy. + + - version: 2.4.3 + date: "2021-09-15" + notes: + - type: feature + title: Environment variable TELEPRESENCE_INTERCEPT_ID available in the interceptor's environment + body: >- + When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment variable in the environment. + docs: reference/environment/#telepresence-environment-variables + + - type: bugfix + title: Improved daemon stability + body: >- + Fixed a timing bug that sometimes caused a "daemon did not start" failure. + + - type: bugfix + title: Complete logs for Windows + body: >- + Crash stack traces and other errors were incorrectly not written to log files. This has been fixed, so logs for Windows should be at parity with the ones on MacOS and Linux. + + - type: bugfix + title: Log rotation fix for Linux kernel 4.11+ + body: >- + On Linux kernel 4.11 and above, the log file rotation now properly reads the birth-time of the log file. Older kernels continue to use the old behavior of using the change-time in place of the birth-time. + + - type: bugfix + title: Improved error messaging + body: >- + When Telepresence encounters an error, it tells the user where they should look for logs related to the error. We have refined this so that it only tells users to look for errors in the daemon logs for issues that are logged there. + + - type: bugfix + title: Stop resolving localhost + body: >- + When using the overriding DNS resolver, it will no longer apply search paths when resolving localhost, since that should be resolved on the user's machine instead of the cluster. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Variable cluster domain + body: >- + Previously, the cluster domain was hardcoded to cluster.local. While this is true for many Kubernetes clusters, it is not for all of them. Now this value is retrieved from the traffic-manager. + + - type: bugfix + title: Improved cleanup of traffic-agents + body: >- + Telepresence now uninstalls traffic-agents installed via mutating webhook when using telepresence uninstall --everything. + + - type: bugfix + title: More large file transfer fixes + body: >- + Downloading large files during an intercept will no longer cause timeouts and hanging traffic-agents.
+ + - type: bugfix + title: Setting --mount to false when intercepting works as expected + body: >- + When using --mount=false while performing an intercept, the file system was still mounted. This has been remedied so the intercept behavior respects the flag. + docs: reference/volume + + - type: bugfix + title: Traffic-manager establishes outbound connections in parallel + body: >- + Previously, the traffic-manager established outbound connections sequentially. This meant that slow (and failing) Dial calls would block all outbound traffic from the workstation (for up to 30 seconds). We now establish these connections in parallel so that this no longer occurs. + docs: reference/routing/#outbound + + - type: bugfix + title: Status command reports correct DNS settings + body: >- + Telepresence status now correctly reports DNS settings for all operating systems, instead of Local IP:nil, Remote IP:nil when they don't exist. + + - version: 2.4.2 + date: "2021-09-01" + notes: + - type: feature + title: New subcommand to temporarily change log-level + body: >- + We have added a new telepresence loglevel subcommand that enables users to temporarily change the log-level for the local daemons, the traffic-manager, and the traffic-agents. While the logLevels settings from the config will still be used by default, this can be helpful if you are currently experiencing an issue and want to have higher fidelity logs, without doing a telepresence quit and telepresence connect. You can use telepresence loglevel --help to get more information on options for the command. + docs: reference/config + + - type: change + title: All components have info as the default log-level + body: >- + We've now set the default for all components of Telepresence (traffic-agent, traffic-manager, local daemons) to use info as the default log-level. + + - type: bugfix + title: Updating RBAC in helm chart to fix cluster-id regression + body: >- + In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID of the default namespace. The helm chart was not updated to give the traffic-manager those permissions, which has since been fixed. This impacted users who use licensed features of the Telepresence extension in an air-gapped environment. + docs: reference/cluster-config/#air-gapped-cluster + + - type: bugfix + title: Timeouts for Helm actions are now respected + body: >- + The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang indefinitely when failing to install the traffic-manager. + docs: reference/config#timeouts + + - version: 2.4.1 + date: "2021-08-30" + notes: + - type: feature + title: External cloud variables are now configurable + body: >- + We now support configuring the host and port for the cloud in your config.yml. These are used when logging in to utilize features provided by an extension, and are also passed along as environment variables when installing the `traffic-manager`. Additionally, we now run our testsuite with these variables set to localhost to continue to ensure Telepresence is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT environment variables are no longer used. + image: telepresence-2.4.1-systema-vars.png + docs: reference/config/#cloud + + - type: feature + title: Helm chart can now regenerate the certificate used for the mutating webhook on-demand.
+ body: >- + You can now set agentInjector.certificate.regenerate when deploying Telepresence with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. + docs: install/helm + + - type: change + title: Traffic Manager installed via helm + body: >- + The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. This change is transparent to the user. A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary. + docs: reference/config#timeouts + + - type: change + title: traffic-manager gets cluster ID itself instead of via environment variable + body: >- + The traffic-manager used to get the cluster ID as an environment variable when running telepresence connect or via adding the value in the helm chart. This was clunky, so now the traffic-manager gets the value itself, as long as it has permissions to "get" and "list" namespaces (this has been updated in the helm chart). + docs: install/helm + + - type: bugfix + title: Telepresence now mounts all directories from /var/run/secrets + body: >- + In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. We now mount *all* directories in /var/run/secrets, which, for example, includes directories like eks.amazonaws.com used for IRSA tokens. + docs: reference/volume + + - type: bugfix + title: Max gRPC receive size correctly propagates to all grpc servers + body: >- + This fixes a bug where the max gRPC receive size was only propagated to some of the grpc servers, causing failures when the message size was over the default. + docs: reference/config/#grpc + + - type: bugfix + title: Updated our Homebrew packaging to run manually + body: >- + We made some updates to our script that packages Telepresence for Homebrew so that it can be run manually. This will enable maintainers of Telepresence to run the script manually should we ever need to roll back a release and have latest point to an older version. + docs: install/ + + - type: bugfix + title: Telepresence uses namespace from kubeconfig context on each call + body: >- + In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change. Telepresence now will do that and use the namespace designated by the context on each call. + + - type: bugfix + title: Idle outbound TCP connections timeout increased to 7200 seconds + body: >- + Some users were noticing that their intercepts would start failing after 60 seconds. This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have now bumped to 7200 seconds to match Linux's tcp_keepalive_time default. + + - type: bugfix + title: Telepresence will automatically remove a socket upon ungraceful termination + body: >- + When a Telepresence process terminated ungracefully, it would inform users that "this usually means that the process has terminated ungracefully" and imply that they should remove the socket. We've now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+      - type: bugfix
+        title: Fixed user daemon deadlock
+        body: >-
+          Remedied a situation where the user daemon could hang when a user was logged in.
+
+      - type: bugfix
+        title: Fixed agentImage config setting
+        body: >-
+          The config setting images.agentImage is no longer required to contain the repository, and it
+          will use the value at images.repository.
+        docs: reference/config/#images
+
+  - version: 2.4.0
+    date: "2021-08-04"
+    notes:
+      - type: feature
+        title: Windows Client Developer Preview
+        body: >-
+          There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+          All the same features supported by the macOS and Linux clients are available on Windows.
+        image: telepresence-2.4.0-windows.png
+        docs: install
+
+      - type: feature
+        title: CLI raises helpful messages from Ambassador Cloud
+        body: >-
+          Telepresence can now receive messages from Ambassador Cloud and raise
+          them to the user when they perform certain commands. This enables us
+          to send you messages that may enhance your Telepresence experience when
+          using certain commands. Frequency of messages can be configured in your
+          config.yml.
+        image: telepresence-2.4.0-cloud-messages.png
+        docs: reference/config#cloud
+
+      - type: bugfix
+        title: Improved stability of systemd-resolved-based DNS
+        body: >-
+          When initializing the systemd-resolved-based DNS, the routing domain
+          is set to improve stability in non-standard configurations. This also enables the
+          overriding resolver to do a proper takeover once the DNS service ends.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Fixed an edge case when intercepting a container with multiple ports
+        body: >-
+          When specifying a port of a container to intercept, if there was a container in the
+          pod without ports, it was automatically selected. This has been fixed so we'll only
+          choose the container with "no ports" if there's no container that explicitly matches
+          the port used in your intercept.
+        docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+      - type: bugfix
+        title: $(NAME) references in the agent's environment are now interpolated correctly
+        body: >-
+          If an environment variable in your workload used a $(NAME) reference to another
+          variable, intercepts would not interpolate $(NAME) correctly. This has been fixed
+          and works automatically.
+
+      - type: bugfix
+        title: Telepresence no longer prints INFO message when there is no config.yml
+        body: >-
+          Fixed a regression that printed an INFO message to the terminal when there wasn't a
+          config.yml present. The config is optional, so this message has been
+          removed.
+        docs: reference/config
+
+      - type: bugfix
+        title: Telepresence no longer panics when using --http-match
+        body: >-
+          Fixed a bug where Telepresence would panic if the value passed to --http-match
+          didn't contain an equal sign. The correct syntax is shown in the --help
+          string and looks like --http-match=HTTP2_HEADER=REGEX.
+        docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+      - type: bugfix
+        title: Improved subnet updates
+        body: >-
+          The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if
+          the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+          client and the `traffic-manager`. This has been fixed so we only send updates when the subnets
+          themselves actually change.
+        docs: reference/routing/#subnets
+
+  - version: 2.3.7
+    date: "2021-07-23"
+    notes:
+      - type: feature
+        title: Also-proxy in telepresence status
+        body: >-
+          An also-proxy entry in the Kubernetes cluster config will
+          show up in the output of the telepresence status command.
+        docs: reference/config
+
+      - type: feature
+        title: Non-interactive telepresence login
+        body: >-
+          telepresence login now has an
+          --apikey=KEY flag that allows for
+          non-interactive logins. This is useful for headless
+          environments where launching a web-browser is impossible,
+          such as cloud shells, Docker containers, or CI.
+        image: telepresence-2.3.7-newkey.png
+        docs: reference/client/login/
+
+      - type: bugfix
+        title: Mutating webhook injector correctly renames named ports for probes
+        body: >-
+          The mutating webhook injector has been fixed to correctly rename named ports
+          for liveness and readiness probes.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: telepresence current-cluster-id crash fixed
+        body: >-
+          Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id`
+          to crash.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: Better UX around intercepts with no local process running
+        body: >-
+          Requests would hang indefinitely when initiating an intercept before you
+          had a local process running. This has been fixed and will result in an
+          Empty reply from server until you start a local process.
+        docs: reference/intercepts
+
+      - type: bugfix
+        title: API keys no longer show as "no description"
+        body: >-
+          New API keys generated internally for communication with
+          Ambassador Cloud no longer show up as "no description" in
+          the Ambassador Cloud web UI. Existing API keys generated by
+          older versions of Telepresence will still show up this way.
+        image: telepresence-2.3.7-keydesc.png
+
+      - type: bugfix
+        title: Fix corruption of user-info.json
+        body: >-
+          Fixed a race condition in which logging in and out rapidly could
+          corrupt memory or the user-info.json cache file used when
+          authenticating with Ambassador Cloud.
+
+      - type: bugfix
+        title: Improved DNS resolver for systemd-resolved
+        body: >-
+          Telepresence's systemd-resolved-based DNS resolver is now more stable,
+          and if it fails to initialize, the overriding resolver that Telepresence
+          falls back to will no longer cause general DNS lookup failures.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Faster telepresence list command
+        body: >-
+          The performance of telepresence list has been improved
+          significantly by reducing the number of calls the command makes to the cluster.
+        docs: reference/client
+
+  - version: 2.3.6
+    date: "2021-07-20"
+    notes:
+      - type: bugfix
+        title: Fix preview URLs
+        body: >-
+          Fixed a regression introduced in 2.3.5 that caused preview
+          URLs to not work.
+
+      - type: bugfix
+        title: Fix subnet discovery
+        body: >-
+          Fixed a regression introduced in 2.3.5 where the Traffic
+          Manager's RoleBinding did not correctly bind to
+          the traffic-manager Role, preventing
+          subnet discovery from working correctly.
+        docs: reference/rbac/
+
+      - type: bugfix
+        title: Fix root-user configuration loading
+        body: >-
+          Fixed a regression introduced in 2.3.5 where the root daemon
+          did not correctly read the configuration file, ignoring the
+          user's configured log levels and timeouts.
+        docs: reference/config/
+
+      - type: bugfix
+        title: Fix a user daemon crash
+        body: >-
+          Fixed an issue that could cause the user daemon to crash during
+          shutdown, because it unconditionally attempted to close a channel
+          that might already be closed.
+
+  - version: 2.3.5
+    date: "2021-07-15"
+    notes:
+      - type: feature
+        title: traffic-manager in multiple namespaces
+        body: >-
+          We now support installing multiple traffic managers in the same cluster.
+          This will allow operators to install deployments of Telepresence that are
+          limited to certain namespaces.
+        image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+        docs: install/helm
+      - type: feature
+        title: No more dependence on kubectl
+        body: >-
+          Telepresence no longer depends on having an external
+          kubectl binary, which might not be present for
+          OpenShift users (who have oc instead of
+          kubectl).
+      - type: feature
+        title: Agent image now configurable
+        body: >-
+          We now support configuring which agent image + registry to use in the
+          config. This enables users whose laptops are in air-gapped environments to
+          create personal intercepts without requiring a login. It also makes it easier
+          for those who are developing on Telepresence to specify which agent image should
+          be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+          used.
+        image: ./telepresence-2.3.5-agent-config.png
+        docs: reference/config/#images
+      - type: feature
+        title: Max gRPC receive size now configurable
+        body: >-
+          The default max size of messages received through gRPC (4 MB) is sometimes
+          insufficient. It can now be configured.
+        image: ./telepresence-2.3.5-grpc-max-receive-size.png
+        docs: reference/config/#grpc
+      - type: feature
+        title: CLI can be used in air-gapped environments
+        body: >-
+          While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+          we've added an option users can add to their config.yml to ensure the CLI acts like it
+          is in an air-gapped environment. Air-gapped environments require a manually installed
+          license.
+        docs: reference/cluster-config/#air-gapped-cluster
+        image: ./telepresence-2.3.5-skipLogin.png
+  - version: 2.3.4
+    date: "2021-07-09"
+    notes:
+      - type: bugfix
+        title: Improved IP log statements
+        body: >-
+          Some log statements printed incorrect characters where IP addresses should have
+          appeared. This has been resolved to produce more accurate and useful logging.
+        docs: reference/config/#log-levels
+        image: ./telepresence-2.3.4-ip-error.png
+      - type: bugfix
+        title: Improved messaging when multiple services match a workload
+        body: >-
+          If multiple services matched a workload when performing an intercept, Telepresence would crash.
+          It now gives the correct error message, instructing the user on how to specify which
+          service the intercept should use.
+        image: ./telepresence-2.3.4-improved-error.png
+        docs: reference/intercepts
+      - type: bugfix
+        title: Traffic-manager creates services in its own namespace to determine subnet
+        body: >-
+          Telepresence will now determine the service subnet by creating a dummy service in its own
+          namespace, instead of the default namespace, which was causing RBAC permissions issues in
+          some clusters.
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Telepresence connect respects pre-existing clusterrole
+        body: >-
+          When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+          cluster, Telepresence will no longer try to update the clusterrole.
+        docs: reference/rbac
+      - type: bugfix
+        title: Helm Chart fixed for clientRbac.namespaced
+        body: >-
+          The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+        docs: install/helm
+  - version: 2.3.3
+    date: "2021-07-07"
+    notes:
+      - type: feature
+        title: Traffic Manager Helm Chart
+        body: >-
+          Telepresence now supports installing the Traffic Manager via Helm.
+          This will make it easy for operators to install and configure the
+          server-side components of Telepresence separately from the CLI (which
+          in turn allows for better separation of permissions).
+        image: ./telepresence-2.3.3-helm.png
+        docs: install/helm/
+      - type: feature
+        title: Traffic-manager in custom namespace
+        body: >-
+          As the traffic-manager can now be installed in any
+          namespace via Helm, Telepresence can now be configured to look for the
+          Traffic Manager in a namespace other than ambassador.
+          This can be configured on a per-cluster basis.
+        image: ./telepresence-2.3.3-namespace-config.png
+        docs: reference/config
+      - type: feature
+        title: Intercept --to-pod
+        body: >-
+          telepresence intercept now supports a
+          --to-pod flag that can be used to port-forward sidecars'
+          ports from an intercepted pod.
+        image: ./telepresence-2.3.3-to-pod.png
+        docs: reference/intercepts
+      - type: change
+        title: Change in migration from edgectl
+        body: >-
+          Telepresence no longer automatically shuts down the old
+          api_version=1 edgectl daemon. If migrating
+          from such an old version of edgectl you must now manually
+          shut down the edgectl daemon before running Telepresence.
+          This was already the case when migrating from the newer
+          api_version=2 edgectl.
+      - type: bugfix
+        title: Fixed error during shutdown
+        body: >-
+          The root daemon no longer terminates when the user daemon disconnects
+          from its gRPC streams, and instead waits to be terminated by the CLI.
+          The old behavior could cause problems with things not being cleaned up correctly.
+      - type: bugfix
+        title: Intercepts will survive deletion of intercepted pod
+        body: >-
+          An intercept will survive deletion of the intercepted pod provided
+          that another pod is created (or already exists) that can take over.
+  - version: 2.3.2
+    date: "2021-06-18"
+    notes:
+      # Headliners
+      - type: feature
+        title: Service Port Annotation
+        body: >-
+          The mutator webhook for injecting traffic-agents now
+          recognizes a
+          telepresence.getambassador.io/inject-service-port
+          annotation to specify which port to intercept; bringing the
+          functionality of the --port flag to users who
+          use the mutator webhook in order to control Telepresence via
+          GitOps.
+        image: ./telepresence-2.3.2-svcport-annotation.png
+        docs: reference/cluster-config#service-port-annotation
+      - type: feature
+        title: Outbound Connections
+        body: >-
+          Outbound connections are now routed through the intercepted
+          Pods, which means that, from the cluster's perspective, the
+          connections originate from those Pods. This allows service
+          meshes to correctly identify the traffic.
+        docs: reference/routing/#outbound
+      - type: change
+        title: Inbound Connections
+        body: >-
+          Inbound connections from an intercepted agent are now
+          tunneled to the manager over the existing gRPC connection,
+          instead of establishing a new connection to the manager for
+          each inbound connection. This avoids interference from
+          certain service mesh configurations.
+        docs: reference/routing/#inbound
+
+      # RBAC changes
+      - type: change
+        title: Traffic Manager needs new RBAC permissions
+        body: >-
+          The Traffic Manager requires RBAC permissions to list Nodes
+          and Pods, and to create a dummy Service in the manager's
+          namespace.
+        docs: reference/routing/#subnets
+      - type: change
+        title: Reduced developer RBAC requirements
+        body: >-
+          The on-laptop client no longer requires RBAC permissions to list the Nodes
+          in the cluster or to create Services, as that functionality
+          has been moved to the Traffic Manager.
+
+      # Bugfixes
+      - type: bugfix
+        title: Able to detect subnets
+        body: >-
+          Telepresence will now detect the Pod CIDR ranges even if
+          they are not listed in the Nodes.
+        image: ./telepresence-2.3.2-subnets.png
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Dynamic IP ranges
+        body: >-
+          The list of cluster subnets that the virtual network
+          interface will route is now configured dynamically and will
+          follow changes in the cluster.
+      - type: bugfix
+        title: No duplicate subnets
+        body: >-
+          Subnets fully covered by other subnets are now pruned
+          internally and thus never superfluously added to the
+          laptop's routing table.
+        docs: reference/routing/#subnets
+      - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+        title: Change in default timeout
+        body: >-
+          The trafficManagerAPI timeout default has
+          changed from 5 seconds to 15 seconds, in order to accommodate
+          the extended time it takes for the traffic-manager to do its
+          initial discovery of cluster info as a result of the above
+          bugfixes.
+      - type: bugfix
+        title: Removal of DNS config files on macOS
+        body: >-
+          On macOS, files generated under
+          /etc/resolver/ as the result of using
+          include-suffixes in the cluster config are now
+          properly removed on quit.
+        docs: reference/routing/#macos-resolver
+
+      - type: bugfix
+        title: Large file transfers
+        body: >-
+          Telepresence no longer erroneously terminates connections
+          early when sending a large HTTP response from an intercepted
+          service.
+      - type: bugfix
+        title: Race condition in shutdown
+        body: >-
+          When shutting down the user-daemon or root-daemon on the
+          laptop, telepresence quit and related commands
+          no longer return early before everything is fully shut down.
+          Now, by the time the command returns, all of the side effects
+          on the laptop are guaranteed to have been cleaned up.
+  - version: 2.3.1
+    date: "2021-06-14"
+    notes:
+      - title: DNS Resolver Configuration
+        body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included. These can be configured on a per-cluster basis."
+        image: ./telepresence-2.3.1-dns.png
+        docs: reference/config
+        type: feature
+      - title: AlsoProxy Configuration
+        body: "Telepresence now supports also proxying user-specified subnets so that, while connected to Telepresence, they can access external services that are only accessible from the cluster. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+        image: ./telepresence-2.3.1-alsoProxy.png
+        docs: reference/config
+        type: feature
+      - title: Mutating Webhook for Injecting Traffic Agents
+        body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+        image: ./telepresence-2.3.1-inject.png
+        docs: reference/rbac
+        type: feature
+      - title: Traffic Manager Connect Timeout
+        body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook."
+        image: ./telepresence-2.3.1-trafficmanagerconnect.png
+        docs: reference/config
+        type: change
+      - title: Fix for large file transfers
+        body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+        image: ./telepresence-2.3.1-large-file-transfer.png
+        docs: reference/tun-device
+        type: bugfix
+      - title: Brew Formula Changed
+        body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+        image: ./telepresence-2.3.1-brew.png
+        docs: install/
+        type: change
+  - version: 2.3.0
+    date: "2021-06-01"
+    notes:
+      - title: Brew install Telepresence
+        body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+        image: ./telepresence-2.3.0-homebrew.png
+        docs: install/
+        type: feature
+      - title: TCP and UDP routing via Virtual Network Interface
+        body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP."
+        image: ./tunnel.jpg
+        docs: reference/tun-device
+        type: feature
+      - title: SSH is no longer used
+        body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+        image: ./no-ssh.png
+        docs: reference/tun-device/#no-ssh-required
+        type: change
+      - title: Running in a Docker container
+        body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+        image: ./run-tp-in-docker.png
+        docs: reference/inside-container
+        type: feature
+      - title: Configurable Log Levels
+        body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+        image: ./telepresence-2.3.0-loglevels.png
+        docs: reference/config/#log-levels
+        type: feature
+  - version: 2.2.2
+    date: "2021-05-17"
+    notes:
+      - title: Legacy Telepresence subcommands
+        body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+        image: ./telepresence-2.2.png
+        docs: install/migrate-from-legacy/
+        type: feature
diff --git a/docs/telepresence/2.5/troubleshooting/index.md b/docs/telepresence/2.5/troubleshooting/index.md
new file mode 100644
index 000000000..e4250aa2c
--- /dev/null
+++ b/docs/telepresence/2.5/troubleshooting/index.md
@@ -0,0 +1,106 @@
+---
+description: "Troubleshooting issues related to Telepresence."
+---
+# Troubleshooting
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn how to fix these.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app. If an organization isn't listed during login
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button,
+which sends an email to the owner.
If an access request has been denied in the past, the user will not
+see the **Request** button; they will have to reach out to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs already has access or to authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+## Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+![Authorize Ambassador labs form](../images/github-login.png)
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and is, hence, no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+However, one more thing must be done before it works correctly:
+5. Try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer.
+
+## Authorization for Preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
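+
+For a quick check from a terminal, you can attach such a header yourself with curl. This is only a sketch: `x-your-auth-header` and the URL are hypothetical placeholders for whatever header your service expects and for your actual preview URL.
+
+```
+# Hypothetical header name and token; substitute the values your service expects.
+curl -H "x-your-auth-header: <token>" https://<your-preview-url>.preview.edgestack.me/
+```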
+ +You can accomplish this by using a browser extension such as the `ModHeader extension` for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj) +or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/). + +It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work. diff --git a/docs/telepresence/2.5/tutorial.md b/docs/telepresence/2.5/tutorial.md new file mode 100644 index 000000000..7a4af5365 --- /dev/null +++ b/docs/telepresence/2.5/tutorial.md @@ -0,0 +1,231 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Telepresence Quick Start + +In this guide you will explore some of the key features of Telepresence. First, you will install the Telepresence CLI and set up a test cluster with a demo web app. Then, you will run one of the app's services on your laptop, using Telepresence to intercept requests to the service on the cluster and see your changes live via a preview URL. + +## Prerequisites + +It is recommended to use an empty development cluster for this guide. You must have access via RBAC to create and update deployments and services in the cluster. You must also have [Node.js installed](https://nodejs.org/en/download/package-manager/) on your laptop to run the demo app code. + +Finally, you will need the Telepresence CLI. Run the commands for +your OS to install it and log in to Ambassador Cloud in your browser. +Follow the prompts to log in with GitHub then select your +organization. You will be redirected to the Ambassador Cloud +dashboard; later you will manage your preview URLs here. + +### macOS + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence \ +-o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# 3. Login with the CLI: +telepresence login + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/latest/telepresence \ +-o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# 3. Login with the CLI: +telepresence login +``` + +If you receive an error saying the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence login command. + +If you are in an environment where Telepresence cannot launch a local +browser for you to interact with, you will need to pass the +[`--apikey` flag to `telepresence +login`](../reference/client/login/). + +### Linux + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence \ +-o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# 3. 
Login with the CLI: +telepresence login +``` + +If you are in an environment where Telepresence cannot launch a local +browser for you to interact with, you will need to pass the +[`--apikey` flag to `telepresence +login`](../reference/client/login/). + +## Cluster Setup + +1. You will use a sample Java app for this guide. Later, after deploying the app into your cluster, we will review its architecture. Start by cloning the repo: + + ``` + git clone https://github.com/datawire/amb-code-quickstart-app.git + ``` + +2. Install [Edge Stack](../../../../../../products/edge-stack/) to use as an ingress controller for your cluster. We need an ingress controller to allow access to the web app from the internet. + + Change into the repo directory, then into `k8s-config`, and apply the YAML files to deploy Edge Stack. + + ``` + cd amb-code-quickstart-app/k8s-config + kubectl apply -f 1-aes-crds.yml && kubectl wait --for condition=established --timeout=90s crd -lproduct=aes + kubectl apply -f 2-aes.yml && kubectl wait -n ambassador deploy -lproduct=aes --for condition=available --timeout=90s + ``` + +3. Install the web app by applying its manifest: + + ``` + kubectl apply -f edgy-corp-web-app.yaml + ``` + +4. Wait a few moments for the external load balancer to become available, then retrieve its IP address: + + ``` + kubectl get service -n ambassador ambassador -o jsonpath='{.status.loadBalancer.ingress[0].ip}' + ``` + + + + + + +
  1. Wait until all the pods start, then access the Edgy Corp web app in your browser at http://<load-balancer-ip>/. Be sure you use http, not https!
    You should see the landing page for the web app with an architecture diagram. The web app is composed of three services, with the frontend VeryLargeJavaService dependent on the two backend services.
+
+## Developing with Telepresence
+
+Now that your app is all wired up, you're ready to start doing development work with Telepresence. Imagine you are a Java developer and first on your to-do list for the day is a change to the `DataProcessingNodeService`. One thing this service does is set the color for the title and a pod in the diagram. The production version of the app on the cluster uses green elements, but you want to see a version with these elements set to blue.
+
+The `DataProcessingNodeService` service is dependent on the `VeryLargeJavaService` and `VeryLargeDataStore` services to run. Local development would require one of the two following setups, neither of which is ideal.
+
+First, you could run the two dependent services on your laptop. However, as their names suggest, they are too large to run locally. This option also doesn't scale well: two services aren't a lot to manage, but running more complex apps with many more dependencies on your laptop is not feasible.
+
+Second, you could run everything in a development cluster. However, the cycle of writing code then waiting on containers to build and deploy is incredibly disruptive. The lengthening of the [inner dev loop](../concepts/devloop) in this way can have a significant impact on developer productivity.
+
+## Intercepting a Service
+
+Alternatively, you can use Telepresence's `intercept` command to proxy traffic bound for a service to your laptop. This will let you test and debug services against code running locally without needing to run dependent services or redeploy code updates to your cluster on every change. It will also generate a preview URL, which loads your web app from the cluster ingress but with requests to the intercepted service proxied to your laptop.
+
+1. You started this guide by installing the Telepresence CLI and
+   logging in to Ambassador Cloud. The Cloud dashboard is used to
+   manage your intercepts and share them with colleagues. You must be
+   logged in to create personal intercepts as we are going to do here.
+
+   Run telepresence dashboard if you are already logged in and just need to reopen the dashboard.
+
+2. In your terminal, run `telepresence list`. This will connect to your cluster, install the [Traffic Manager](../reference/architecture) to proxy the traffic, and return a list of services that Telepresence is able to intercept.
+
+3. Navigate up one directory to the root of the repo then into `DataProcessingNodeService`. Install the Node.js dependencies and start the app, passing the `blue` argument, which is used by the app to set the title and pod color in the diagram you saw earlier.
+
+   ```
+   cd ../DataProcessingNodeService
+   npm install
+   node app -c blue
+   ```
+
+4. In a new terminal window, start the intercept with the command below. This will proxy requests to the `DataProcessingNodeService` service to your laptop. It will also generate a preview URL, which will let you view the app with the intercepted service in your browser.
+
+   The intercept requires you to specify the name of the deployment to be intercepted and the port to proxy.
+
+   ```
+   telepresence intercept dataprocessingnodeservice --port 3000
+   ```
+
+   You will be prompted with a few options. Telepresence tries to intelligently determine the deployment and namespace of your ingress controller. Hit `enter` to accept the default value of `ambassador.ambassador` for `Ingress`.
For simplicity's sake, our app uses 80 for the port and does *not* use TLS, so use those options when prompted for the `port` and `TLS` settings. Your output should be similar to this: + + ``` + $ telepresence intercept dataprocessingnodeservice --port 3000 + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Select the ingress to use. + + 1/4: What's your ingress' layer 3 (IP) address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [no default]: verylargejavaservice.default + + 2/4: What's your ingress' layer 4 address (TCP port number)? + + [no default]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different layer 5 hostname + (TLS-SNI, HTTP "Host" header) to access this service. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + + + + + + +
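+
+   As an aside, while Telepresence is connected you can also exercise the intercept from your terminal by supplying the matching header yourself. This is only a sketch: the header value must be the one printed in the `Intercepting` section of your own output, the host and port are the ingress values you entered above, and it assumes the frontend propagates the header to the intercepted service (which is how preview URLs work).
+
+   ```
+   # The header value below is the example from the output above; use your own.
+   curl -H "x-telepresence-intercept-id: 86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice" \
+     http://verylargejavaservice.default:8080/
+   ```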
  1. Open the preview URL in your browser to see the intercepted version of the app. The Node server on your laptop replies back to the cluster with the blue option enabled; you will see a blue title and blue pod in the diagram. Remember that previously these elements were green.
    You will also see a banner at the bottom of the page informing you that you are viewing a preview URL, with your name and org name.
+ + + + + + +
  1. Switch back to the dashboard page in your browser and refresh it to see your preview URL listed. Click the box to expand options, where you can disable authentication or remove the preview.
    If there were other developers in your organization also creating preview URLs, you would see them here as well.
+
+This diagram demonstrates the flow of requests using the intercept. The laptop on the left visits the preview URL, the request is redirected to the cluster ingress, and requests to and from the `DataProcessingNodeService` by other pods are proxied to the developer laptop running Telepresence.
+
+![Intercept Architecture](./images/tp-tutorial-4.png)
+
+7. Clean up your environment by first typing `Ctrl+C` in the terminal running Node. Then stop the intercept with the `leave` command and `quit` to stop the daemon. Finally, use `uninstall --everything` to remove the Traffic Manager and Agents from your cluster.
+
+   ```
+   telepresence leave dataprocessingnodeservice
+   telepresence quit
+   telepresence uninstall --everything
+   ```
+
+8. Refresh the dashboard page again and you will see the intercept was removed after running the `leave` command. Refresh the browser tab with the preview URL and you will see that it has been disabled.
+
+## What's Next?
+
+Telepresence and preview URLs open up powerful possibilities for [collaborating](../howtos/preview-urls) with your colleagues and others outside of your organization.
+
+Learn more about how Telepresence handles [outbound sessions](../howtos/outbound), allowing locally running services to interact with cluster services without an intercept.
+
+Read the [FAQs](../faqs) to learn more about use cases and the technical implementation of Telepresence.
diff --git a/docs/telepresence/2.5/versions.yml b/docs/telepresence/2.5/versions.yml
new file mode 100644
index 000000000..a8bf8c43f
--- /dev/null
+++ b/docs/telepresence/2.5/versions.yml
@@ -0,0 +1,5 @@
+version: "2.5.8"
+dlVersion: "latest"
+docsVersion: "2.5"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.6 b/docs/telepresence/2.6
deleted file mode 120000
index 1d90e2997..000000000
--- a/docs/telepresence/2.6
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.6
\ No newline at end of file
diff --git a/docs/telepresence/2.6/ci/github-actions.md b/docs/telepresence/2.6/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.6/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic to the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence in your CI server, with the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key, set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`; the version is listed in the `labels` section of the output (see the sketch at the end of this section).
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
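+
+As a sketch of the version check mentioned above (the namespace and the exact label name vary by installation; `ambassador` is only an example):
+
+```shell
+# Inspect the traffic-manager Service and look for a version label in the output.
+kubectl describe service traffic-manager -n ambassador | grep -i version
+```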
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file, if you provide one. This configuration file should be placed in `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run GitHub Actions properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # Before intercepting, write the kubeconfig, install Telepresence,
+         # connect, and log in with your API key.
+         - name: Create kubeconfig file
+           run: |
+             cat << EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integrations test
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+The previous is an example of a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.6/community.md b/docs/telepresence/2.6/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.6/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.6/concepts/context-prop.md b/docs/telepresence/2.6/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.6/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation means that services and other middleware do not strip away the headers; propagation is what moves the injected context through the other downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications, and it is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so this use of them is a
+little unorthodox, but the mechanisms involved are already widely
+deployed because of the prevalence of tracing. The headers facilitate
+the smart routing of requests either to live services in the cluster or
+to services running locally on a developer’s machine. The intercepted
+traffic can be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.6/concepts/devloop.md b/docs/telepresence/2.6/concepts/devloop.md
new file mode 100644
index 000000000..bcb924c91
--- /dev/null
+++ b/docs/telepresence/2.6/concepts/devloop.md
@@ -0,0 +1,50 @@
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build step grows to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme, that’s roughly a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.6/concepts/devworkflow.md b/docs/telepresence/2.6/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.6/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency.
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.6/concepts/faster.md b/docs/telepresence/2.6/concepts/faster.md
new file mode 100644
index 000000000..81a6d11db
--- /dev/null
+++ b/docs/telepresence/2.6/concepts/faster.md
@@ -0,0 +1,25 @@
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while removing the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
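+
+As a concrete sketch of that loop, the basic flow looks like this (the service name `web-app`, port `8080`, and `npm start` are placeholder values for illustration, not part of any required setup):
+
+```console
+$ telepresence connect                        # proxy this laptop into the cluster
+$ telepresence intercept web-app --port 8080  # reroute the service's cluster traffic to localhost:8080
+$ npm start                                   # run the service locally; cluster traffic now reaches it
+```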
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
diff --git a/docs/telepresence/2.6/concepts/intercepts.md b/docs/telepresence/2.6/concepts/intercepts.md
new file mode 100644
index 000000000..0a2909be2
--- /dev/null
+++ b/docs/telepresence/2.6/concepts/intercepts.md
@@ -0,0 +1,208 @@
+---
+title: "Types of intercepts"
+description: "Short demonstration of personal vs global intercepts"
+---
+
+import React from 'react';
+
+import Alert from '@material-ui/lab/Alert';
+import AppBar from '@material-ui/core/AppBar';
+import Paper from '@material-ui/core/Paper';
+import Tab from '@material-ui/core/Tab';
+import TabContext from '@material-ui/lab/TabContext';
+import TabList from '@material-ui/lab/TabList';
+import TabPanel from '@material-ui/lab/TabPanel';
+import Animation from '@src/components/InterceptAnimation';
+
+export function TabsContainer({ children, ...props }) {
+  const [state, setState] = React.useState({curTab: "personal"});
+  React.useEffect(() => {
+    const query = new URLSearchParams(window.location.search);
+    var interceptType = query.get('intercept') || "personal";
+    if (state.curTab != interceptType) {
+      setState({curTab: interceptType});
+    }
+  }, [state, setState])
+  var setURL = function(newTab) {
+    history.replaceState(null,null,
+      `?intercept=${newTab}${window.location.hash}`,
+    );
+  };
+  return (
+    <div className="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar className="TabBar" elevation={0} position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab value="regular" label="No intercept"/>
+            <Tab value="global" label="Global intercept"/>
+            <Tab value="personal" label="Personal intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+
+
+# No intercept
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+
+
+# Global intercept
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas.
+To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).
+
+
+
+
+In the illustration above, **orange**
+requests are being made by Developer 2 on their laptop and the
+**green** requests are made by a teammate,
+Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-header=auto` to have it
+    choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
+intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
+flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
+and, contrary to the `--http-header` flag, it cannot be repeated.
+
+The following flags are available:
+
+| Flag                          | Meaning                                                           |
+|-------------------------------|-------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                   |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix              |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression  |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implied when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
+
+
diff --git a/docs/telepresence/2.6/doc-links.yml b/docs/telepresence/2.6/doc-links.yml
new file mode 100644
index 000000000..92015b4f4
--- /dev/null
+++ b/docs/telepresence/2.6/doc-links.yml
@@ -0,0 +1,96 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: 'Making the remote local: Faster feedback, collaboration and debugging'
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Share dev environments with preview URLs
+      link: howtos/preview-urls
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Send requests to an intercepted service
+      link: howtos/request
+- title: Telepresence for Docker
+  items:
+    - title: What is Telepresence for Docker
+      link: extension/intro
+    - title: Install into Docker-Desktop
+      link: extension/install
+    - title: Intercept into a Docker Container
+      link: extension/intercept
+- title: Telepresence for CI
+  items:
+    - title: GitHub Actions
+      link: ci/github-actions
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+      items:
+        - title: login
+          link: reference/client/login
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
diff --git a/docs/telepresence/2.6/extension/install.md b/docs/telepresence/2.6/extension/install.md
new file mode 100644
index 000000000..471752775
--- /dev/null
+++ b/docs/telepresence/2.6/extension/install.md
@@ -0,0 +1,39 @@
+---
+title: "Telepresence for Docker installation and connection guide"
+description: "Learn how to install and update Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Install and connect the Telepresence Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install Telepresence for Docker
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+3. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
+4. Click **Install**.
+
+## Connect to Ambassador Cloud through the Telepresence extension
+
+ After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window.
+
+ 3. Sign in with your Google, GitHub, or GitLab account.
+    Ambassador Cloud opens to your profile and displays the API key.
+
+ 4. Copy the API key and paste it into the API key field in the Docker Dashboard.
+
+## Connect to your cluster in Docker Desktop
+
+ 1. Select the desired cluster from the dropdown menu and click **Next**.
+    This cluster is now set to kubectl's current context.
+
+ 2. Click **Connect to [your cluster]**.
+    Your cluster is connected and you can now create [intercepts](../intercept/).
\ No newline at end of file
diff --git a/docs/telepresence/2.6/extension/intercept.md b/docs/telepresence/2.6/extension/intercept.md
new file mode 100644
index 000000000..3868407a8
--- /dev/null
+++ b/docs/telepresence/2.6/extension/intercept.md
@@ -0,0 +1,48 @@
+---
+title: "Create an intercept with Telepresence for Docker"
+description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug, "
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence ](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
+
+This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
+
+## Copy the service you want to intercept
+
+Once you have [installed and connected](../install/) the Telepresence extension, you need to copy the service. To do this, use the `docker run` command with the following flags:
+
+   ```console
+   $ docker run --rm -it --network host
+   ```
+
+The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster.
+
+## Intercept a service
+
+In Docker Desktop, the Telepresence extension shows all the services in the namespace.
+
+ 1. Choose a service to intercept and click the **Intercept** button.
+
+ 2. Select the service port for the intercept from the dropdown.
+
+ 3. Enter the target port of the service you previously copied in the Docker container.
+
+ 4. Click **Submit** to create the intercept.
+
+The intercept now shows up in the Docker Telepresence extension.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container.
+
+Click the globe icon next to your intercept to get the preview URL. From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
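+
+For example, a rebuild-and-rerun pass might look like this (the image tag `my-service:dev` is a placeholder for illustration, not a required name):
+
+   ```console
+   $ docker build -t my-service:dev .
+   $ docker run --rm -it --network host my-service:dev
+   ```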
\ No newline at end of file
diff --git a/docs/telepresence/2.6/extension/intro.md b/docs/telepresence/2.6/extension/intro.md
new file mode 100644
index 000000000..6a653ae06
--- /dev/null
+++ b/docs/telepresence/2.6/extension/intro.md
@@ -0,0 +1,29 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and help you test your code faster. Traditionally, you need to build a container within Docker with your code changes, push it, wait for it to upload, deploy the changes, verify them, view them, and repeat that process for every iteration. This makes continual testing of changes slow and cumbersome.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate entirely within the Docker runtime, you can make changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux).
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.6/faqs.md b/docs/telepresence/2.6/faqs.md
new file mode 100644
index 000000000..3c37f1cc5
--- /dev/null
+++ b/docs/telepresence/2.6/faqs.md
@@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism.
This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we consider a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means that if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
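+
+As a quick check of the DNS behavior described in the answers above (the service name `web-app` and namespace `emojivoto` are hypothetical examples):
+
+```console
+$ telepresence connect
+$ curl -s web-app.emojivoto:8080/mypath   # the namespace-qualified name resolves from the laptop
+```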
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+On first use, Telepresence will discover your ingress configuration, make its best guess at the correct values, and prompt you to confirm or update them.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+ In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+ Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon running.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application and cluster components are written using Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon.
Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.6/howtos/cluster-in-vm.md b/docs/telepresence/2.6/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..f7623491b
--- /dev/null
+++ b/docs/telepresence/2.6/howtos/cluster-in-vm.md
@@ -0,0 +1,191 @@
+---
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running your cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a docker repository (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+ +#### Vagrantfile +```ruby +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# bridge is the name of the host's default network device +$bridge = 'wlp5s0' + +# default_route should be the IP of the host's default route. +$default_route = '192.168.1.1' + +# nameserver must be the IP of an external DNS, such as 8.8.8.8 +$nameserver = '8.8.8.8' + +# server_name should also be added to the host's /etc/hosts file and point to the server_ip +# for easy access when pushing docker images +server_name = 'multi' + +# static IPs for the server and agents. Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. 
+  network_script = <<-SHELL
+  sudo -i
+  ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route}
+  cp /etc/resolv.conf /etc/resolv.conf.orig
+  sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
+  SHELL
+
+  vm.hostname = name
+  vm.network 'public_network', bridge: $bridge, ip: ip
+  vm.synced_folder './shared', '/vagrant_shared'
+  vm.provider 'virtualbox' do |vb|
+    vb.memory = '4096'
+    vb.cpus = '2'
+  end
+  vm.provision 'shell', inline: script
+  vm.provision 'shell', inline: network_script, run: 'always'
+end
+
+Vagrant.configure('2') do |config|
+  config.vm.box = 'generic/alpine314'
+
+  config.vm.define 'server', primary: true do |server|
+    config_vm(server_name, server_ip, server_script, server.vm)
+  end
+
+  agents.each do |agent_name, agent_ip|
+    config.vm.define agent_name do |agent|
+      config_vm(agent_name, agent_ip, agent_script, agent.vm)
+    end
+  end
+end
+```
+
+The Kubernetes manifest to add the registry:
+
+#### registry.yaml
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: kube-registry-v0
+  namespace: kube-system
+  labels:
+    k8s-app: kube-registry
+    version: v0
+spec:
+  replicas: 1
+  selector:
+    app: kube-registry
+    version: v0
+  template:
+    metadata:
+      labels:
+        app: kube-registry
+        version: v0
+    spec:
+      containers:
+        - name: registry
+          image: registry:2
+          resources:
+            limits:
+              cpu: 100m
+              memory: 200Mi
+          env:
+            - name: REGISTRY_HTTP_ADDR
+              value: :5000
+            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
+              value: /var/lib/registry
+          volumeMounts:
+            - name: image-store
+              mountPath: /var/lib/registry
+          ports:
+            - containerPort: 5000
+              name: registry
+              protocol: TCP
+      volumes:
+        - name: image-store
+          hostPath:
+            path: /var/lib/registry-storage
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kube-registry
+  namespace: kube-system
+  labels:
+    app: kube-registry
+    kubernetes.io/name: "KubeRegistry"
+spec:
+  selector:
+    app: kube-registry
+  ports:
+    - name: registry
+      port: 5000
+      targetPort: 5000
+      protocol: TCP
+  type: LoadBalancer
+```

diff --git a/docs/telepresence/2.6/howtos/intercepts.md b/docs/telepresence/2.6/howtos/intercepts.md
new file mode 100644
index 000000000..87bd9f92b
--- /dev/null
+++ b/docs/telepresence/2.6/howtos/intercepts.md
@@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands.
OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+   The 401 response is expected when you first connect.
+
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name : example-service
+       State          : ACTIVE
+       Workload kind  : Deployment
+       Destination    : 127.0.0.1:8080
+       Intercepting   : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted the service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+
+
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
diff --git a/docs/telepresence/2.6/howtos/outbound.md b/docs/telepresence/2.6/howtos/outbound.md
new file mode 100644
index 000000000..d1a9676a9
--- /dev/null
+++ b/docs/telepresence/2.6/howtos/outbound.md
@@ -0,0 +1,89 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.3.7 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.3.7 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.3.7 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://<cluster public IP>
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+     <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma separated list of namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, so that the service can access other services in that namespace without using qualified names, just as it will once it is deployed there. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin: the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```
+   $ telepresence intercept <name of intercept> --namespace <name of namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name of intercept>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxy) for more details.
diff --git a/docs/telepresence/2.6/howtos/preview-urls.md b/docs/telepresence/2.6/howtos/preview-urls.md
new file mode 100644
index 000000000..670f72dd3
--- /dev/null
+++ b/docs/telepresence/2.6/howtos/preview-urls.md
@@ -0,0 +1,126 @@
+---
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share development environments with preview URLs
+
+Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual.
+
+Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators.
+
+## Creating a preview URL
+
+1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed.
+   Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service.
+
+2. Enter `telepresence login` to launch Ambassador Cloud in your browser.
+
+   If you are in an environment where Telepresence cannot launch in a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).
+
+3. Start the intercept with `telepresence intercept <service-name> --port <port> --env-file <path-to-env-file>` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+4. Answer the question prompts.
+   * **What's your ingress' IP address?**: the IP address or DNS name of your cluster's ingress (this is usually a "service.namespace" DNS name).
+   * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports.
+   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
+   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+
+   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+$ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' TCP port number?
+
+          [default: -]: 80
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]: y
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+          [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("<intercept id>:example-service")
+       Preview URL            : https://<random domain name>.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument.
For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the preview URL generated from the intercept.
+Traffic is now intercepted from your preview URL without impacting other traffic from your ingress.
+
+
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP header. Read more on context propagation.
+
+
+7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Sharing a preview URL with people outside your team
+
+To collaborate with someone outside of your identity provider's organization,
+log into [Ambassador Cloud](https://app.getambassador.io/cloud/),
+navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL.
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service,
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Click on the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove <intercept-name>`
diff --git a/docs/telepresence/2.6/howtos/request.md b/docs/telepresence/2.6/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/2.6/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+ 2. Navigate to the desired service's Intercepts page.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers** and **Browse**.
+
+The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
diff --git a/docs/telepresence/2.6/images/container-inner-dev-loop.png b/docs/telepresence/2.6/images/container-inner-dev-loop.png
new file mode 100644
index 000000000..06586cd6e
Binary files /dev/null and b/docs/telepresence/2.6/images/container-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.6/images/docker-header-containers.png b/docs/telepresence/2.6/images/docker-header-containers.png
new file mode 100644
index 000000000..06f422a93
Binary files /dev/null and b/docs/telepresence/2.6/images/docker-header-containers.png differ
diff --git a/docs/telepresence/2.6/images/github-login.png b/docs/telepresence/2.6/images/github-login.png
new file mode 100644
index 000000000..cfd4d4bf1
Binary files /dev/null and b/docs/telepresence/2.6/images/github-login.png differ
diff --git a/docs/telepresence/2.6/images/logo.png b/docs/telepresence/2.6/images/logo.png
new file mode 100644
index 000000000..701f63ba8
Binary files /dev/null and b/docs/telepresence/2.6/images/logo.png differ
diff --git a/docs/telepresence/2.6/images/split-tunnel.png b/docs/telepresence/2.6/images/split-tunnel.png
new file mode 100644
index 000000000..5bf30378e
Binary files /dev/null and b/docs/telepresence/2.6/images/split-tunnel.png differ
diff --git a/docs/telepresence/2.6/images/trad-inner-dev-loop.png b/docs/telepresence/2.6/images/trad-inner-dev-loop.png
new file mode 100644
index 000000000..618b674f8
Binary files /dev/null and b/docs/telepresence/2.6/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.6/images/tunnelblick.png b/docs/telepresence/2.6/images/tunnelblick.png
new file mode 100644
index 000000000..8944d445a
Binary files /dev/null and b/docs/telepresence/2.6/images/tunnelblick.png differ
diff --git a/docs/telepresence/2.6/images/vpn-dns.png b/docs/telepresence/2.6/images/vpn-dns.png
new file mode 100644
index 000000000..eed535c45
Binary files /dev/null and b/docs/telepresence/2.6/images/vpn-dns.png differ
diff --git a/docs/telepresence/2.6/install/helm.md b/docs/telepresence/2.6/install/helm.md
new file mode 100644
index 000000000..688d2f20a
--- /dev/null
+++ b/docs/telepresence/2.6/install/helm.md
@@ -0,0 +1,181 @@
+# Install with Helm
+
+[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.
+
+**Note** that installing the Traffic Manager through Helm will prevent `telepresence connect` from ever upgrading it.
If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the steps [below](#upgrading-the-traffic-manager).
+
+For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+## Before you begin
+
+The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
+
+Start by adding this repo to your Helm client with the following command:
+
+```shell
+helm repo add datawire https://app.getambassador.io
+helm repo update
+```
+
+## Install with Helm
+
+When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.
+
+1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:
+
+   ```shell
+   kubectl create namespace ambassador
+   ```
+
+2. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command, for example: `--version v2.4.1`.
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, remove the overlap by dropping `staging` from either the first install or the second.
+
+#### Namespace scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
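+As a sketch, an RBAC-only install might look like this (the release name `telepresence-rbac` is illustrative; the values behave as described above, and `clientRbac.create`/`managerRbac.create` can be toggled the same way — remember to set `subjects` when enabling client RBAC):
+
+```shell
+# Create only the RBAC-related objects, without deploying the
+# Traffic Manager itself.
+helm install telepresence-rbac datawire/telepresence \
+  --namespace ambassador \
+  --set rbac.only=true
+```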
diff --git a/docs/telepresence/2.6/install/index.md b/docs/telepresence/2.6/install/index.md
new file mode 100644
index 000000000..624cb33d6
--- /dev/null
+++ b/docs/telepresence/2.6/install/index.md
@@ -0,0 +1,153 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
diff --git a/docs/telepresence/2.6/install/migrate-from-legacy.md b/docs/telepresence/2.6/install/migrate-from-legacy.md
new file mode 100644
index 000000000..b00f84294
--- /dev/null
+++ b/docs/telepresence/2.6/install/migrate-from-legacy.md
@@ -0,0 +1,109 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the Pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today using the commands you are already used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command | Telepresence Command |
+|--------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload | intercept $workload |
+| --expose localPort[:remotePort] | intercept --port localPort[:remotePort] |
+| --swap-deployment $workload --run-shell | intercept $workload -- bash |
+| --swap-deployment $workload --run $cmd | intercept $workload -- $cmd |
+| --swap-deployment $workload --docker-run $cmd | intercept $workload --docker-run -- $cmd |
+| --run-shell | connect -- bash |
+| --run $cmd | connect -- $cmd |
+| --env-file,--env-json | --env-file, --env-json (haven't changed) |
+| --context,--namespace | --context, --namespace (haven't changed) |
+| --mount,--docker-mount | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain.
There's no harm in leaving the agent running alongside your service, but when you
+want to remove the agents from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
diff --git a/docs/telepresence/2.6/install/upgrade.md b/docs/telepresence/2.6/install/upgrade.md
new file mode 100644
index 000000000..95c28cb19
--- /dev/null
+++ b/docs/telepresence/2.6/install/upgrade.md
@@ -0,0 +1,89 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Important note about upgrading to 2.6.0
+Telepresence 2.6.0 introduces a new way of configuring the traffic-agent sidecar, and will no longer modify the workloads (deployments,
+replicasets, or statefulsets) in order to inject it. Instead, all sidecar injection is performed by a mutating webhook. Because of this
+change, the traffic-manager will reject connections from older clients, which means that when installing a 2.6.0 traffic-manager in a
+cluster, all clients must also upgrade. The 2.6.0 client will work with older traffic-managers.
+
+Please see [What's new in 2.6.0](../../new-in-2.6) for more info.
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+After upgrading your CLI you must stop any live Telepresence processes by issuing `telepresence quit`, then upgrade the Traffic Manager by running `telepresence connect`.
+
+**Note** that if the Traffic Manager has been installed via Helm, `telepresence connect` will never upgrade it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the [Helm documentation](../helm#upgrading-the-traffic-manager).
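+Concretely, the post-upgrade step described above is just two commands (a sketch; run them from the workstation whose CLI you upgraded):
+
+```shell
+# Stop any running Telepresence daemons, then reconnect; connecting
+# upgrades a Traffic Manager that was *not* installed via Helm.
+telepresence quit
+telepresence connect
+```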
diff --git a/docs/telepresence/2.6/new-in-2.6.md b/docs/telepresence/2.6/new-in-2.6.md
new file mode 100644
index 000000000..390106ebf
--- /dev/null
+++ b/docs/telepresence/2.6/new-in-2.6.md
@@ -0,0 +1,64 @@
+# What’s new in Telepresence 2.6.0?
+
+## No more workload modifications
+Prior to 2.6.0, Telepresence's default behavior when intercepting was to update the intercepted workload (Deployment, StatefulSet, ReplicaSet) template to add the
+Traffic Agent sidecar container and update the port definitions.
+The alternative was to use the Mutating Webhook (also known as the agent injector).
+This has been possible for some time, but was considered a more advanced use-case and was not that commonly used.
+In 2.6.0, the workload modification approach is removed and everything relies solely on the Mutating Webhook.
+This brings a number of advantages:
+
+- Workflows like Argo CD no longer break (because the workload now remains stable).
+- The client doesn’t need RBAC rules that allow modification of workloads.
+- The modification is faster and more reliable.
+- The Telepresence code-base will shrink once compatibility with older traffic-managers is dropped.
+- Upgrading is much easier (Helm chart hooks send requests to the agent-injector).
+
+## Agent configuration in ConfigMap
+The old sidecar was configured using environment variables. Some variables were copied from the intercepted container (there could only be one)
+and the others were added by the installer. This approach was not suitable when the demands on the sidecar grew beyond intercepting one port on
+one container. Now, the sidecar is instead configured using a hierarchical ConfigMap entry, which allows for more complex structures. Each namespace
+that contains sidecars has a “telepresence-agents” ConfigMap, with one entry for each intercepted workload. The ConfigMap is maintained by
+the traffic-manager, and new entries are added to it automatically when a client requests an intercept on a workload that hasn’t already been
+intercepted.
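+Since the agent configuration now lives in a plain ConfigMap, you can inspect what the traffic-manager generated (a sketch; the entry layout is version-dependent, and `<namespace>` stands for whichever namespace holds your intercepted workloads):
+
+```shell
+# Show the generated agent configuration, one entry per intercepted workload.
+kubectl get configmap telepresence-agents --namespace <namespace> -o yaml
+```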
+
+## Intercept multiple containers and ports
+The sidecar is now capable of intercepting multiple containers and multiple ports on each container. As before, an intercepted port must be
+a service port that is connected to a port in a container in the intercepted workload. The difference is that now there can be any number of
+such connections, and the user can choose which ones to intercept. Even the OSS-sidecar can do this, but it’s limited to one intercept at a
+time. See [Intercepting multiple ports](../reference/intercepts#intercepting-multiple-ports) for more info.
+
+## Smarter agent
+The OSS-sidecar is only capable of handling the TCP mechanism. It offers no “personal” intercepts. This remains true. What’s different is
+that while the old “smart” agent was able to handle HTTP intercepts only, the new one can handle both HTTP and TCP intercepts. This
+means that it can handle all use-cases. A user that isn’t logged in will default to TCP and thus still block every other attempt to
+intercept the same container/pod combo on the intercepted workload, but there’s no longer a need to reinstall the agent in order for it to
+handle that user. In fact, once the smart agent has been installed, it can remain there forever.
+
+## New intercept flow
+### Flow of old-style intercept
+- Client asks SystemA for the preferred agent image (if logged in).
+- Client finds the workload.
+- Client finds the service based on the workload’s container. It fails unless a unique service/container can be found (the user can assist by
+  providing a service name/service port).
+- Client checks if the agent is present, and if not, alters the workload:
+  - Rename the container port (and the corresponding port in probes).
+  - Add the sidecar with the original port name.
+  - The client applies the modified workload.
+- The client requests an intercept activation from the traffic-manager.
+- The client creates the necessary mounts, and optionally starts a process (or docker container).
+
+### Flow of new-style intercept
+- Client asks the traffic-manager to prepare the intercept.
+- Traffic-manager asks SystemA for the preferred sidecar-image (once; the result is then cached).
+- Traffic-manager finds the workload.
+- Traffic-manager ensures that a sidecar configuration exists and is current.
+- If a new configuration must be created or an existing config was altered:
+  - Traffic-manager creates a config based on all possible service/container connections.
+  - The config is stored in the “telepresence-agents” configmap.
+  - A watcher of the configmap receives an event with the new configuration, and ensures that the corresponding workload is rolled out.
+  - The mutating webhook receives events for each pod and injects a sidecar based on the configuration.
+- Traffic-manager checks if the prepared intercept is unique enough to proceed. If not, the prepare-request returns an error and the client
+  is asked to provide info about the service and/or service-port.
+- The client requests an intercept activation from the traffic-manager.
+- The client creates the necessary mounts, and optionally starts a process (or docker container).
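+As a sketch of choosing one of several possible connections (the workload and port names here are hypothetical; the `--port <local>:<svc-port>` form is the same one used elsewhere in these docs):
+
+```shell
+# Intercept only the service port named "http" on a workload that exposes
+# several ports, forwarding it to port 8080 on this machine.
+telepresence intercept example-workload --port 8080:http
+```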
diff --git a/docs/telepresence/2.6/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.6/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.6/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds. + Accelerate your inner development loop with hot reload using your + existing IDE, and workflow. +

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.6/quick-start/demo-node.md b/docs/telepresence/2.6/quick-start/demo-node.md new file mode 100644 index 000000000..c0f5a9218 --- /dev/null +++ b/docs/telepresence/2.6/quick-start/demo-node.md @@ -0,0 +1,155 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+   Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+   Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs make this kind of collaboration easy.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+Now you're able to share your fix in your local environment with your team!
+
+   To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
+
+## 6. How/Why does this all work?
+
+[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+## What's Next?
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/2.6/howtos/intercepts/)!
diff --git a/docs/telepresence/2.6/quick-start/demo-react.md b/docs/telepresence/2.6/quick-start/demo-react.md
new file mode 100644
index 000000000..2312dbbbc
--- /dev/null
+++ b/docs/telepresence/2.6/quick-start/demo-react.md
@@ -0,0 +1,257 @@
+---
+description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import QSCards26 from './qs-cards';
+import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start - React
+
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+
+In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+   While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+## 1. Download the demo cluster archive
+
+1.
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+   This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME               STATUS   ROLES                  AGE    VERSION
+   tpdemo-prod-1234   Ready    control-plane,master   5d10h  v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet):
+`telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+
+   You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+
+## 2. Test Telepresence
+
+[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+`telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   The 401 response is expected. What's important is that you were able to contact the API.
+
+   Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+1. Clone the emojivoto app:
+`git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+`kubectl apply -k emojivoto/kustomize/deployment`
+
+1. Change the kubectl namespace:
+`kubectl config set-context --current --namespace=emojivoto`
+
+1.
List the Services:
+`kubectl get svc`
+
+   ```
+   $ kubectl get svc
+
+   NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+   emoji-svc    ClusterIP   10.43.162.236                 8080/TCP,8801/TCP   29s
+   voting-svc   ClusterIP   10.43.51.201                  8080/TCP,8801/TCP   29s
+   web-app      ClusterIP   10.43.242.240                 80/TCP              29s
+   web-svc      ClusterIP   10.43.182.119                 8080/TCP            29s
+   ```
+
+1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form of `service.namespace`.
+
+   Congratulations, you can now access services running in your cluster by name from your laptop!
+
+## 4. Test app
+
+1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.
+
+1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and notice that the leaderboard does not actually update; an error is also shown in the browser dev console:
+`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`
+
+   Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨ Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster.
+
+   Victory, your local React server is running a-ok!
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+   This command must be run in the terminal window where you ran the script because the script sets environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+   The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.6/quick-start/go.md b/docs/telepresence/2.6/quick-start/go.md new file mode 100644 index 000000000..32d118f0b --- /dev/null +++ b/docs/telepresence/2.6/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.
+
+   ```go
+   3  import (
+   4    "context"
+   5    "fmt" // DELETE THIS LINE
+   6
+   7    pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21    return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac or `Menu -> File -> Save`) and verify that the error is now fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic.
The `voting-svc` traffic is now destined for the local Dockerized version of the service. This intercepts *all the traffic* to the `voting-svc` service and routes it to the fixed local version.
+
+   Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by your service, all thanks to the preview URL.
+
+1. First leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster (see the sketch after these steps). When prompted for ingress configuration, all default values should be correct as displayed below.
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version which is running locally.
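+The flow mirrors the `example-service` intercept shown earlier in these docs; as a sketch of what to expect (the prompts, names, and generated URL will differ in your session):
+
+```console
+$ telepresence intercept voting --port 8081:8080
+  # After logging in, telepresence asks the four ingress questions shown
+  # earlier (the defaults are correct here), then the intercept summary
+  # includes a line such as:
+  #   Preview URL : https://<random-subdomain>.preview.edgestack.me
+```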
+
+## What's Next?
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/2.6/howtos/intercepts/)!
diff --git a/docs/telepresence/2.6/quick-start/index.md b/docs/telepresence/2.6/quick-start/index.md
new file mode 100644
index 000000000..e0d26fa9e
--- /dev/null
+++ b/docs/telepresence/2.6/quick-start/index.md
@@ -0,0 +1,7 @@
+---
+description: Telepresence Quick Start.
+---
+
+import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding'
+
diff --git a/docs/telepresence/2.6/quick-start/qs-cards.js b/docs/telepresence/2.6/quick-start/qs-cards.js
new file mode 100644
index 000000000..5b68aa4ae
--- /dev/null
+++ b/docs/telepresence/2.6/quick-start/qs-cards.js
@@ -0,0 +1,71 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
+


      Collaborating


      Use preview URLs to collaborate with your colleagues and others
      outside of your organization.



      Outbound Sessions


      While connected to the cluster, your laptop can interact with
      services as if it were another pod in the cluster.



      FAQs


      Learn more about use cases and the technical implementation of
      Telepresence.


+
  );
}
diff --git a/docs/telepresence/2.6/quick-start/qs-go.md b/docs/telepresence/2.6/quick-start/qs-go.md
new file mode 100644
index 000000000..2e140f6a7
--- /dev/null
+++ b/docs/telepresence/2.6/quick-start/qs-go.md
@@ -0,0 +1,396 @@
+---
+description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty test cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards26 from './qs-cards'
+
+
+
+# Telepresence Quick Start - **Go**
+
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://github.com/pilu/fresh) to auto-reload the Go server when you make code changes later. Install it by running:
    `go get github.com/pilu/fresh`
    Then start the Go server:
    `$GOPATH/bin/fresh`

    ```
    $ go get github.com/pilu/fresh

    $ $GOPATH/bin/fresh

    ...
    10:23:41 app | Welcome to the DataProcessingGoService!
    ```


    Install Go from here and set your GOPATH if needed.


4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```


    Victory, your local Go server is running a-ok!


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.6/quick-start/qs-java.md b/docs/telepresence/2.6/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.6/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.6/quick-start/qs-node.md b/docs/telepresence/2.6/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.6/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.6/quick-start/qs-python-fastapi.md b/docs/telepresence/2.6/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.6/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+
FastAPI requires Python 3, so use:
`pip3 install fastapi uvicorn requests && python3 app.py`
(If `python` and `pip` already point to Python 3 on your system, `pip install fastapi uvicorn requests && python app.py` works as well.)

    ```
    $ pip install fastapi uvicorn requests && python app.py

    Collecting fastapi
    ...
    Application startup complete.

    ```

    Install Python 3 from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```


    Victory, your local service is running a-ok!


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.6/quick-start/qs-python.md b/docs/telepresence/2.6/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.6/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`
+
+  ```
+  $ pip install flask requests && python app.py
+
+  Collecting flask
+  ...
+  Welcome to the DataServiceProcessingPythonService!
+  ...
+
+  ```
+
+  Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+  ```
+  $ curl localhost:3000/color
+
+  "blue"
+  ```
+
+  Victory, your local Python server is running a-ok!
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+  ```
+  $ telepresence intercept dataprocessingservice --port 3000
+
+  Using Deployment dataprocessingservice
+  intercepted
+    Intercept name: dataprocessingservice
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:3000
+    Intercepting  : all TCP connections
+  ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will automatically reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+## 7. Create a Preview URL
+
+Create a personal intercept with a preview URL. This means that only
+traffic coming from the preview URL will be intercepted, so you can
+easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and
+   sharing preview URLs:
+
+   ```console
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+   If you are in an environment where Telepresence cannot launch a
+   local browser for you to interact with, you will need to pass the
+   [`--apikey` flag to `telepresence
+   login`](../../reference/client/login/).
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`
+   Then, when asked for the port, type `8080`; for "use TLS", type `n`; and finally confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how requests enter
+   your cluster. Please Select the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: dataprocessingservice.default]: verylargejavaservice.default
+
+   2/4: What's your ingress' TCP port number?
+
+          [default: 80]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]:
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+          [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination     : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+  The Preview URL now shows exactly what is running on your local laptop, in a way that can be securely shared with anyone you work with.
+
+
+## What's Next? 
+ + diff --git a/docs/telepresence/2.6/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.6/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.6/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.6/redirects.yml b/docs/telepresence/2.6/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.6/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.6/reference/architecture.md b/docs/telepresence/2.6/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.6/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the cluster's
+network, connecting to the cluster and handling intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing
+open source User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment, and if it is missing, it will attempt to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod `.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either forward the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or pass it along to the container in the
+pod that usually handles requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager. 
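+As a rough sketch of that flow from the CLI side, using names from the quick start: once logged in, creating an
+intercept yields a preview URL whose ephemeral domain is generated by Ambassador Cloud (the domain shown is a
+placeholder, not real output):
+
+```console
+$ telepresence login
+$ telepresence intercept dataprocessingservice --port 3000
+  ...
+  Preview URL : https://<random-domain>.preview.edgestack.me
+```
+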
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon takes arguments and environment variables from the `Deployment` manifest that specify which service the intercept
+should run on, along with configuration similar to what you would provide when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts with development images built
+by CI, so that the changes from a pull request can be quickly visualized in a live cluster before they land. When using Deployment
+Previews, the Preview URL link is posted to the associated GitHub pull request.
+
+See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews
+or for a reference on how Pod-Daemon can be manually deployed to the cluster.
+
+
+## Changes from Service Preview
+
+With Ambassador's previous offering, Service Preview, the Traffic Agent had to be added to a pod manually via an
+annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.
diff --git a/docs/telepresence/2.6/reference/client.md b/docs/telepresence/2.6/reference/client.md
new file mode 100644
index 000000000..146fe0956
--- /dev/null
+++ b/docs/telepresence/2.6/reference/client.md
@@ -0,0 +1,31 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones. 
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `helm` | Installs, upgrades, or uninstalls the traffic-manager in the cluster |
+| `connect` | Starts the local daemon and connects Telepresence to your cluster. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it followed by the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
+| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI and the Traffic-Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence Traffic Agents from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, or the `--all-agents` flag to remove all Traffic Agents from all workloads |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
diff --git a/docs/telepresence/2.6/reference/client/login.md b/docs/telepresence/2.6/reference/client/login.md
new file mode 100644
index 000000000..ab4319a54
--- /dev/null
+++ b/docs/telepresence/2.6/reference/client/login.md
@@ -0,0 +1,61 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud). 
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to explicitly run
+`telepresence login`; doing so should only be truly necessary
+when you require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs
+an enhanced Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts from
+Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/ .
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface. 
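+
+For example, a minimal sketch of a non-interactive login in CI, assuming the key is exposed in an
+environment variable named `TELEPRESENCE_API_KEY` (the variable name is illustrative):
+
+```console
+# Log in without a browser, using a previously generated API key.
+$ telepresence login --apikey="$TELEPRESENCE_API_KEY"
+```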
diff --git a/docs/telepresence/2.6/reference/client/login/apikey-2.png b/docs/telepresence/2.6/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/2.6/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/2.6/reference/client/login/apikey-3.png b/docs/telepresence/2.6/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/2.6/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/2.6/reference/client/login/apikey-4.png b/docs/telepresence/2.6/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/2.6/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/2.6/reference/cluster-config.md b/docs/telepresence/2.6/reference/cluster-config.md
new file mode 100644
index 000000000..77b020499
--- /dev/null
+++ b/docs/telepresence/2.6/reference/cluster-config.md
@@ -0,0 +1,285 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.17.0` or higher).
+
+However, some advanced features do require configuration in the
+cluster.
+
+## TLS
+
+Suppose other applications in the cluster expect to speak TLS to your
+intercepted application (perhaps you're using a service-mesh that does
+mTLS).
+
+In order to use `--mechanism=http` (or any features that imply
+`--mechanism=http`) you need to tell Telepresence about the TLS
+certificates in use.
+
+Do this by adjusting your
+[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
+annotations on the intercepted Pods:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
++        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
+     spec:
++      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
+       containers:
+```
+
+- The `getambassador.io/inject-terminating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS server
+  certificate to use for decrypting and responding to incoming
+  requests.
+
+  When Telepresence modifies the Service and workload port
+  definitions to point at the Telepresence Agent sidecar's port
+  instead of your application's actual port, the sidecar will use this
+  certificate to terminate TLS.
+
+- The `getambassador.io/inject-originating-tls-secret` annotation
+  (optional) names the Kubernetes Secret that contains the TLS
+  client certificate to use for communicating with your application.
+
+  You will need to set this if your application expects incoming
+  requests to speak TLS (for example, your
+  code expects to handle mTLS itself instead of letting a service-mesh
+  sidecar handle mTLS for it, or the port definition that Telepresence
+  modified pointed at the service-mesh sidecar instead of at your
+  application). 
+
+  If you do set this, you should set it to the
+  same client certificate Secret that you configure the Ambassador
+  Edge Stack to use for mTLS.
+
+It is only possible to refer to a Secret that is in the same Namespace
+as the Pod.
+
+The Pod will need to have permission to `get` and `watch` each of
+those Secrets.
+
+Telepresence understands `type: kubernetes.io/tls` Secrets and
+`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
+Secrets that it detects to be formatted as one of those types.
+
+## Air-gapped cluster
+
+
+If your cluster is on an isolated network such that it cannot
+communicate with Ambassador Cloud, then some additional configuration
+is required to acquire a license key in order to use personal
+intercepts.
+
+### Create a license
+
+1. 
+
+2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.
+
+3. You will be prompted for your Cluster ID. Ensure your
+kubeconfig context is using the cluster you want to create a license for, then
+run this command to generate the Cluster ID:
+
+   ```
+   $ telepresence current-cluster-id
+
+   Cluster ID: 
+   ```
+
+4. Click *Generate API Key* to finish generating the license.
+
+5. On the licenses page, download the license file associated with your cluster.
+
+### Add license to cluster
+There are two separate ways you can add the license to your cluster: manually creating and deploying
+the license secret, or having the Helm chart manage the secret.
+
+You only need to do one of the two options.
+
+#### Manual deploy of license secret
+
+1. Use this command to generate a Kubernetes Secret config using the license file:
+
+   ```
+   $ telepresence license -f 
+
+   apiVersion: v1
+   data:
+     hostDomain: 
+     license: 
+   kind: Secret
+   metadata:
+     creationTimestamp: null
+     name: systema-license
+     namespace: ambassador
+   ```
+
+2. Save the output as a YAML file and apply it to your
+cluster with `kubectl`.
+
+3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
+the following into a file (for the example we'll assume it's called license-values.yaml):
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     secret:
+       # This tells the helm chart not to create the secret since you've created it yourself
+       create: false
+   ```
+
+4. Install the helm chart into the cluster
+
+   ```
+   helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml
+   ```
+
+5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
+pulled and in a registry your cluster can pull from.
+
+6. Have users set the `images` [config key](../config/#images) so telepresence uses the aforementioned image for their agent.
+
+#### Helm chart manages the secret
+
+1. Get the JWT token from the downloaded license file
+
+   ```
+   $ cat ~/Downloads/ambassador.License_for_yourcluster
+   eyJhbGnotarealtoken.butanexample
+   ```
+
+2. Create the following values file, substituting your real JWT token in for the one used in the example below.
+(for this example we'll assume the following is placed in a file called license-values.yaml)
+
+   ```
+   licenseKey:
+     # This mounts the secret into the traffic-manager
+     create: true
+     # This is the value from the license file you download. This value is an example and will not work
+     value: eyJhbGnotarealtoken.butanexample
+     secret:
+       # This tells the helm chart to create the secret
+       create: true
+   ```
+
+3. 
Install the helm chart into the cluster
+
+   ```
+   helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml
+   ```
+
+Users will now be able to use preview intercepts with the
+`--preview-url=false` flag. Even with the license key, preview URLs
+cannot be used without enabling direct communication with Ambassador
+Cloud, as Ambassador Cloud is essential to their operation.
+
+If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.
+
+Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
+air-gapped environment.
+
+## Mutating Webhook
+
+Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
+port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
+and in sync as far as GitOps workflows (such as ArgoCD) are concerned.
+
+The injection will happen on demand the first time an attempt is made to intercept the workload.
+
+If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
+annotation to your workload template's annotations:
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
++      annotations:
++        telepresence.getambassador.io/inject-traffic-agent: disabled
+     spec:
+       containers:
+```
+
+### Service Name and Port Annotations
+
+Telepresence will automatically find all services and all ports that will connect to a workload and make them available
+for an intercept, but you can explicitly define that only one service and/or port can be intercepted.
+
+```diff
+ spec:
+   template:
+     metadata:
+       labels:
+         service: your-service
+       annotations:
++        telepresence.getambassador.io/inject-service-name: my-service
++        telepresence.getambassador.io/inject-service-port: https
+     spec:
+       containers:
+```
+
+### Note on Numeric Ports
+
+If the targetPort of your intercepted service is pointing at a port number, in addition to
+injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
+reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.
+
+
+Note that this initContainer requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. 
+ + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.6/reference/config.md b/docs/telepresence/2.6/reference/config.md new file mode 100644 index 000000000..0ee52c13a --- /dev/null +++ b/docs/telepresence/2.6/reference/config.md @@ -0,0 +1,284 @@ +# Laptop-side configuration + +## Global Configuration +Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. The location of this file varies based on your OS: + +* macOS: `$HOME/Library/Application Support/telepresence/config.yml` +* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml` +* Windows: `%APPDATA%\telepresence\config.yml` + +For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence. + +### Values + +The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys. + +Here is an example configuration to show you the conventions of how Telepresence is configured: +**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist** + +```yaml +timeouts: + agentInstall: 1m + intercept: 10s +logLevels: + userDaemon: debug +images: + registry: privateRepo # This overrides the default docker.io/datawire repo + agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting +cloud: + refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week. +grpc: + maxReceiveSize: 10Mi +telepresenceAPI: + port: 9980 +``` + +#### Timeouts + +Values for `timeouts` are all durations either as a number of seconds +or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings +can be fractional (`1.5h`) or combined (`2h45m`). 
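+As an illustration, here is a sketch of a `timeouts` block mixing the two forms (the values are
+arbitrary examples, not recommendations):
+
+```yaml
+timeouts:
+  agentInstall: 120   # plain number of seconds
+  helm: 2m30s         # combined duration string
+```
+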
+
+These are the valid fields for the `timeouts` key:
+
+| Field | Description | Type | Default |
+|---|---|---|---|
+| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
+| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
+| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
+| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
+| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
+
+#### Log Levels
+
+Values for the `logLevels` fields are one of the following strings,
+case-insensitive:
+
+ - `trace`
+ - `debug`
+ - `info`
+ - `warning` or `warn`
+ - `error`
+ - `fatal`
+ - `panic`
+
+For whichever log-level you select, you will get logs labeled with that level and of higher severity.
+(For example, if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`.)
+
+These are the valid fields for the `logLevels` key:
+
+| Field | Description | Type | Default |
+|---|---|---|---|
+| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
+| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |
+
+#### Images
+Values for `images` are strings. These values affect the objects that are deployed in the cluster,
+so it's important to ensure users have the same configuration.
+
+Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
+from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
+to handle installation of the `traffic-agents`. 
+ +These are the valid fields for the `images` key: + +| Field | Description | Type | Default | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------| +| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. 
You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h |
+| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
+| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |
+
+Telepresence attempts to auto-detect if the cluster is capable of
+communication with Ambassador Cloud, but may still prompt you to log in
+in cases where only the on-laptop client wishes to communicate with
+Ambassador Cloud. If you want those auto-login points to be disabled
+as well, or would like it to not attempt to communicate with
+Ambassador Cloud at all (even for the auto-detection), then be sure to
+set the `skipLogin` value to `true`.
+
+Reminder: To use personal intercepts, which normally require a login,
+you must have a license key in your cluster and specify which
+`agentImage` should be installed by also adding the following to your
+`config.yml`:
+
+```yaml
+images:
+  agentImage: /
+```
+
+#### Grpc
+The `maxReceiveSize` setting determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.
+
+The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
+```
+128974848, 129e6, 129M, 123Mi
+```
+
+#### RESTful API server
+The `telepresenceAPI` key controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
+If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
+See [RESTful API server](../restapi) for more info.
+
+#### Daemons
+
+`daemons` controls which binary to use for the user daemon. By default it will
+use the Telepresence binary. For example, this can be used to tell Telepresence to
+use the Telepresence Pro binary.
+
+| Field | Description | Type | Default |
+|---|---|---|---|
+| `userDaemonBinary` | The path to the binary you want to use for the User Daemon. | [string][yaml-str] | The path to the Telepresence executable |
+
+
+## Per-Cluster Configuration
+Some configuration is not global to Telepresence and is actually specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.
+
+### Values
+The current per-cluster configuration supports `dns`, `alsoProxy`, and `manager` keys. 
+To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so: + +``` +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + dns: + also-proxy: + manager: + name: example-cluster +``` +#### DNS +The fields for `dns` are: local-ip, remote-ip, exclude-suffixes, include-suffixes, and lookup-timeout. + +| Field | Description | Type | Default | +|--------------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|-----------------------------------------------------------------------------| +| `local-ip` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` | +| `remote-ip` | The address of the cluster's DNS service. | IP address [string][yaml-str] | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service | +| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver) | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` | +| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` | +| `lookup-timeout` | Maximum time to wait for a cluster side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds | + +Here is an example kubeconfig: +``` +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + dns: + include-suffixes: + - .se + exclude-suffixes: + - .com + name: example-cluster +``` + + +#### AlsoProxy + +When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device. +All connections to addresses that the subnet spans will be dispatched to the cluster + +Here is an example kubeconfig for the subnet `1.2.3.4/32`: +``` +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + also-proxy: + - 1.2.3.4/32 + name: example-cluster +``` + +#### NeverProxy + +When using `never-proxy` you provide a list of subnets after the key in your kubeconfig file. These will never be routed via the +TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before +telepresence connects is the route they will keep. + +Here is an example kubeconfig for the subnet `1.2.3.4/32`: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + never-proxy: + - 1.2.3.4/32 + name: example-cluster +``` + +##### Using AlsoProxy together with NeverProxy + +Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing routes apply. +Usually this means that the most specific route will win. + +So, for example, if an `also-proxy` subnet falls within a broader `never-proxy` subnet: + +```yaml +never-proxy: [10.0.0.0/16] +also-proxy: [10.0.5.0/24] +``` + +Then the specific `also-proxy` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not. 
+
+Conversely, if a `never-proxy` subnet is inside a larger `also-proxy` subnet:
+
+```yaml
+also-proxy: [10.0.0.0/16]
+never-proxy: [10.0.5.0/24]
+```
+
+Then all of the also-proxy of `10.0.0.0/16` will be proxied, with the exception of the specific `never-proxy` of `10.0.5.0/24`.
+
+#### Manager
+
+The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence/2.6/reference/dns.md b/docs/telepresence/2.6/reference/dns.md
new file mode 100644
index 000000000..e38fbc61d
--- /dev/null
+++ b/docs/telepresence/2.6/reference/dns.md
@@ -0,0 +1,75 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service-name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `` only. Without an active intercept, the namespace qualified DNS name must be used (in the form `.`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot reach the service by short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  
+  
+  
+    
+    Emoji Vote
+  ...
+```
+
+Using the namespace-qualified DNS name, though, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified. 
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  
+  
+  
+    
+    Emoji Vote
+  ...
+```
+
+Now curling that service by its short name works, and it will continue to work as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `.` regardless of intercepts.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.6/reference/docker-run.md b/docs/telepresence/2.6/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.6/reference/docker-run.md
@@ -0,0 +1,31 @@
+---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept --port --docker-run -- `
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port :`. The container port will default to the local port when using the `--port ` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces
+- `--env-file ` Loads the intercepted environment
+- `--name intercept--` Names the Docker container; this flag is omitted if explicitly given on the command line
+- `-p ` The local port for the intercept and the container port
+- `-v ` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info
diff --git a/docs/telepresence/2.6/reference/environment.md b/docs/telepresence/2.6/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.6/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code running on your laptop for the service being intercepted.
+
+There are three options available to do this:
+
+1. 
diff --git a/docs/telepresence/2.6/reference/environment.md b/docs/telepresence/2.6/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/2.6/reference/environment.md @@ -0,0 +1,46 @@ +---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept. You can then use these variables with the code running on your laptop for the service being intercepted.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka: processes that don't have `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do can instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.
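
As a quick sketch (the service name and the values shown are illustrative), these variables can be inspected from a subshell started by the intercept:

```console
$ telepresence intercept example-service --port 8080 -- /bin/bash
bash$ echo $TELEPRESENCE_CONTAINER
echo-container
bash$ echo $TELEPRESENCE_ROOT
/tmp/telfs-166119801
```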
diff --git a/docs/telepresence/2.6/reference/inside-container.md b/docs/telepresence/2.6/reference/inside-container.md new file mode 100644 index 000000000..637e0cdfd --- /dev/null +++ b/docs/telepresence/2.6/reference/inside-container.md @@ -0,0 +1,37 @@ +# Running Telepresence inside a container

It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or to work with different clusters simultaneously.

## Building the container

Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:

```Dockerfile
# Dockerfile with telepresence and its prerequisites
FROM alpine:3.13

# Install Telepresence prerequisites
RUN apk add --no-cache curl iproute2 sshfs

# Download and install the telepresence binary
RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
```

In order to build the container, do this in the same directory as the `Dockerfile`:
```
$ docker build -t tp-in-docker .
```

## Running the container

Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.

The command to run the container can look like this:
```bash
$ docker run \
    --cap-add=NET_ADMIN \
    --device /dev/net/tun:/dev/net/tun \
    --network=host \
    -v ~/.kube/config:/root/.kube/config \
    -it --rm tp-in-docker
```
diff --git a/docs/telepresence/2.6/reference/intercepts/index.md b/docs/telepresence/2.6/reference/intercepts/index.md new file mode 100644 index 000000000..3016a4a86 --- /dev/null +++ b/docs/telepresence/2.6/reference/intercepts/index.md @@ -0,0 +1,389 @@ +import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, Telepresence installs a *traffic-agent* sidecar into the workload. That traffic-agent supports one or more intercept *mechanisms* that it uses to decide which traffic to intercept. Telepresence has a simple default traffic-agent; however, you can configure a different traffic-agent with more sophisticated mechanisms either by setting the [`images.agentImage` field in `config.yml`](../config/#images) or by writing an [`extensions/${extension}.yml`][extensions] file that tells Telepresence about a traffic-agent that it can use, what mechanisms that traffic-agent supports, and command-line flags to expose to the user to configure that mechanism. You may tell Telepresence which known mechanism to use with the `--mechanism=${mechanism}` flag or by setting one of the `--${mechanism}-XXX` flags, which implicitly sets the mechanism; for example, setting `--http-header=auto` implicitly sets `--mechanism=http`.

The default open-source traffic-agent only supports the `tcp` mechanism, which treats the raw layer 4 TCP streams as opaque and sends all of that traffic down to the developer's workstation. This means that it is a "global" intercept, affecting all users of the cluster.

In addition to the default open-source traffic-agent, Telepresence already knows about the Ambassador Cloud [traffic-agent][ambassador-agent], which supports the `http` mechanism. The `http` mechanism operates at a higher layer, working with layer 7 HTTP, and may intercept specific HTTP requests, allowing other HTTP requests through to the regular service. This allows for "personal" intercepts which only intercept traffic tagged as belonging to a given developer.
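
As a sketch (the service name and header value are illustrative), the mechanism can be selected explicitly with `--mechanism`, or implied by a mechanism-specific flag:

```console
# Global intercept using the tcp mechanism (all TCP traffic to the service)
$ telepresence intercept example-service --port 8080 --mechanism=tcp

# Personal intercept; --http-header implies --mechanism=http
$ telepresence intercept example-service --port 8080 --http-header=x-dev=alice
```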

[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50

## Intercept behavior when logged in to Ambassador Cloud

Logging in to Ambassador Cloud (with [`telepresence login`](../client/login/)) changes the Telepresence defaults in two ways.

First, being logged in to Ambassador Cloud causes Telepresence to default to `--mechanism=http --http-header=auto --http-path-prefix=/` (`--mechanism=http` is redundant here, since it is implied by the other `--http-xxx` flags). If you hadn't been logged in, it would have defaulted to `--mechanism=tcp`. This tells Telepresence to use the Ambassador Cloud traffic-agent to do smart "personal" intercepts and only intercept a subset of HTTP requests, rather than just intercepting the entirety of all TCP connections. This is important for working in a shared cluster with teammates, and is important for the preview URL functionality below. See `telepresence intercept --help` for information on using the `--http-header` and `--http-path-xxx` flags to customize which requests are intercepted.

Secondly, being logged in causes Telepresence to default to `--preview-url=true`. If you hadn't been logged in, it would have defaulted to `--preview-url=false`. This tells Telepresence to take advantage of Ambassador Cloud to create a preview URL for this intercept, creating a shareable URL that automatically sets the appropriate headers to have requests coming from the preview URL be intercepted. In order to create the preview URL, it will prompt you for four settings about how your cluster's ingress is configured. For each, Telepresence tries to intelligently detect the correct value for your cluster; if it detects it correctly, you may simply press "enter" and accept the default, otherwise you must tell Telepresence the correct value.

When creating an intercept with the `http` mechanism, the traffic-agent sends a `GET /telepresence-http2-check` request to your service and to the process running on your local machine at the port specified in your intercept, in order to determine if they support HTTP/2. This is required for the intercepts to behave correctly. If you do not have a service running locally when the intercept is created, the traffic-agent will use the result it got from checking the in-cluster service.

## Supported workloads

Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting (installing a traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.

<Alert severity="info">

While many of our examples use Deployments, they would also work on ReplicaSets and StatefulSets.

</Alert>

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the `--namespace` option. When this option is used, and `--workload` is not used, then the given name is interpreted as the name of the workload and the name of the intercept will be constructed from that name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept `hello-myns`. In order to remove the intercept, you will need to run `telepresence leave hello-myns` instead of just `telepresence leave hello`.

The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is being intercepted, see [this doc](../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.

```shell
telepresence intercept <deployment-name> --port=<TCP-port>
```

If you *are* logged in to Ambassador Cloud, setting the `--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
```

This will output an HTTP header that you can set on your request for that traffic to be intercepted:

```console
$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false
Using Deployment <deployment-name>
intercepted
    Intercept name: <full name of intercept>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<local TCP port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<intercept id>:<full name of intercept>")
```

Run `telepresence status` to see the list of active intercepts.

```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster public IP>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop-username>@<laptop-hostname>
```

Finally, run `telepresence leave <name of intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag             | Description                                                      | Required |
|------------------|------------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress                                   | yes      |
| `--ingress-port` | The port for the ingress                                         | yes      |
| `--ingress-tls`  | Whether TLS should be used                                       | no       |
| `--ingress-l5`   | Whether a different IP address should be used in request headers | no       |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you need to tell Telepresence which service port you are trying to intercept. To specify, you can either use the name of the service port or the port number itself. To see which options might be available to you and your service, use kubectl to describe your service or look in the object's YAML. For more information on multiple ports, see the [Kubernetes documentation][kube-multi-port-services].
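
For example (the service and port names are illustrative), a named service port shows up in the output of `kubectl describe` like this:

```console
$ kubectl describe service example-service
Name:         example-service
Namespace:    default
...
Port:         http  80/TCP
TargetPort:   8080/TCP
```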

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <deployment-name> --port=<local-port>:<service-port-identifier>
Using Deployment <deployment-name>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create a new intercept the same way you did above and it will change which service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept. But if you use something like [Argo](https://www.getambassador.io/docs/argo/latest/), there may be two services (that use the same labels) to manage traffic between a canary and a stable service.

Fortunately, if you know which service you want to use when intercepting a workload, you can use the `--service` flag. So in the aforementioned example, if you wanted to use the `echo-stable` service when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generated-hash> --port <port> --service echo-stable
Using ReplicaSet echo-rollout-<generated-hash>
intercepted
    Intercept name    : echo-rollout-<generated-hash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this by creating more than one intercept that identifies the same workload using the `--workload` flag.

Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both targeting the same `multi-echo` deployment.

```console
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-http
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8080
    Service Port Identifier: http
    Volume Mount Point     : /tmp/telfs-893700837
    Intercepting           : all TCP requests
    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-grpc
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8443
    Service Port Identifier: grpc
    Volume Mount Point     : /tmp/telfs-1277723591
    Intercepting           : all TCP requests
    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application container; they usually provide auxiliary functionality to an application, and can usually be reached at `localhost:${SIDECAR_PORT}`.
For example, a common use case for a sidecar is to proxy requests to a database: your application would connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then connect to the database, perhaps augmenting the connection with TLS or authentication.

When intercepting a container that uses sidecars, you might want those sidecars' ports to be available to your local application at `localhost:${SIDECAR_PORT}`, exactly as they would be if running in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <deployment-name> --port=<local-port>:<service-port-identifier> --to-pod=<sidecar-port>
Using Deployment <deployment-name>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the flag (`--to-pod=<port1> --to-pod=<port2>`).

## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services), which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods. Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP. So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

<Alert severity="info">
This utilizes an initContainer that requires `NET_ADMIN` capabilities. If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
</Alert>

<Alert severity="info">
This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters. To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
</Alert>

<Alert severity="info">
Intercepting headless services without a selector is not supported.
</Alert>
diff --git a/docs/telepresence/2.6/reference/intercepts/manual-agent.md b/docs/telepresence/2.6/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..d5cfbda6c --- /dev/null +++ b/docs/telepresence/2.6/reference/intercepts/manual-agent.md @@ -0,0 +1,267 @@ +import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such as very strict company security policies preventing it.

<Alert severity="warning">
Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable. Try the Mutating Webhook before proceeding.
</Alert>

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file have the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service-config.yaml
agentImage: docker.io/datawire/tel2:2.6.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.6.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.6.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```

<Alert severity="info">
Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
</Alert>
### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You now need to edit the `Deployment` YAML to add the generated container, init-container, and volumes. These are placed as elements of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes`, respectively. You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`. These changes should look like the following:

```diff
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: "my-service"
   labels:
     service: my-service
 spec:
   replicas: 1
   selector:
     matchLabels:
       service: my-service
   template:
     metadata:
       labels:
         service: my-service
+      annotations:
+        telepresence.getambassador.io/manually-injected: "true"
     spec:
       containers:
       - name: echo-container
         image: jmalloc/echo-server
         ports:
         - containerPort: 8080
         resources: {}
+      - args:
+        - agent
+        env:
+        - name: _TEL_AGENT_POD_IP
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: status.podIP
+        image: docker.io/datawire/tel2:2.6.0-beta.12
+        name: traffic-agent
+        ports:
+        - containerPort: 9900
+          protocol: TCP
+        readinessProbe:
+          exec:
+            command:
+            - /bin/stat
+            - /tmp/agent/ready
+        resources: { }
+        volumeMounts:
+        - mountPath: /tel_pod_info
+          name: traffic-annotations
+        - mountPath: /etc/traffic-agent
+          name: traffic-config
+        - mountPath: /tel_app_exports
+          name: export-volume
+      initContainers:
+      - args:
+        - agent-init
+        image: docker.io/datawire/tel2:2.6.0-beta.12
+        name: tel-agent-init
+        resources: { }
+        securityContext:
+          capabilities:
+            add:
+            - NET_ADMIN
+        volumeMounts:
+        - mountPath: /etc/traffic-agent
+          name: traffic-config
+      volumes:
+      - downwardAPI:
+          items:
+          - fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.annotations
+            path: annotations
+        name: traffic-annotations
+      - configMap:
+          items:
+          - key: my-service
+            path: config.yaml
+          name: telepresence-agents
+        name: traffic-config
+      - emptyDir: { }
+        name: export-volume
```
diff --git a/docs/telepresence/2.6/reference/linkerd.md b/docs/telepresence/2.6/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.6/reference/linkerd.md @@ -0,0 +1,75 @@ +---
Description: "How to get Linkerd meshed services working with Telepresence"
---

# Using Telepresence with Linkerd

## Introduction
Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-outbound-ports: "8081"
```

The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.
## Prerequisites
1. [Telepresence binary](../../install)
2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
3. Kubectl
4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)

## Deploy
Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        linkerd.io/inject: "enabled"
        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8000
        env:
        - name: PORT
          value: "8000"
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
```

## Connect to Telepresence
Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:

```
$ telepresence list

  quote: ready to intercept (traffic-agent not yet installed)
```

## Run the intercept
Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence/2.6/reference/rbac.md b/docs/telepresence/2.6/reference/rbac.md new file mode 100644 index 000000000..d78133441 --- /dev/null +++ b/docs/telepresence/2.6/reference/rbac.md @@ -0,0 +1,236 @@ +import Alert from '@material-ui/lab/Alert';

# Telepresence RBAC
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence. This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.

There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, both described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.

In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside of the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.

## Requirements

- Kubernetes version 1.16+
- Cluster admin privileges to apply RBAC

## Editing your kubeconfig

This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g.
John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster # Must match the cluster value in the contexts config
  cluster:
    ## The cluster field is highly cloud dependent.
contexts:
- name: my-context
  context:
    cluster: my-cluster # Must match the name field in the clusters config
    user: tp-user
users:
- name: tp-user # Must match the name of the Service Account created by the cluster admin
  user:
    token: <service-account-token> # See note below
```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<uuid>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.

## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```
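
Assuming the manifest above is saved as `telepresence-admin.yaml` (the filename is illustrative), a cluster admin can apply it and verify the role binding with:

```console
$ kubectl apply -f telepresence-admin.yaml
$ kubectl describe clusterrolebinding telepresence-clusterrolebinding
```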
There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: telepresence-test-developer
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace only telepresence user access

This section offers RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

For each accessible namespace:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: tp-namespace # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
diff --git a/docs/telepresence/2.6/reference/restapi.md b/docs/telepresence/2.6/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.6/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in a `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
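
As a sketch of a local test (the path and the extra header are illustrative), an intercepting process can pass the ID along using the environment that Telepresence provides:

```console
$ curl "localhost:$TELEPRESENCE_API_PORT/consume-here?path=/api" \
    -H "x-telepresence-caller-intercept-id: $TELEPRESENCE_INTERCEPT_ID" \
    -H "x: y"
true
```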

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active.
2. The "/healthz" endpoint must respond with OK.
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.

#### test endpoint using curl
Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.

### intercept-info
`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is always omitted when `intercepted` is `false`.

#### test endpoint using curl
Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
*   Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
```
diff --git a/docs/telepresence/2.6/reference/routing.md b/docs/telepresence/2.6/reference/routing.md new file mode 100644 index 000000000..061ba8fa9 --- /dev/null +++ b/docs/telepresence/2.6/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing

## Outbound

### DNS resolution
When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `include-suffixes` option in the [local DNS configuration](../config/#dns).

#### Cluster side DNS lookups
The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)]. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver.
A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries) and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node` while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod.
This is a best-effort approach. Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for doing this. One is that it is important that the request originates from the correct namespace. For example:

```bash
curl some-host
```

results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.

#### DNS recursion
When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and responded to with an NXNAME record. Telepresence performs this solution to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions would occur if the VIF simply dispatched such attempts on to the cluster.

The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

In 2.3.2, this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.

##### Footnotes:

<a name="namespacelimit">1</a>: A future version of Telepresence will not allow concurrent intercepts that span multiple namespaces.

<a name="servicesubnet">2</a>: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/2.6/reference/tun-device.md b/docs/telepresence/2.6/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.6/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device
The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP
The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).

### No SSH required

The VIF approach is somewhat similar to using `sshuttle`, but without any requirements for extra software, configuration or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption
When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client nor in the traffic-agent, and the traffic-agent container can run as the default user.

### No Firewall rules
With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.6/reference/volume.md b/docs/telepresence/2.6/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.6/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports local mounting of the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+ +``` +telepresence intercept <service name> --port <port> --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept <service name> --port <port> --mount=true -- /bin/bash +Using Deployment <deployment name> +intercepted + Intercept name : <intercept name> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<port> + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, any paths that the code you run locally uses to reach the mounted volumes, whether from the subshell or from the command given to the intercept, need to be prefixed with the `$TELEPRESENCE_ROOT` environment variable. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use either environment variable flag (`--env-file` or `--env-json`) to retrieve the variable. diff --git a/docs/telepresence/2.6/reference/vpn.md b/docs/telepresence/2.6/reference/vpn.md new file mode 100644 index 000000000..ceabd4c0c --- /dev/null +++ b/docs/telepresence/2.6/reference/vpn.md @@ -0,0 +1,157 @@ +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues with your VPN setup. It guides you through a series of steps to figure out if there are conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is in split-tunnel mode. This means that only traffic that _must_ pass through the VPN is directed through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. Client-side, taking the Tunnelblick client as an example, you must ensure that the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected, then press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it has gathered information about your network configuration without an active connection, it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +``` + +This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while telepresence is connected. + +The ideal resolution in this case is to move the pods to a different subnet. This is possible, for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you to reach hosts inside the VPC's `10.0.0.0/19`. + +However, it is not always possible to move the pods to a different subnet. In these cases, you should use the [never-proxy](../config#neverproxy) configuration to prevent certain hosts from being masked. This might be particularly important for DNS resolution. In an AWS ClientVPN setup it is often customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case): + +![Modify Client VPN Endpoint](../images/vpn-dns.png) + +If this is the case for your VPN, you should place the DNS server in the never-proxy list for your cluster. In your kubeconfig file, add a `telepresence` extension like so: + +```yaml +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + never-proxy: + - 10.0.0.2/32 +``` + +##### Case 2: Cluster masked by VPN + +In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR that the VPN routes: + +``` +❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling +``` + +Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is connected. + +As with the first case, the ideal resolution is to move the pods away, but this may not always be possible. In that case, your best bet is to attempt to shrink the mask of the VPN's CIDR (that is, make it route more hosts) so that Telepresence's more specific routes win by virtue of specificity. One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) section for more on split-tunneling). + +Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need to use never-proxy rules to whitelist hosts in the VPN: + +``` +❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
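After adding never-proxy entries to the kubeconfig extension, a quick sanity check (a sketch; the exact output layout varies between versions) is to reconnect and inspect the client's view of the routing configuration before re-running the test:

```console
$ telepresence connect
$ telepresence status
```

As of 2.5.0, `telepresence status` displays the also-proxy and never-proxy subnets, so you can confirm that an entry such as `10.0.0.2/32` was actually picked up before running `telepresence test-vpn` again.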
diff --git a/docs/telepresence/2.6/release-notes/no-ssh.png b/docs/telepresence/2.6/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.6/release-notes/run-tp-in-docker.png b/docs/telepresence/2.6/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.2.png b/docs/telepresence/2.6/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.6/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.6/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.6/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.6/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.6/release-notes/telepresence-2.5.0-pro-daemon.png 
b/docs/telepresence/2.6/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.6/release-notes/tunnel.jpg b/docs/telepresence/2.6/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.6/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.6/releaseNotes.yml b/docs/telepresence/2.6/releaseNotes.yml new file mode 100644 index 000000000..be5217e74 --- /dev/null +++ b/docs/telepresence/2.6/releaseNotes.yml @@ -0,0 +1,1475 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. + +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.6.8 + date: "2022-06-23" + notes: + - type: feature + title: Specify Your DNS + body: >- + The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified. + - type: feature + title: Specify a Fallback DNS + body: >- + Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use. + - type: feature + title: Intercept UDP Ports + body: >- + It is now possible to intercept UDP ports with Telepresence and also use `--to-pod` to forward UDP traffic from ports on localhost. + - type: change + title: Additional Helm Values + body: >- + The Helm chart will now add the `nodeSelector`, `affinity` and `tolerations` values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs. 
+ - type: bugfix + title: Agent Injection Bugfix + body: >- + Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`. + - version: 2.6.7 + date: "2022-06-22" + notes: + - type: bugfix + title: Persistent Sessions + body: >- + The Telepresence client will remember and reuse the traffic-manager session after a network failure or any other event that caused an unclean disconnect. + - type: bugfix + title: DNS Requests + body: >- + Telepresence will no longer forward DNS requests for "wpad" to the cluster. + - type: bugfix + title: Graceful Shutdown + body: >- + The traffic-agent will properly shut down if one of its goroutines errors. + - version: 2.6.6 + date: "2022-06-09" + notes: + - type: bugfix + title: Env Var `TELEPRESENCE_API_PORT` + body: >- + The propagation of the `TELEPRESENCE_API_PORT` environment variable now works correctly. + - type: bugfix + title: Double Printing `--output json` + body: >- + The `--output json` global flag no longer outputs multiple objects. + - version: 2.6.5 + date: "2022-06-03" + notes: + - type: feature + title: Helm Option -- `reinvocationPolicy` + body: >- + The `reinvocationPolicy` of the traffic-agent injector webhook can now be configured using the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Proxy Certificate + body: >- + The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Agent Injection + body: >- + A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart. + docs: install/helm + - type: change + title: Windows Tunnel Version Upgrade + body: >- + Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1. + - type: change + title: Helm Version Upgrade + body: >- + Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9. + - type: change + title: Kubernetes API Version Upgrade + body: >- + Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1. + - type: feature + title: Flag `--watch` Added to `list` Command + body: >- + Added a `--watch` flag to `telepresence list` that can be used to watch interceptable workloads in a namespace. + - type: change + title: Deprecated `images.webhookAgentImage` + body: >- + The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead. + - type: bugfix + title: Default `reinvocationPolicy` Set to Never + body: >- + The `reinvocationPolicy` of the traffic-agent injector webhook now defaults to `Never` instead of `IfNeeded` so that `LimitRange`s on namespaces can inject a missing `resources` element into the injected traffic-agent container. + - type: bugfix + title: UDP + body: >- + UDP-based communication with services in the cluster now works as expected. + - type: bugfix + title: Telepresence `--help` + body: >- + The command help will only show Kubernetes flags on the commands that support them. + - type: change + title: Error Count + body: >- + Only the errors from the last session will be considered when counting the number of errors in the log after a command failure. 
+ - version: 2.6.4 + date: "2022-05-23" + notes: + - type: bugfix + title: Upgrade RBAC Permissions + body: >- + The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade. + - version: 2.6.3 + date: "2022-05-20" + notes: + - type: bugfix + title: Relative Mount Paths + body: >- + The `--mount` intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon. + - type: bugfix + title: Traffic Agent Config + body: >- + The traffic-agent's configuration now updates automatically when services are added, updated or deleted. + - type: bugfix + title: Container Injection for Numeric Ports + body: >- + Telepresence will now always inject an initContainer when the service's targetPort is numeric. + - type: bugfix + title: Matching Services + body: >- + Workloads that have several matching services pointing to the same target port are now handled correctly. + - type: bugfix + title: Unexpected Panic + body: >- + A potential race condition causing a panic when closing a DNS connection is now handled correctly. + - type: bugfix + title: Mount Volume Cleanup + body: >- + A container start would sometimes fail because an old directory remained in a mounted temp volume. + - version: 2.6.2 + date: "2022-05-17" + notes: + - type: bugfix + title: Argo Injection + body: >- + Workloads controlled by other workloads, such as Argo `Rollout`, are now injected correctly. + - type: bugfix + title: Agent Port Mapping + body: >- + Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod. + - type: bugfix + title: GRPC Max Message Size + body: >- + The `telepresence list` command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads. + - version: 2.6.1 + date: "2022-05-16" + notes: + - type: bugfix + title: KUBECONFIG environment variable + body: >- + Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly. + - type: bugfix + title: Don't Panic + body: >- + Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0. + - version: 2.6.0 + date: "2022-05-13" + notes: + - type: feature + title: Intercept multiple containers in a pod, and multiple ports per container + body: >- + Telepresence can now intercept multiple services and/or service-ports that connect to the same pod. + docs: new-in-2.6#intercept-multiple-containers-and-ports + - type: feature + title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook + body: >- + The client will no longer modify deployments, replicasets, or statefulsets in order to + inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result, + the client now needs fewer permissions in the cluster. + docs: install/upgrade#important-note-about-upgrading-to-2.6.0 + - type: change + title: Automatic upgrade of Traffic Agents + body: >- + When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically. + The mutating webhook will then ensure that their pods will receive an updated Traffic Agent. 
+ docs: new-in-2.6#no-more-workload-modifications + - type: change + title: No default image in the Helm chart + body: >- + The helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the + traffic-manager will ask Ambassador Cloud for the preferred image. + docs: new-in-2.6#smarter-agent + - type: change + title: Upgrade to Helm version 3.8.1 + body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager. + - type: bugfix + title: Remote mounts will now function correctly with custom securityContext + body: >- + The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed. + - type: bugfix + title: Improved presentation of flags in CLI help + body: The help for commands that accept Kubernetes flags will now display those flags in a separate group. + - type: bugfix + title: Better termination of process parented by intercept + body: >- + Occasionally an intercept will spawn a command using -- on the command line, often in another console. + When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active, + Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed. + - version: 2.5.8 + date: "2022-04-27" + notes: + - type: bugfix + title: Folder creation on `telepresence login` + body: >- + Fixed a bug where the telepresence config folder would not be created if the user ran `telepresence login` before other commands. + - version: 2.5.7 + date: "2022-04-25" + notes: + - type: change + title: RBAC requirements + body: >- + A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used. + - type: bugfix + title: Windows DNS + body: >- + The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times. + - type: bugfix + title: Session TTL and Reconnect + body: >- + A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect. + - version: 2.5.6 + date: "2022-04-18" + notes: + - type: change + title: Fewer Watchers + body: >- + The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last `connect`. + - type: bugfix + title: More Efficient `gather-logs` + body: >- + The `gather-logs` command will no longer send any logs through `gRPC`. + - version: 2.5.5 + date: "2022-04-08" + notes: + - type: change + title: Traffic Manager Permissions + body: >- + The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions. + - type: bugfix + title: Linux DNS Cache + body: >- + The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes. + - type: bugfix + title: Automatic Connect Sync + body: >- + The `telepresence list` command will produce a correct listing even when not preceded by a `telepresence connect`. + - type: bugfix + title: Disconnect Reconnect Stability + body: >- + The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect. 
+ - type: bugfix + title: Limit Watched Namespaces + body: >- + The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag. + - type: bugfix + title: Limit Namespaces used in `gather-logs` + body: >- + The `gather-logs` command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag. + - version: 2.5.4 + date: "2022-03-29" + notes: + - type: bugfix + title: Linux DNS Concurrency + body: >- + The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out. + - type: bugfix + title: Non-Functional Flag + body: >- + The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag. + - type: bugfix + title: Automatically Remove Failed Intercepts + body: >- + Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around. + - type: bugfix + title: Agent UID + body: >- + The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext. + - type: bugfix + title: Gather-Logs Output Filepath + body: >- + Removed a bad concatenation that corrupted the output path of `telepresence gather-logs`. + - type: change + title: Remove Unnecessary Error Advice + body: >- + The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command. + - type: bugfix + title: Garbage Collection + body: >- + Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart. + - type: bugfix + title: Limit Gathered Logs + body: >- + The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize. + - type: change + title: In-Cluster Checks + body: >- + The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster. + - type: change + title: Expanded Status Command + body: >- + The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON. + - type: change + title: List Command Shows All Intercepts + body: >- + The list command, when used with the `--intercepts` flag, will list the user's intercepts from all namespaces. + - version: 2.5.3 + date: "2022-02-25" + notes: + - type: bugfix + title: TCP Connectivity + body: >- + Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address. + - type: feature + title: Linux Binaries + body: >- + Client-side binaries for the arm64 architecture are now available for Linux. + - version: 2.5.2 + date: "2022-02-23" + notes: + - type: bugfix + title: DNS server bugfix + body: >- + Fixed a bug where Telepresence would use the last server in resolv.conf. + - version: 2.5.1 + date: "2022-02-19" + notes: + - type: bugfix + title: Fix GKE auth issue + body: >- + Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp" + - version: 2.5.0 + date: "2022-02-18" + notes: + - type: feature + title: Intercept specific endpoints + body: >- + The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in + addition to the --http-match flag to filter personal intercepts by the request URL path. + docs: concepts/intercepts#intercepting-a-specific-endpoint + - type: feature + title: Intercept metadata + 
body: >- + The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST + API endpoint /intercept-info. + docs: reference/restapi#intercept-info + - type: change + title: Client RBAC watch + body: >- + The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC + ClusterRole. + docs: reference/rbac + - type: change + title: Dropped backward compatibility with versions <=2.4.4 + body: >- + Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel + functionality was removed. + - type: change + title: No global networking flags + body: >- + The global networking flags are no longer used and using them will render a deprecation warning unless they are supported by the + command. The subcommands that support networking flags are connect, current-cluster-id, + and genyaml. + - type: bugfix + title: Output of status command + body: >- + The also-proxy and never-proxy subnets are now displayed correctly when using the + telepresence status command. + - type: bugfix + title: SETENV sudo privilege no longer needed + body: >- + Telepresence no longer requires SETENV privileges when starting the root daemon. + - type: bugfix + title: Network device names containing dash + body: >- + Telepresence will now parse device names containing dashes correctly when determining routes that it should never block. + - type: bugfix + title: Linux uses cluster.local as domain instead of search + body: >- + The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using + systemd-resolved. Instead, it is added as a domain so that names ending with it are routed + to the DNS server. + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version + with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using + the intercept.appProtocolStrategy in the config.yml file. + docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts in agents injected by the + mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when + communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting + the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for unit and integration testing. 
It is + now easier for contributors to run tests on PRs since maintainers can add an + "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + Users will no longer be asked to log in or add ingress information when creating an intercept until a check has been + made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes + is now properly logged as errors. + - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the + traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil + telepresenceAPI value. + docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. + See the VPN docs for more information on how to use it. + docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to + help determine whether messages with a given set of headers should be consumed from a message queue where the + intercept headers are added to the messages. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value + was removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections. + body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled + connections now behave more like TCP connections, especially when it comes to timeouts. + We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. + This helps disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts. + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters via the corresponding flags. 
+ docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, + analogous to also-proxy, that defines a set of subnets that + will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches. + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as `--swap-deployment` can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes. + - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the + cluster runs in a Docker container on the client). + docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login + will prompt the user to log in again to get a new token. Previously, + the user had to telepresence quit and telepresence logout + to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. + Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier + for users to find this page whether they acquire Telepresence through Brew or some other mechanism. 
+ image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the + subnets generated from the cluster's pods/services, Telepresence would proxy traffic + to the API server, which would result in hanging or a failed connection. We now ensure + that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the + pod yaml manifest for all kubernetes components you are getting logs for + (traffic-manager and/or pods containing a + traffic-agent container). This flag is set to false + by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will + anonymize your pod names + namespaces in the output file. We replace the + sensitive names with simple names (e.g. pod-1, namespace-2) to maintain + relationships between the objects without exposing the real names of your + objects. This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this + terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. + docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a + headless service on whatever port it exposes and get a response from the + intercept. This leverages the same approach as intercepting numeric ports when + using the mutating webhook injector, and mainly requires the initContainer + to have NET_ADMIN capabilities. + docs: reference/intercepts/#intercepting-headless-services + + - type: change + title: Use one tunnel per connection instead of multiplexing into one tunnel + body: >- + We have changed Telepresence so that it uses one tunnel per connection instead + of multiplexing all connections into one tunnel. This will provide substantial + performance improvements. Clients will still be backwards compatible with older + managers that only support multiplexing. + + - type: bugfix + title: Added checks for Telepresence Kubernetes compatibility + body: >- + Telepresence currently works with Kubernetes server versions 1.17.0 + and higher. We have added logs in the connector and traffic-manager + to let users know when they are using Telepresence with a cluster it doesn't support. + docs: reference/cluster-config + + - type: bugfix + title: Traffic Agent security context is now only added when necessary + body: >- + When creating an intercept, Telepresence will now only set the traffic agent's GID + when strictly necessary (i.e. when using headless services or numeric ports). This mitigates + an issue on OpenShift clusters where the traffic agent can fail to be created due to + OpenShift's security policies banning arbitrary GIDs. 
+ + - version: 2.4.4 + date: "2021-09-27" + notes: + - type: feature + title: Numeric ports in agent injector + body: >- + The agent injector now supports injecting Traffic Agents into pods that have unnamed ports. + docs: reference/cluster-config/#note-on-numeric-ports + + - type: feature + title: New subcommand to gather logs and export into zip file + body: >- + Telepresence has logs for various components (the + traffic-manager, traffic-agents, the root and + user daemons), which are integral for understanding and debugging + Telepresence behavior. We have added the telepresence + gather-logs command to make it simple to compile logs for + all Telepresence components and export them in a zip file that can + be shared with others and/or included in a GitHub issue. For more + information on usage, run telepresence gather-logs --help. + docs: reference/client + image: telepresence-2.4.4-gather-logs.png + + - type: feature + title: Pod CIDR strategy is configurable in Helm chart + body: >- + Telepresence now enables you to directly configure how to get + pod CIDRs when deploying Telepresence with the Helm chart. + The default behavior remains the same. We've also introduced + the ability to explicitly set what the pod CIDRs should be. + docs: install/helm + + - type: bugfix + title: Compute pod CIDRs more efficiently + body: >- + When computing subnets using the pod CIDRs, the traffic-manager + now uses fewer CPU cycles. + docs: reference/routing/#subnets + + - type: bugfix + title: Prevent busy loop in traffic-manager + body: >- + In some circumstances, the traffic-manager's CPU + would max out and get pinned at its limit. This required a + shutdown or pod restart to fix. We've added some fixes + to prevent the traffic-manager from getting into this state. + + - type: bugfix + title: Added a fixed buffer size to TUN-device + body: >- + The TUN-device now has a max buffer size of 64K. This prevents the + buffer from growing limitlessly until it receives a PSH, which could + be a blocking operation when receiving lots of TCP-packets. + docs: reference/tun-device + + - type: bugfix + title: Fix hanging user daemon + body: >- + When Telepresence encountered an issue connecting to the cluster or + the root daemon, it could hang indefinitely. It will now error correctly + when it encounters that situation. + + - type: bugfix + title: Improved proprietary agent connectivity + body: >- + To determine whether the cluster's environment is air-gapped, the + proprietary agent attempts to connect to the cloud during startup. + To deal with a possible initial failure, the agent backs off + and retries the connection with an increasing backoff duration. + + - type: bugfix + title: Telepresence correctly reports intercept port conflict + body: >- + When creating a second intercept targeting the same local port, + it now gives the user an informative error message. Additionally, + it tells them which intercept is currently using that port to make + it easier to remedy. + + - version: 2.4.3 + date: "2021-09-15" + notes: + - type: feature + title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment + body: >- + When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment + variable in the environment. + docs: reference/environment/#telepresence-environment-variables + + - type: bugfix + title: Improved daemon stability + body: >- + Fixed a timing bug that sometimes caused a "daemon did not start" failure. 
+ + - type: bugfix + title: Complete logs for Windows + body: >- + Crash stack traces and other errors were incorrectly not written to log files. This has + been fixed so logs for Windows should be at parity with the ones on macOS and Linux. + + - type: bugfix + title: Log rotation fix for Linux kernel 4.11+ + body: >- + On Linux kernel 4.11 and above, the log file rotation now properly reads the + birth-time of the log file. Older kernels continue to use the old behavior + of using the change-time in place of the birth-time. + + - type: bugfix + title: Improved error messaging + body: >- + When Telepresence encounters an error, it tells the user where they should look for + logs related to the error. We have refined this so that it only tells users to look + for errors in the daemon logs for issues that are logged there. + + - type: bugfix + title: Stop resolving localhost + body: >- + When using the overriding DNS resolver, it will no longer apply search paths when + resolving localhost, since that should be resolved on the user's machine + instead of the cluster. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Variable cluster domain + body: >- + Previously, the cluster domain was hardcoded to cluster.local. While this + is true for many Kubernetes clusters, it is not for all of them. Now this value is + retrieved from the traffic-manager. + + - type: bugfix + title: Improved cleanup of traffic-agents + body: >- + Telepresence now uninstalls traffic-agents installed via mutating webhook + when using telepresence uninstall --everything. + + - type: bugfix + title: More large file transfer fixes + body: >- + Downloading large files during an intercept will no longer cause timeouts and hanging + traffic-agents. + + - type: bugfix + title: Setting --mount to false when intercepting works as expected + body: >- + When using --mount=false while performing an intercept, the file system + was still mounted. This has been remedied so the intercept behavior respects the + flag. + docs: reference/volume + + - type: bugfix + title: Traffic-manager establishes outbound connections in parallel + body: >- + Previously, the traffic-manager established outbound connections + sequentially. This meant that slow (and failing) Dial calls would + block all outbound traffic from the workstation (for up to 30 seconds). We now + establish these connections in parallel so that won't occur. + docs: reference/routing/#outbound + + - type: bugfix + title: Status command reports correct DNS settings + body: >- + Telepresence status now correctly reports DNS settings for all operating + systems, instead of Local IP:nil, Remote IP:nil when they don't exist. + + - version: 2.4.2 + date: "2021-09-01" + notes: + - type: feature + title: New subcommand to temporarily change log-level + body: >- + We have added a new telepresence loglevel subcommand that enables users + to temporarily change the log-level for the local daemons, the traffic-manager and + the traffic-agents. While the logLevels settings from the config will + still be used by default, this can be helpful if you are currently experiencing an issue and + want to have higher-fidelity logs, without doing a telepresence quit and + telepresence connect. You can use telepresence loglevel --help to get + more information on options for the command. 
+ docs: reference/config + + - type: change + title: All components have info as the default log-level + body: >- + All components of Telepresence (traffic-agent, + traffic-manager, local daemons) now use info as the default log-level. + + - type: bugfix + title: Updating RBAC in helm chart to fix cluster-id regression + body: >- + In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID + of the default namespace. The helm chart was not updated to give the traffic-manager + those permissions, which has since been fixed. This impacted users who use licensed features of + the Telepresence extension in an air-gapped environment. + docs: reference/cluster-config/#air-gapped-cluster + + - type: bugfix + title: Timeouts for Helm actions are now respected + body: >- + The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang + indefinitely when failing to install the traffic-manager. + docs: reference/config#timeouts + + - version: 2.4.1 + date: "2021-08-30" + notes: + - type: feature + title: External cloud variables are now configurable + body: >- + We now support configuring the host and port for the cloud in your config.yml. These + are used when logging in to utilize features provided by an extension, and are also passed + along as environment variables when installing the `traffic-manager`. Additionally, we + now run our testsuite with these variables set to localhost to continue to ensure Telepresence + is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT + environment variables are no longer used. + image: telepresence-2.4.1-systema-vars.png + docs: reference/config/#cloud + + - type: feature + title: Helm chart can now regenerate certificate used for mutating webhook on-demand. + body: >- + You can now set agentInjector.certificate.regenerate when deploying Telepresence + with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. + docs: install/helm + + - type: change + title: Traffic Manager installed via helm + body: >- + The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. + This change is transparent to the user. + A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary. + docs: reference/config#timeouts + + - type: change + title: traffic-manager gets cluster ID itself instead of via environment variable + body: >- + The traffic-manager used to get the cluster ID as an environment variable when running + telepresence connect or via adding the value in the helm chart. This was + clunky, so now the traffic-manager gets the value itself as long as it has permissions + to "get" and "list" namespaces (this has been updated in the helm chart). + docs: install/helm + + - type: bugfix + title: Telepresence now mounts all directories from /var/run/secrets + body: >- + In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. + We now mount *all* directories in /var/run/secrets, which, for example, includes + directories like eks.amazonaws.com used for IRSA tokens. + docs: reference/volume + + - type: bugfix + title: Max gRPC receive size correctly propagates to all grpc servers + body: >- + This fixes a bug where the max gRPC receive size was only propagated to some of the + grpc servers, causing failures when the message size was over the default. 
+      docs: reference/config/#grpc
+
+    - type: bugfix
+      title: Updated our Homebrew packaging to run manually
+      body: >-
+        We made some updates to our script that packages Telepresence for Homebrew so that it
+        can be run manually. This will enable maintainers of Telepresence to run the script manually
+        should we ever need to roll back a release and have latest point to an older version.
+      docs: install/
+
+    - type: bugfix
+      title: Telepresence uses namespace from kubeconfig context on each call
+      body: >-
+        In the past, Telepresence would use whatever namespace was specified in the kubeconfig's
+        current-context for the entirety of the time a user was connected to Telepresence. This would
+        lead to confusing behavior when a user changed the context in their kubeconfig and expected
+        Telepresence to acknowledge that change. Telepresence now does exactly that, using the
+        namespace designated by the context on each call.
+
+    - type: bugfix
+      title: Idle outbound TCP connections timeout increased to 7200 seconds
+      body: >-
+        Some users were noticing that their intercepts would start failing after 60 seconds.
+        This was because the keepalive timeout for idle outbound TCP connections was set to 60 seconds,
+        which we have now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+    - type: bugfix
+      title: Telepresence will automatically remove a socket upon ungraceful termination
+      body: >-
+        When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+        that the process has terminated ungracefully" and implied that they should remove the socket. We've
+        now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+    - type: bugfix
+      title: Fixed user daemon deadlock
+      body: >-
+        Remedied a situation where the user daemon could hang when a user was logged in.
+
+    - type: bugfix
+      title: Fixed agentImage config setting
+      body: >-
+        The config setting images.agentImage is no longer required to contain the repository, and it
+        will use the value at images.repository.
+      docs: reference/config/#images
+      docs: reference/routing#linux-systemd-resolved-resolver
+
+    - type: bugfix
+      title: Fixed an edge case when intercepting a container with multiple ports
+      body: >-
+        When specifying a port of a container to intercept, if there was a container in the
+        pod without ports, it was automatically selected. This has been fixed so we'll only
+        choose the container with "no ports" if there's no container that explicitly matches
+        the port used in your intercept.
+      docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+    - type: bugfix
+      title: $(NAME) references in agent's environments are now interpolated correctly.
+      body: >-
+        If an environment variable in your workload referenced another via $(NAME), intercepts
+        would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+    - type: bugfix
+      title: Telepresence no longer prints INFO message when there is no config.yml
+      body: >-
+        Fixed a regression that printed an INFO message to the terminal when there wasn't a
+        config.yml present. The config is optional, so this message has been removed.
+      docs: reference/config
+
+    - type: bugfix
+      title: Telepresence no longer panics when using --http-match
+      body: >-
+        Fixed a bug where Telepresence would panic if the value passed to --http-match
+        didn't contain an equal sign. The correct syntax is shown in the --help
+        string and looks like --http-match=HTTP2_HEADER=REGEX.
+      docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+    - type: bugfix
+      title: Improved subnet updates
+      body: >-
+        The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if
+        the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+        client and the `traffic-manager`. This has been fixed so we only send updates when the subnets
+        themselves actually change.
+      docs: reference/routing/#subnets
+
+- version: 2.3.7
+  date: "2021-07-23"
+  notes:
+    - type: feature
+      title: Also-proxy in telepresence status
+      body: >-
+        An also-proxy entry in the Kubernetes cluster config will
+        show up in the output of the telepresence status command.
+      docs: reference/config
+
+    - type: feature
+      title: Non-interactive telepresence login
+      body: >-
+        telepresence login now has an
+        --apikey=KEY flag that allows for
+        non-interactive logins. This is useful for headless
+        environments where launching a web browser is impossible,
+        such as cloud shells, Docker containers, or CI.
+      image: telepresence-2.3.7-newkey.png
+      docs: reference/client/login/
+
+    - type: bugfix
+      title: Mutating webhook injector correctly hides named ports for probes.
+      body: >-
+        The mutating webhook injector has been fixed to correctly rename named ports for
+        liveness and readiness probes.
+      docs: reference/cluster-config
+
+    - type: bugfix
+      title: telepresence current-cluster-id crash fixed
+      body: >-
+        Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id`
+        to crash.
+      docs: reference/cluster-config
+
+    - type: bugfix
+      title: Better UX around intercepts with no local process running
+      body: >-
+        Requests would hang indefinitely when initiating an intercept before you
+        had a local process running. This has been fixed and will result in an
+        Empty reply from server until you start a local process.
+      docs: reference/intercepts
+
+    - type: bugfix
+      title: API keys no longer show as "no description"
+      body: >-
+        New API keys generated internally for communication with
+        Ambassador Cloud no longer show up as "no description" in
+        the Ambassador Cloud web UI. Existing API keys generated by
+        older versions of Telepresence will still show up this way.
+      image: telepresence-2.3.7-keydesc.png
+
+    - type: bugfix
+      title: Fix corruption of user-info.json
+      body: >-
+        Fixed a race condition where logging in and logging out
+        rapidly could cause memory corruption or corruption of the
+        user-info.json cache file used when
+        authenticating with Ambassador Cloud.
+
+    - type: bugfix
+      title: Improved DNS resolver for systemd-resolved
+      body:
+        Telepresence's systemd-resolved-based DNS resolver is now more
+        stable, and if it fails to initialize, the overriding resolver
+        will no longer cause general DNS lookup failures when Telepresence defaults to
+        using it.
+      docs: reference/routing#linux-systemd-resolved-resolver
+
+    - type: bugfix
+      title: Faster telepresence list command
+      body:
+        The performance of telepresence list has been increased
+        significantly by reducing the number of calls the command makes to the cluster.
+      docs: reference/client
+
+- version: 2.3.6
+  date: "2021-07-20"
+  notes:
+    - type: bugfix
+      title: Fix preview URLs
+      body: >-
+        Fixed a regression introduced in 2.3.5 that caused preview
+        URLs to not work.
+
+    - type: bugfix
+      title: Fix subnet discovery
+      body: >-
+        Fixed a regression introduced in 2.3.5 where the Traffic
+        Manager's RoleBinding did not correctly reference
+        the traffic-manager Role, preventing
+        subnet discovery from working correctly.
+      docs: reference/rbac/
+
+    - type: bugfix
+      title: Fix root-user configuration loading
+      body: >-
+        Fixed a regression introduced in 2.3.5 where the root daemon
+        did not correctly read the configuration file, ignoring the
+        user's configured log levels and timeouts.
+      docs: reference/config/
+
+    - type: bugfix
+      title: Fix a user daemon crash
+      body: >-
+        Fixed an issue that could cause the user daemon to crash
+        during shutdown, as during shutdown it unconditionally
+        attempted to close a channel even though the channel might
+        already be closed.
+
+- version: 2.3.5
+  date: "2021-07-15"
+  notes:
+    - type: feature
+      title: traffic-manager in multiple namespaces
+      body: >-
+        We now support installing multiple traffic managers in the same cluster.
+        This will allow operators to install deployments of Telepresence that are
+        limited to certain namespaces.
+      image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+      docs: install/helm
+    - type: feature
+      title: No more dependence on kubectl
+      body: >-
+        Telepresence no longer depends on having an external
+        kubectl binary, which might not be present for
+        OpenShift users (who have oc instead of
+        kubectl).
+    - type: feature
+      title: Agent image now configurable
+      body: >-
+        We now support configuring which agent image registry to use in the
+        config. This enables users whose laptop is in an air-gapped environment to
+        create personal intercepts without requiring a login. It also makes it easier
+        for those who are developing on Telepresence to specify which agent image should
+        be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+        used.
+      image: ./telepresence-2.3.5-agent-config.png
+      docs: reference/config/#images
+    - type: feature
+      title: Max gRPC receive size now configurable
+      body: >-
+        The default max size of messages received through gRPC (4 MB) is sometimes insufficient.
+        It can now be configured.
+      image: ./telepresence-2.3.5-grpc-max-receive-size.png
+      docs: reference/config/#grpc
+    - type: feature
+      title: CLI can be used in air-gapped environments
+      body: >-
+        While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+        we've added an option users can add to their config.yml to ensure the CLI acts like it
+        is in an air-gapped environment. Air-gapped environments require a manually installed
+        license.
+      docs: reference/cluster-config/#air-gapped-cluster
+      image: ./telepresence-2.3.5-skipLogin.png
+- version: 2.3.4
+  date: "2021-07-09"
+  notes:
+    - type: bugfix
+      title: Improved IP log statements
+      body: >-
+        Some log statements were printing incorrect characters when they should have been
+        printing IP addresses. This has been resolved to include more accurate and useful logging.
+      docs: reference/config/#log-levels
+      image: ./telepresence-2.3.4-ip-error.png
+    - type: bugfix
+      title: Improved messaging when multiple services match a workload
+      body: >-
+        If multiple services matched a workload when performing an intercept, Telepresence would crash.
+        It now gives the correct error message, instructing the user on how to specify which
+        service the intercept should use.
+      image: ./telepresence-2.3.4-improved-error.png
+      docs: reference/intercepts
+    - type: bugfix
+      title: Traffic-manager creates services in its own namespace to determine subnet
+      body: >-
+        Telepresence will now determine the service subnet by creating a dummy service in its own
+        namespace, instead of the default namespace, which was causing RBAC permissions issues in
+        some clusters.
+      docs: reference/routing/#subnets
+    - type: bugfix
+      title: Telepresence connect respects pre-existing clusterrole
+      body: >-
+        When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+        cluster, Telepresence will no longer try to update the clusterrole.
+      docs: reference/rbac
+    - type: bugfix
+      title: Helm Chart fixed for clientRbac.namespaced
+      body: >-
+        The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+      docs: install/helm
+- version: 2.3.3
+  date: "2021-07-07"
+  notes:
+    - type: feature
+      title: Traffic Manager Helm Chart
+      body: >-
+        Telepresence now supports installing the Traffic Manager via Helm.
+        This will make it easy for operators to install and configure the
+        server-side components of Telepresence separately from the CLI (which
+        in turn allows for better separation of permissions).
+      image: ./telepresence-2.3.3-helm.png
+      docs: install/helm/
+    - type: feature
+      title: Traffic-manager in custom namespace
+      body: >-
+        As the traffic-manager can now be installed in any
+        namespace via Helm, Telepresence can now be configured to look for the
+        Traffic Manager in a namespace other than ambassador.
+        This can be configured on a per-cluster basis.
+      image: ./telepresence-2.3.3-namespace-config.png
+      docs: reference/config
+    - type: feature
+      title: Intercept --to-pod
+      body: >-
+        telepresence intercept now supports a
+        --to-pod flag that can be used to port-forward sidecars'
+        ports from an intercepted pod.
+      image: ./telepresence-2.3.3-to-pod.png
+      docs: reference/intercepts
+    - type: change
+      title: Change in migration from edgectl
+      body: >-
+        Telepresence no longer automatically shuts down the old
+        api_version=1 edgectl daemon. If migrating
+        from such an old version of edgectl, you must now manually
+        shut down the edgectl daemon before running Telepresence.
+        This was already the case when migrating from the newer
+        api_version=2 edgectl.
+    - type: bugfix
+      title: Fixed error during shutdown
+      body: >-
+        The root daemon no longer terminates when the user daemon disconnects
+        from its gRPC streams, and instead waits to be terminated by the CLI.
+        The early termination could cause problems with things not being cleaned up correctly.
+    - type: bugfix
+      title: Intercepts will survive deletion of intercepted pod
+      body: >-
+        An intercept will survive deletion of the intercepted pod provided
+        that another pod is created (or already exists) that can take over.
+- version: 2.3.2
+  date: "2021-06-18"
+  notes:
+    # Headliners
+    - type: feature
+      title: Service Port Annotation
+      body: >-
+        The mutator webhook for injecting traffic-agents now
+        recognizes a
+        telepresence.getambassador.io/inject-service-port
+        annotation to specify which port to intercept, bringing the
+        functionality of the --port flag to users who
+        use the mutator webhook in order to control Telepresence via
+        GitOps.
+      image: ./telepresence-2.3.2-svcport-annotation.png
+      docs: reference/cluster-config#service-port-annotation
+    - type: feature
+      title: Outbound Connections
+      body: >-
+        Outbound connections are now routed through the intercepted
+        Pods, which means that the connections originate from that
+        Pod from the cluster's perspective. This allows service
+        meshes to correctly identify the traffic.
+      docs: reference/routing/#outbound
+    - type: change
+      title: Inbound Connections
+      body: >-
+        Inbound connections from an intercepted agent are now
+        tunneled to the manager over the existing gRPC connection,
+        instead of establishing a new connection to the manager for
+        each inbound connection. This avoids interference from
+        certain service mesh configurations.
+      docs: reference/routing/#inbound
+
+    # RBAC changes
+    - type: change
+      title: Traffic Manager needs new RBAC permissions
+      body: >-
+        The Traffic Manager requires RBAC
+        permissions to list Nodes and Pods, and to create a dummy
+        Service in the manager's namespace.
+      docs: reference/routing/#subnets
+    - type: change
+      title: Reduced developer RBAC requirements
+      body: >-
+        The on-laptop client no longer requires RBAC permissions to list the Nodes
+        in the cluster or to create Services, as that functionality
+        has been moved to the Traffic Manager.
+
+    # Bugfixes
+    - type: bugfix
+      title: Able to detect subnets
+      body: >-
+        Telepresence will now detect the Pod CIDR ranges even if
+        they are not listed in the Nodes.
+      image: ./telepresence-2.3.2-subnets.png
+      docs: reference/routing/#subnets
+    - type: bugfix
+      title: Dynamic IP ranges
+      body: >-
+        The list of cluster subnets that the virtual network
+        interface will route is now configured dynamically and will
+        follow changes in the cluster.
+    - type: bugfix
+      title: No duplicate subnets
+      body: >-
+        Subnets fully covered by other subnets are now pruned
+        internally and thus never superfluously added to the
+        laptop's routing table.
+      docs: reference/routing/#subnets
+    - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+      title: Change in default timeout
+      body: >-
+        The trafficManagerAPI timeout default has
+        changed from 5 seconds to 15 seconds, to accommodate
+        the extended time it takes for the traffic-manager to do its
+        initial discovery of cluster info as a result of the above
+        bugfixes.
+    - type: bugfix
+      title: Removal of DNS config files on macOS
+      body: >-
+        On macOS, files generated under
+        /etc/resolver/ as the result of using
+        include-suffixes in the cluster config are now
+        properly removed on quit.
+      docs: reference/routing/#macos-resolver
+
+    - type: bugfix
+      title: Large file transfers
+      body: >-
+        Telepresence no longer erroneously terminates connections
+        early when sending a large HTTP response from an intercepted
+        service.
+    - type: bugfix
+      title: Race condition in shutdown
+      body: >-
+        When shutting down the user-daemon or root-daemon on the
+        laptop, telepresence quit and related commands
+        no longer return early before everything is fully shut down.
+        You can now count on all of the side effects on the laptop
+        having been cleaned up by the time the command returns.
+- version: 2.3.1
+  date: "2021-06-14"
+  notes:
+    - title: DNS Resolver Configuration
+      body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored and included."
+      image: ./telepresence-2.3.1-dns.png
+      docs: reference/config
+      type: feature
+    - title: AlsoProxy Configuration
+      body: "Telepresence now supports also proxying user-specified subnets so that, while connected to Telepresence, users can access external services that are only reachable from the cluster. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+      image: ./telepresence-2.3.1-alsoProxy.png
+      docs: reference/config
+      type: feature
+    - title: Mutating Webhook for Injecting Traffic Agents
+      body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+      image: ./telepresence-2.3.1-inject.png
+      docs: reference/rbac
+      type: feature
+    - title: Traffic Manager Connect Timeout
+      body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, to accommodate the extended time it takes to apply everything needed for the mutator webhook."
+      image: ./telepresence-2.3.1-trafficmanagerconnect.png
+      docs: reference/config
+      type: change
+    - title: Fix for large file transfers
+      body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+      image: ./telepresence-2.3.1-large-file-transfer.png
+      docs: reference/tun-device
+      type: bugfix
+    - title: Brew Formula Changed
+      body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+      image: ./telepresence-2.3.1-brew.png
+      docs: install/
+      type: change
+- version: 2.3.0
+  date: "2021-06-01"
+  notes:
+    - title: Brew install Telepresence
+      body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+      image: ./telepresence-2.3.0-homebrew.png
+      docs: install/
+      type: feature
+    - title: TCP and UDP routing via Virtual Network Interface
+      body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP."
+      image: ./tunnel.jpg
+      docs: reference/tun-device
+      type: feature
+    - title: SSH is no longer used
+      body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses SSH tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+      image: ./no-ssh.png
+      docs: reference/tun-device/#no-ssh-required
+      type: change
+    - title: Running in a Docker container
+      body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+      image: ./run-tp-in-docker.png
+      docs: reference/inside-container
+      type: feature
+    - title: Configurable Log Levels
+      body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+      image: ./telepresence-2.3.0-loglevels.png
+      docs: reference/config/#log-levels
+      type: feature
+- version: 2.2.2
+  date: "2021-05-17"
+  notes:
+    - title: Legacy Telepresence subcommands
+      body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+      image: ./telepresence-2.2.png
+      docs: install/migrate-from-legacy/
+      type: feature
diff --git a/docs/telepresence/2.6/troubleshooting/index.md b/docs/telepresence/2.6/troubleshooting/index.md
new file mode 100644
index 000000000..87c746fe5
--- /dev/null
+++ b/docs/telepresence/2.6/troubleshooting/index.md
@@ -0,0 +1,106 @@
+---
+description: "Troubleshooting issues related to Telepresence."
+---
+# Troubleshooting
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/).
When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app. If an organization isn't listed during login,
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button, which
+sends an email to the owner. If an access request has been denied in
+the past, the user will not see the **Request** button and will have to
+reach out to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+## Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+*Authorize Ambassador Labs form*
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button and will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+One more step is required before it works correctly:
+5. Try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission.
+7. Reboot your computer.
+
+## Authorization for Preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the `ModHeader extension` for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
diff --git a/docs/telepresence/2.6/versions.yml b/docs/telepresence/2.6/versions.yml
new file mode 100644
index 000000000..cfdb0b443
--- /dev/null
+++ b/docs/telepresence/2.6/versions.yml
@@ -0,0 +1,5 @@
+version: "2.6.8"
+dlVersion: "latest"
+docsVersion: "2.6"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.7 b/docs/telepresence/2.7
deleted file mode 120000
index 22fecabdf..000000000
--- a/docs/telepresence/2.7
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.7
\ No newline at end of file
diff --git a/docs/telepresence/2.7/ci/github-actions.md b/docs/telepresence/2.7/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.7/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
" +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Telepresence with GitHub Actions + +Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependant service. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project. + +You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself. + +## GitHub Actions for Telepresence + +Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following: + + - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully. + - **install**: Installs Telepresence in your CI server with latest version or the one specified by you. + - **login**: Logs into Telepresence, you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key and set it as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one. + - **connect**: Connects to the remote target environment. + - **intercept**: Redirects traffic to the remote service to the version of the service running in CI so you can run integration tests. + +Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information. + +# Using Telepresence in your GitHub Actions CI pipeline + +## Prerequisites + +To enable GitHub Actions with telepresence, you need: + +* A [Telepresence API KEY](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) and set it as an environment variable in your workflow. +* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster. +* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. 
For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`; the version is listed in the `labels` section of the output (see the sketch after this list).
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
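+As a quick check of the Traffic Manager version, something like the following works; the
+`ambassador` namespace here is an assumption, and the exact label names can vary between
+chart versions, so adjust both to match your installation:
+
+```shell
+# Print the labels on the traffic-manager service; the Telepresence
+# version is typically recorded among them.
+kubectl get service traffic-manager -n ambassador -o jsonpath='{.metadata.labels}'
+```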
+ + +
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+         #---- here run your custom command to connect to your cluster
+         #- name: Connect to cluster
+         #  shell: bash
+         #  run: ./connect-to-cluster
+         #----
+         - name: Configuring Telepresence
+           uses: datawire/telepresence-actions/configure@v1.0-rc
+           with:
+             version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, a Telepresence configuration file. This configuration file should be placed at `.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the `.github/telepresence-config/config.yml` file. It is required for the Telepresence GitHub Actions to run properly when there is a new push to the branch in your repository.
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify the placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+         - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+           with:
+             ref: ${{ github.event.pull_request.head.sha }}
+         #---- here run your custom command to run your service
+         #- name: Run your service to test
+         #  shell: bash
+         #  run: ./run_local_service
+         #----
+         # First you need to log in to Telepresence, with your API key
+         - name: Create kubeconfig file
+           run: |
+             cat <<EOF > /opt/kubeconfig
+             ${{ env.KUBECONFIG_FILE }}
+             EOF
+         - name: Install Telepresence
+           uses: datawire/telepresence-actions/install@v1.0-rc
+           with:
+             version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
+         - name: Telepresence connect
+           uses: datawire/telepresence-actions/connect@v1.0-rc
+         - name: Login
+           uses: datawire/telepresence-actions/login@v1.0-rc
+           with:
+             telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+         - name: Intercept the service
+           uses: datawire/telepresence-actions/intercept@v1.0-rc
+           with:
+             service_name: service-name
+             service_port: 8081:8080
+             namespace: namespacename-of-your-service
+             http_header: "x-telepresence-intercept-id=service-intercepted"
+             print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+         #---- here run your custom command
+         #- name: Run integration tests
+         #  shell: bash
+         #  run: ./run_integration_test
+         #----
+   ```
+
+The previous example is a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change that is pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.7/community.md b/docs/telepresence/2.7/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.7/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.7/concepts/context-prop.md b/docs/telepresence/2.7/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.7/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation works only when services and other middleware do not strip these headers away. Propagation is what moves the injected context through the downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on routes the requests follow and performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept both for plain personal intercepts
+and for personal intercepts with preview URLs; these techniques are
+more commonly used for distributed tracing, so what they are being
+used for is a little unorthodox, but the mechanisms for their use are
+already widely deployed because of the prevalence of tracing. The
+headers facilitate the smart routing of requests either to live
+services in the cluster or services running locally on a developer’s
+machine. The intercepted traffic can be further limited by using
+path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.7/concepts/devloop.md b/docs/telepresence/2.7/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.7/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme, that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.7/concepts/devworkflow.md b/docs/telepresence/2.7/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.7/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency.
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds, and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.7/concepts/faster.md b/docs/telepresence/2.7/concepts/faster.md
new file mode 100644
index 000000000..03dc9bd8b
--- /dev/null
+++ b/docs/telepresence/2.7/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
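+In practice that loop can be as short as the following sketch; the service name
+`orders` and local port `8080` are placeholders for illustration:
+
+```shell
+# Connect the laptop to the cluster, then reroute traffic for the
+# "orders" service to a process listening locally on port 8080.
+telepresence connect
+telepresence intercept orders --port 8080
+
+# When finished, stop the intercept and disconnect.
+telepresence leave orders
+telepresence quit
+```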
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.7/concepts/intercepts.md b/docs/telepresence/2.7/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.7/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+ <div class="TabGroup"> + <TabContext value={state.curTab}> + <AppBar class="TabBar" elevation={0} position="static"> + <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types"> + <Tab class="TabHead" value="regular" label="No intercept"/> + <Tab class="TabHead" value="global" label="Global intercept"/> + <Tab class="TabHead" value="personal" label="Personal intercept"/> + </TabList> + </AppBar> + {children} + </TabContext> + </div>
+ ); +}; + +# Types of intercepts + + + + +# No intercept + + + + +This is the normal operation of your cluster without Telepresence. + + + + + +# Global intercept + + + + +**Global intercepts** replace the Kubernetes "Orders" service with the +Orders service running on your laptop. The users see no change, but +with all the traffic coming to your laptop, you can observe and debug +with all your dev tools. + + + +### Creating and using global intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=all + ``` + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service: + + All requests will be sent to the version of your service that is + running in the local development environment. + + + + +# Personal intercept + +**Personal intercepts** allow you to be selective and intercept only +some of the traffic to a service while not interfering with the rest +of the traffic. This allows you to share a cluster with others on your +team without interfering with their work. + +Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas. +To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions). + + + + +In the illustration above, **orange** +requests are being made by Developer 2 on their laptop and the +**green** requests are made by a teammate, +Developer 1, on a different laptop. + +Each developer can intercept the Orders service for their requests only, +while sharing the rest of the development environment. + + + +### Creating and using personal intercepts + + 1. Creating the intercept: Intercept your service from your CLI: + + ```shell + telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b + ``` + + We're using + `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the + header for the sake of the example, but you can use any + `key=value` pair you want, or `--http-header=auto` to have it + choose something automatically. + + + + Make sure your current kubectl context points to the target + cluster. If your service is running in a different namespace than + your current active context, use or change the `--namespace` flag. + + + + 2. Using the intercept: Send requests to your service by passing the + HTTP header: + + ```http + Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b + ``` + + + + Need a browser extension to modify or remove HTTP request headers? + + Chrome + {' '} + Firefox + + + + 3. Using the intercept: Send requests to your service without the + HTTP header: + + Requests without the header will be sent to the version of your + service that is running in the cluster. This enables you to share + the cluster with a team! + +### Intercepting a specific endpoint + +It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an +intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx` +flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept +and, unlike the `--http-header` flag, it cannot be repeated.
 + +The following flags are available: + +| Flag | Meaning | +|-------------------------------|------------------------------------------------------------------| +| `--http-path-equal <path>` | Only intercept the endpoint for this exact path | +| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix | +| `--http-path-regex <regex>` | Only intercept endpoints that match the given regular expression | + +#### Examples: + +1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api": + + ```shell + telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob + ``` + +2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version": + + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto + ``` + or, since `--http-header=auto` is implied when other `--http` options are used, just: + ```shell + telepresence intercept SERVICENAME --http-path-equal=/api/version + ``` + +3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*": + + ```shell + telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*' + ``` + + + diff --git a/docs/telepresence/2.7/doc-links.yml b/docs/telepresence/2.7/doc-links.yml new file mode 100644 index 000000000..427486bc5 --- /dev/null +++ b/docs/telepresence/2.7/doc-links.yml @@ -0,0 +1,102 @@ +- title: Quick start + link: quick-start +- title: Install Telepresence + items: + - title: Install + link: install/ + - title: Upgrade + link: install/upgrade/ + - title: Install Traffic Manager + link: install/manager/ + - title: Install Traffic Manager with Helm + link: install/helm/ + - title: Cloud Provider Prerequisites + link: install/cloud/ + - title: Migrate from legacy Telepresence + link: install/migrate-from-legacy/ +- title: Core concepts + items: + - title: The changing development workflow + link: concepts/devworkflow + - title: The developer experience and the inner dev loop + link: concepts/devloop + - title: 'Making the remote local: Faster feedback, collaboration and debugging' + link: concepts/faster + - title: Context propagation + link: concepts/context-prop + - title: Types of intercepts + link: concepts/intercepts +- title: How do I...
+ items: + - title: Intercept a service in your own environment + link: howtos/intercepts + - title: Share dev environments with preview URLs + link: howtos/preview-urls + - title: Proxy outbound traffic to my cluster + link: howtos/outbound + - title: Host a cluster in a local VM + link: howtos/cluster-in-vm + - title: Send requests to an intercepted service + link: howtos/request +- title: Telepresence for Docker + items: + - title: What is Telepresence for Docker + link: extension/intro + - title: Install into Docker-Desktop + link: extension/install + - title: Intercept into a Docker Container + link: extension/intercept +- title: Telepresence for CI + items: + - title: GitHub Actions + link: ci/github-actions +- title: Technical reference + items: + - title: Architecture + link: reference/architecture + - title: Client reference + link: reference/client + items: + - title: login + link: reference/client/login + - title: Laptop-side configuration + link: reference/config + - title: Cluster-side configuration + link: reference/cluster-config + - title: Using Docker for intercepts + link: reference/docker-run + - title: Running Telepresence in a Docker container + link: reference/inside-container + - title: Environment variables + link: reference/environment + - title: Intercepts + link: reference/intercepts/ + items: + - title: Manually injecting the Traffic Agent + link: reference/intercepts/manual-agent + - title: Volume mounts + link: reference/volume + - title: RESTful API service + link: reference/restapi + - title: DNS resolution + link: reference/dns + - title: RBAC + link: reference/rbac + - title: Telepresence and VPNs + link: reference/vpn + - title: Networking through Virtual Network Interface + link: reference/tun-device + - title: Connection Routing + link: reference/routing + - title: Using Telepresence with Linkerd + link: reference/linkerd +- title: FAQs + link: faqs +- title: Troubleshooting + link: troubleshooting +- title: Community + link: community +- title: Release Notes + link: release-notes +- title: Licenses + link: licenses diff --git a/docs/telepresence/2.7/extension/install.md b/docs/telepresence/2.7/extension/install.md new file mode 100644 index 000000000..471752775 --- /dev/null +++ b/docs/telepresence/2.7/extension/install.md @@ -0,0 +1,39 @@ +--- +title: "Telepresence for Docker installation and connection guide" +description: "Learn how to install and update Ambassador Labs' Telepresence for Docker." +indexable: true +--- + +# Install and connect the Telepresence Docker extension + +[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes. + +## Install Telepresence for Docker + +Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker: + +1. Open Docker Desktop. +2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar. +3. In the Extensions Marketplace, search for the Ambassador Telepresence extension. +4. Click **Install**. + +## Connect to Ambassador Cloud through the Telepresence extension + + After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud. + + 1. 
Click the Telepresence extension in Docker Desktop, then click **Get Started**. + + 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window. + + 3. Sign in with your Google, GitHub, or GitLab account. + Ambassador Cloud opens to your profile and displays the API key. + + 4. Copy the API key and paste it into the API key field in the Docker Dashboard. + +## Connect to your cluster in Docker Desktop + + 1. Select the desired cluster from the dropdown menu and click **Next**. + This cluster is now set as kubectl's current context. + + 2. Click **Connect to [your cluster]**. + Your cluster is connected and you can now create [intercepts](../intercept/). \ No newline at end of file diff --git a/docs/telepresence/2.7/extension/intercept.md b/docs/telepresence/2.7/extension/intercept.md new file mode 100644 index 000000000..3868407a8 --- /dev/null +++ b/docs/telepresence/2.7/extension/intercept.md @@ -0,0 +1,48 @@ +--- +title: "Create an intercept with Telepresence for Docker" +description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug your service locally." +indexable: true +--- + +# Create an intercept + +With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop. + +## Prerequisites + +Before you begin, you need: +- [Docker Desktop](https://www.docker.com/products/docker-desktop). +- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install). +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool. + +This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop. + +## Copy the service you want to intercept + +Once you have [installed and connected](../install/) the Telepresence extension, you need to copy the service. To do this, use the `docker run` command with the following flags (the `<image>` placeholder stands for the image of the service you want to run locally): + + ```console + $ docker run --rm -it --network host <image> + ``` + +The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster. + +## Intercept a service + +In Docker Desktop, the Telepresence extension shows all the services in the namespace. + + 1. Choose a service to intercept and click the **Intercept** button. + + 2. Select the service port for the intercept from the dropdown. + + 3. Enter the target port of the service copy you are running in the Docker container. + + 4. Click **Submit** to create the intercept. + +The intercept now shows up in the Docker Telepresence extension. + +## Test your code + +Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container. + +Click the globe icon next to your intercept to get the preview URL. 
From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or share the preview URL with teammates so they can review your work. \ No newline at end of file diff --git a/docs/telepresence/2.7/extension/intro.md b/docs/telepresence/2.7/extension/intro.md new file mode 100644 index 000000000..6a653ae06 --- /dev/null +++ b/docs/telepresence/2.7/extension/intro.md @@ -0,0 +1,29 @@ +--- +title: "Telepresence for Docker introduction" +description: "Learn about the Telepresence extension for Docker." +indexable: true +--- + +# Telepresence for Docker + +Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop. + +## What is the Telepresence extension for Docker? + +The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network. + +## What does the Telepresence Docker extension do? + +Telepresence for Docker is designed to simplify your coding experience and help you test your code faster. Traditionally, you need to build a container in Docker with your code changes, push it to a registry, wait for it to upload, deploy the changes, verify them, and repeat that process as you continue to test your changes. This makes for a slow and cumbersome process when you need to test changes continually. + +With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate it entirely within the Docker runtime, you can make changes without root permission on your machine. + +## How does Telepresence for Docker work? + +The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux). + +Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network. + +## What do I need to begin? + +All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). diff --git a/docs/telepresence/2.7/faqs.md b/docs/telepresence/2.7/faqs.md new file mode 100644 index 000000000..3c37f1cc5 --- /dev/null +++ b/docs/telepresence/2.7/faqs.md @@ -0,0 +1,124 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +** Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. 
The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access with additional developers or stakeholders to the application via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes. + +** What operating systems does Telepresence work on?** + +Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview. + +** What protocols can be intercepted by Telepresence?** + +All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it. + +** When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +** When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +** When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via their namespace qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means that if you run `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
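 + +For example, a quick way to see both resolution modes in action from a connected laptop (the service and namespace names here are hypothetical): + +```shell +telepresence connect + +# Reach a service anywhere in the cluster by its namespace-qualified name. +curl web-app.emojivoto:80 + +# After intercepting the service in its namespace, the short name resolves too. +telepresence intercept web-app --namespace emojivoto --port 8080 +curl web-app:80 +```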
 + +** When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database. + +** What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +During first use, Telepresence will discover your ingress configuration, make a best guess at the correct values, and ask you to confirm or update them. + +** Why are my intercepts still reporting as active when they've been disconnected?** + + In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time. + +** Why is my intercept associated with an "Unreported" cluster?** + + Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources. + +** Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence. + +** Why does running Telepresence require sudo access for the local daemon?** + +The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster. + +On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access. + +** What components get installed in the cluster when running Telepresence?** + +A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted all pods associated with this workload will be restarted with the Traffic Agent automatically injected. + +** How can I remove all of the Telepresence components installed within my cluster?** + +You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted. + +Running this command will also stop the local daemon running. + +** What language is Telepresence written in?** + +All components of Telepresence, both the local application and the cluster-side components, are written in Go.
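 + +If you want to see those components for yourself, a quick inspection from the command line might look like this (a sketch, assuming the default `ambassador` namespace and an intercepted workload in `default`): + +```shell +# The Traffic Manager deployment and service live in the ambassador namespace. +kubectl get deploy,svc traffic-manager --namespace ambassador + +# Intercepted pods gain an extra traffic-agent container; list containers per pod. +kubectl get pods --namespace default \ + -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}' +```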
 + +** How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established by using +the `kubectl port-forward` machinery (though without actually spawning +a separate program) to establish a TCP connection to the Telepresence +Traffic Manager in the cluster, and running Telepresence's custom VPN +protocol over that TCP connection. + + + +** What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those. + +** Is Telepresence open source?** + +Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence). + +** How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/2.7/howtos/cluster-in-vm.md b/docs/telepresence/2.7/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/2.7/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine." +--- + +# Network considerations for locally hosted clusters + +## The problem +Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes. + +### Example: +A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again. + +Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections. + +## Solution + +### Create a bridge network +A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
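 + +As a concrete illustration of what such a change looks like, here is how one might switch a VirtualBox guest's first NIC to bridged mode from the host (the guest name `k3s-server` and host device `wlp5s0` are hypothetical, and the VM must be powered off first): + +```shell +# Reattach the guest's first NIC to a bridged adapter on the host's device. +VBoxManage modifyvm "k3s-server" --nic1 bridged --bridgeadapter1 wlp5s0 +```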
 + +To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details on how to configure the bridge depend on what type of virtualization solution you're using. + +### Vagrant + VirtualBox + k3s example +Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy if you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running. + +#### Vagrantfile +```ruby +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# bridge is the name of the host's default network device +$bridge = 'wlp5s0' + +# default_route should be the IP of the host's default route. +$default_route = '192.168.1.1' + +# nameserver must be the IP of an external DNS, such as 8.8.8.8 +$nameserver = '8.8.8.8' + +# server_name should also be added to the host's /etc/hosts file and point to the server_ip +# for easy access when pushing docker images +server_name = 'multi' + +# static IPs for the server and agents. Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides.
+ network_script = <<-SHELL + sudo -i + ip route delete default >/dev/null 2>&1 || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: "KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/2.7/howtos/intercepts.md b/docs/telepresence/2.7/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.7/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally. + +For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. 
OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and test the connection to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service <service name> --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`. + * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod. + The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster are now routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`. + ```console + $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env + Using Deployment example-service + intercepted + Intercept name: example-service + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Intercepting : all TCP connections + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + The following are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+ * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Query the environment in which you intercepted the service and verify that your local instance is being invoked. + All the traffic previously routed to your Kubernetes service is now routed to your local environment. + +You can now: +- Make changes on the fly and see them reflected when interacting with + your Kubernetes environment. +- Query services only exposed in your cluster's network. +- Set breakpoints in your IDE to investigate bugs. + + + + **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept. + + diff --git a/docs/telepresence/2.7/howtos/outbound.md b/docs/telepresence/2.7/howtos/outbound.md new file mode 100644 index 000000000..d1a9676a9 --- /dev/null +++ b/docs/telepresence/2.7/howtos/outbound.md @@ -0,0 +1,89 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Proxy outbound traffic to my cluster + +While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster. + + This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running. + +## Proxying outbound traffic + +Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name. + +When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect` and enter your password to run the daemon. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.3.7 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''" + [sudo] password: + Connecting to traffic manager... + Connected to context default (https://<cluster-public-IP>) + ``` + +2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic. + + ``` + $ telepresence status + Root Daemon: Running + Version : v2.3.7 (api 3) + Primary DNS : "" + Fallback DNS: "" + User Daemon: Running + Version : v2.3.7 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https://<cluster-public-IP> + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 0 total + ``` + +3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster. + + ``` + $ curl web-app.emojivoto:80 + + + + + Emoji Vote + ... + ``` + +If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+ + ``` + $ telepresence quit + Telepresence Daemon quitting...done + ``` + +When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide. + +## Controlling outbound connectivity + +By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma separated list of namespaces>` flag to control which namespaces are accessible. + +When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept. + +### Using local-only intercepts + +When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed. This is because services that aren't exposed through ingress controllers require connectivity to those services. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish the correct origin; the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`. + +To control outbound connectivity to specific namespaces, add the `--local-only` flag: + + ``` + $ telepresence intercept <name of intercept> --namespace <namespace> --local-only + ``` +The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. +You can deactivate the intercept with `telepresence leave <name of intercept>`. This removes unqualified name access. + +### Proxy outbound connectivity for laptops + +To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxy) for more details. diff --git a/docs/telepresence/2.7/howtos/preview-urls.md b/docs/telepresence/2.7/howtos/preview-urls.md new file mode 100644 index 000000000..15a1c5181 --- /dev/null +++ b/docs/telepresence/2.7/howtos/preview-urls.md @@ -0,0 +1,127 @@ +--- +title: "Share dev environments with preview URLs | Ambassador" +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Share development environments with preview URLs + +Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual. + +Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators. + +## Creating a preview URL + +1. 
Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed. +Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service. + +2. Enter `telepresence login` to launch Ambassador Cloud in your browser. + + If you are in an environment where Telepresence cannot launch in a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/). + +3. Start the intercept with `telepresence intercept <service-name> --port <port> --env-file <path-to-env-file>` and adjust the flags as follows: + * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod. + +4. Answer the question prompts. + * **What's your ingress' IP address?**: the IP address or DNS name of your ingress controller (this is usually a "service.namespace" DNS name). + * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports. + * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port. + * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here. + + The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`: + + ```console +$ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env + + To create a preview URL, telepresence needs to know how cluster + ingress works for this service. Please Confirm the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: -]: ambassador.ambassador + + 2/4: What's your ingress' TCP port number? + + [default: -]: 443 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: y + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: ambassador.ambassador]: dev-environment.edgestack.me + + Using deployment example-service + intercepted + Intercept name : example-service + State : ACTIVE + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service") + Preview URL : https://<random-subdomain>.preview.edgestack.me + Layer 5 Hostname : dev-environment.edgestack.me + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + Here are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. 
For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Go to the Preview URL generated from the intercept. +Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress. + + + Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation. + + +7. Make a request on the URL you would usually query for that environment. This request should not be routed to your laptop. + + Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal. + +8. Share with a teammate. + + You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop. + You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment. + +## Sharing a preview URL with people outside your team + +To collaborate with someone outside of your identity provider's organization, log in to [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL. + +## Change access restrictions + +To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible. + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Select the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Make Publicly Accessible**. + +Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. + +## Remove a preview URL from an Intercept + +To delete a preview URL and remove all access to the intercepted service, + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/) +2. Click on the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Remove Preview**.
 + +Alternatively, you can remove a preview URL with the following command: +`telepresence preview remove <intercept-name>` diff --git a/docs/telepresence/2.7/howtos/request.md b/docs/telepresence/2.7/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.7/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). + 2. Navigate to the desired service Intercepts page. + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation. diff --git a/docs/telepresence/2.7/images/container-inner-dev-loop.png b/docs/telepresence/2.7/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.7/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.7/images/docker-header-containers.png b/docs/telepresence/2.7/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.7/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.7/images/github-login.png b/docs/telepresence/2.7/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.7/images/github-login.png differ diff --git a/docs/telepresence/2.7/images/logo.png b/docs/telepresence/2.7/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.7/images/logo.png differ diff --git a/docs/telepresence/2.7/images/split-tunnel.png b/docs/telepresence/2.7/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.7/images/split-tunnel.png differ diff --git a/docs/telepresence/2.7/images/tracing.png b/docs/telepresence/2.7/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.7/images/tracing.png differ diff --git a/docs/telepresence/2.7/images/trad-inner-dev-loop.png b/docs/telepresence/2.7/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.7/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.7/images/tunnelblick.png b/docs/telepresence/2.7/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.7/images/tunnelblick.png differ diff --git a/docs/telepresence/2.7/images/vpn-dns.png b/docs/telepresence/2.7/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.7/images/vpn-dns.png differ diff --git a/docs/telepresence/2.7/install/cloud.md b/docs/telepresence/2.7/install/cloud.md new file mode 100644 index 000000000..9bcf9e63e --- /dev/null +++ b/docs/telepresence/2.7/install/cloud.md @@ -0,0 +1,43 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the 
Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. +For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`: + +```bash +$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock + masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28 + +$ gcloud compute firewall-rules list \ + --filter 'name~^gke-tele-webhook-gke' \ + --format 'table( + name, + network, + direction, + sourceRanges.list():label=SRC_RANGES, + allowed[].map().firewall_rule().list():label=ALLOW, + targetTags.list():label=TARGET_TAGS + )' + +NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS +gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node +# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node + +$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \ + --action ALLOW \ + --direction INGRESS \ + --source-ranges 172.16.0.0/28 \ + --rules tcp:8443 \ + --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False +``` diff --git a/docs/telepresence/2.7/install/helm.md b/docs/telepresence/2.7/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.7/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. 
+ +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. 
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. + subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence/2.7/install/index.md b/docs/telepresence/2.7/install/index.md new file mode 100644 index 000000000..624cb33d6 --- /dev/null +++ b/docs/telepresence/2.7/install/index.md @@ -0,0 +1,153 @@ +import Platform from '@src/components/Platform'; + +# Install + +Install Telepresence by running the commands below for your OS. 
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the versions you want.
+
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+ diff --git a/docs/telepresence/2.7/install/manager.md b/docs/telepresence/2.7/install/manager.md new file mode 100644 index 000000000..9a747d895 --- /dev/null +++ b/docs/telepresence/2.7/install/manager.md @@ -0,0 +1,53 @@ +# Install/Uninstall the Traffic Manager
+
+Telepresence uses a traffic manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the traffic manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The Telepresence CLI can install the traffic manager for you. The basic install deploys the same version as the client being used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).
+
+1. Create a values.yaml file with your config values.
+
+2. Run the install command with the `--values` flag set to the path of your values file:
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
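+
+A minimal sketch of that flow, assuming you only want to tweak the manager's RBAC behavior (the `managerRbac` keys are documented in the chart README; treat the exact values here as illustrative):
+
+```shell
+# Write a small values file, then pass it to the installer.
+cat > values.yaml <<EOF
+managerRbac:
+  create: true
+  namespaced: false
+EOF
+telepresence helm install --values values.yaml
+```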
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the `--upgrade` flag:
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+## Uninstall
+
+The Telepresence CLI can uninstall the traffic manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ``` diff --git a/docs/telepresence/2.7/install/migrate-from-legacy.md b/docs/telepresence/2.7/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.7/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+runs it.
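+
+As another sketch of the same translation layer (reusing the hypothetical `myserver` deployment, following the mapping table below):
+
+```shell
+# Legacy command:
+telepresence --swap-deployment myserver --run-shell
+# ...roughly translates to:
+telepresence intercept myserver -- bash
+```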
+This way, you can get started with Telepresence today using the commands you are already used to,
+and learn the Telepresence syntax as you go.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                       | Telepresence Command                       |
+|--------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                       | intercept $workload                        |
+| --expose localPort[:remotePort]                   | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell           | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd            | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd     | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                       | connect -- bash                            |
+| --run $cmd                                        | connect -- $cmd                            |
+| --env-file,--env-json                             | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                             | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                            | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as `--method`,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place afterward. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent |--all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence/2.7/install/upgrade.md b/docs/telepresence/2.7/install/upgrade.md new file mode 100644 index 000000000..97359cef8 --- /dev/null +++ b/docs/telepresence/2.7/install/upgrade.md @@ -0,0 +1,84 @@ +---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+Please see [What's new in 2.7.0](../../new-in-2.7) for more info.
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available.
+Running the same commands used for installation will replace your current binary with the latest version.
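+
+Before and after upgrading, you can confirm which version you are running (a quick check; `telepresence version` is the same command referenced elsewhere in these docs):
+
+```shell
+# Print the version of the Telepresence CLI (and of the daemons, when connected).
+telepresence version
+```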
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -ru`.
+
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster. diff --git a/docs/telepresence/2.7/new-in-2.7.md b/docs/telepresence/2.7/new-in-2.7.md new file mode 100644 index 000000000..6bced03e4 --- /dev/null +++ b/docs/telepresence/2.7/new-in-2.7.md @@ -0,0 +1,10 @@ +# What’s new in Telepresence 2.7.0?
+
+## Distributed tracing
+- Telepresence components now automatically pass OTEL headers to one another and keep trace data in memory
+- Traces can be collected into a zip file with `telepresence gather-traces` and uploaded to an OTEL collector with `telepresence upload-traces`
+
+## Helm install improvements
+- The traffic manager is installed with `telepresence helm install`. It must be installed before connecting or creating intercepts.
+- The traffic manager can be configured using a values file.
+- The command `telepresence uninstall` has been moved to `telepresence helm uninstall` diff --git a/docs/telepresence/2.7/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.7/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.7/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds. + Accelerate your inner development loop with hot reload using your + existing IDE, and workflow. +

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.7/quick-start/demo-node.md b/docs/telepresence/2.7/quick-start/demo-node.md new file mode 100644 index 000000000..4b4d71308 --- /dev/null +++ b/docs/telepresence/2.7/quick-start/demo-node.md @@ -0,0 +1,152 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+
+ Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs make this kind of collaboration easy.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
+Now you're able to share your fix in your local environment with your team!
+
+
+
+ To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+
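+
+If you'd rather script a check than use the browser, a sketch (the subdomain is a placeholder for the URL generated for your intercept):
+
+```shell
+# The preview URL routes through Ambassador Cloud to the fixed copy on your laptop;
+# substitute the URL that was printed when the preview intercept was created.
+curl -L https://<your-preview-subdomain>.preview.edgestack.me
+```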
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.7/quick-start/demo-react.md b/docs/telepresence/2.7/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.7/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes, you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. 
List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of our changes.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨  Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster.
+
+
+ Victory, your local React server is running a-ok!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+
+ This command must be run in the terminal window where you ran the script, because the script sets environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+
+ The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+
+
+
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.7/quick-start/go.md b/docs/telepresence/2.7/quick-start/go.md new file mode 100644 index 000000000..5ccf26aeb --- /dev/null +++ b/docs/telepresence/2.7/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+ If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. In order to fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` import by deleting line 5.
+
+   ```go
+   3  import (
+   4   "context"
+   5   "fmt" // DELETE THIS LINE
+   6
+   7   pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac, or `Menu -> File -> Save`) and verify that the error is now fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic. The `voting-svc` traffic is now routed to the local Dockerized version of the service: *all* traffic to `voting-svc` goes to your local copy, which contains the fix.
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
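+
+You can confirm the intercept from inside the container at any time with a quick status check:
+
+```shell
+# List workloads in the current namespace and show which ones are intercepted.
+telepresence list
+```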
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately, because you have total control over which traffic is handled through your service, all thanks to the preview URL.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster.
+
+
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
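+
+When you're finished experimenting, you can clean up your session (an optional sketch; both commands are standard CLI commands used elsewhere in these docs):
+
+```shell
+# End the active intercept and disconnect from the cluster.
+telepresence leave voting
+telepresence quit
+```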
+ +## What's Next? + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.7/quick-start/index.md b/docs/telepresence/2.7/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.7/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.7/quick-start/qs-cards.js b/docs/telepresence/2.7/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+ + + + + + Collaborating + + + + Use preview URLS to collaborate with your colleagues and others + outside of your organization. + + + + + + + + Outbound Sessions + + + + While connected to the cluster, your laptop can interact with + services as if it was another pod in the cluster. + + + + + + + + FAQs + + + + Learn more about uses cases and the technical implementation of + Telepresence. + + + + +
+ ); +} diff --git a/docs/telepresence/2.7/quick-start/qs-go.md b/docs/telepresence/2.7/quick-start/qs-go.md new file mode 100644 index 000000000..2e140f6a7 --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to support auto-reloading of the Go server, which we'll use later. Confirm it is installed by running: + `go get github.com/pilu/fresh` + Then start the Go server: + `$GOPATH/bin/fresh` + + ``` + $ go get github.com/pilu/fresh + + $ $GOPATH/bin/fresh + + ... + 10:23:41 app | Welcome to the DataProcessingGoService! + ``` + + + Install Go from here and set your GOPATH if needed. + + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Go server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file, and the Go server will auto-reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
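As a quick sanity check that the intercept and the reloaded code are both in effect, you can hit the `color` endpoint once more from your laptop. The transcript below is illustrative only, and it assumes the in-cluster service listens on the same port 3000 the sample uses locally; under that assumption, the cluster-side name and `localhost` should now return the same new value:

```
# Local Go server, after Fresh has reloaded main.go:
$ curl localhost:3000/color
"orange"

# Cluster-side name: resolved by Telepresence's DNS, then rerouted
# back to your laptop by the active intercept:
$ curl dataprocessingservice.default:3000/color
"orange"
```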
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL, meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default` + Then, when asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.7/quick-start/qs-java.md b/docs/telepresence/2.7/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME                                     READY   STATUS    RESTARTS   AGE + verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s + verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s + dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused, then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file, then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
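Because this guide restarts the Java server rather than hot-reloading it, it is natural to wonder whether the intercept survived the restart. It does: intercepts live in the cluster (in Telepresence's Traffic Manager and the Traffic Agent injected next to the pod), not in your local process. If you want to reassure yourself, `telepresence list` shows which workloads are currently intercepted; the abridged transcript below is illustrative, and exact formatting varies by Telepresence version:

```
$ telepresence list

dataprocessingservice: intercepted
    Intercept name: dataprocessingservice
    State         : ACTIVE
    Destination   : 127.0.0.1:3000
```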
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL, meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default` + Then, when asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.7/quick-start/qs-node.md b/docs/telepresence/2.7/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java, Python (Flask), and Python (FastAPI) if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME                                     READY   STATUS    RESTARTS   AGE + verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s + verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s + dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused, then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file, and the Node server will auto-reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
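Since Telepresence is still connected, the cluster DNS resolution highlighted earlier in this guide keeps working while the intercept is active, which gives you a neat end-to-end check from a local terminal. The command below is illustrative and assumes the in-cluster service listens on the same port 3000 as your local copy:

```
# Resolved through Telepresence's cluster DNS, then rerouted by
# the intercept back to the Node server on your laptop:
$ curl dataprocessingservice.default:3000/color
"orange"
```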
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL, meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default` + Then, when asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.7/quick-start/qs-python-fastapi.md b/docs/telepresence/2.7/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME                                     READY   STATUS    RESTARTS   AGE + verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s + verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s + dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused, then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+Note that FastAPI requires Python 3: +`pip install fastapi uvicorn requests && python app.py` +(If `python` and `pip` on your system still point to Python 2, use `pip3 install fastapi uvicorn requests && python3 app.py` instead.) + + ``` + $ pip install fastapi uvicorn requests && python app.py + + Collecting fastapi + ... + Application startup complete. + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local service is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file, and the Python server will auto-reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
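Before returning to the browser, you can also confirm the auto-reload took effect from the same terminal you used in step 4 of the local setup; once the server has restarted with the new `DEFAULT_COLOR`, the endpoint should report the new value. The output shown is illustrative:

```
# Same check as before, now returning the edited color:
$ curl localhost:3000/color
"orange"
```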
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL, meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default` + Then, when asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.7/quick-start/qs-python.md b/docs/telepresence/2.7/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.7/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater; check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME                                     READY   STATUS    RESTARTS   AGE + verylargedatastore-855c8b8789-z8nhs      1/1     Running   0          78s + verylargejavaservice-7dfddbc95c-696br    1/1     Running   0          78s + dataprocessingservice-5f6bfdcf7b-qvd27   1/1     Running   0          79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused, then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server. +Python 2.x: `pip install flask requests && python app.py` +Python 3.x: `pip3 install flask requests && python3 app.py` + + ``` + $ pip install flask requests && python app.py + + Collecting flask + ... + Welcome to the DataServiceProcessingPythonService! + ... + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Python server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file, and the Python server will auto-reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
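
You can also confirm the new color from the terminal. As a quick sanity check (assuming the local server from step 4 is still running and the intercept is still active), curl the color endpoint again; it should now return the new value:

```
$ curl localhost:3000/color

"orange"
```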

## 7. Create a Preview URL

Create a personal intercept with a preview URL, meaning that only
traffic coming from the preview URL will be intercepted. This makes it
easy to share the services you’re working on with your teammates.

1. Clean up your previous intercept by removing it:
`telepresence leave dataprocessingservice`

2. Log in to Ambassador Cloud, a web interface for managing and
   sharing preview URLs:

   ```console
   $ telepresence login
   Launching browser authentication flow...

   Login successful.
   ```

   If you are in an environment where Telepresence cannot launch a
   local browser for you to interact with, you will need to pass the
   [`--apikey` flag to `telepresence
   login`](../../reference/client/login/).

3. Start the intercept again:
`telepresence intercept dataprocessingservice --port 3000`
   You will be asked for your ingress layer 3 address; specify the frontend service: `verylargejavaservice.default`.
   When asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   To create a preview URL, telepresence needs to know how requests enter
   your cluster. Please Select the ingress to use.

   1/4: What's your ingress' IP address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

          [default: dataprocessingservice.default]: verylargejavaservice.default

   2/4: What's your ingress' TCP port number?

          [default: 80]: 8080

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

          [default: n]:

   4/4: If required by your ingress, specify a different hostname
        (TLS-SNI, HTTP "Host" header) to be used in requests.

          [default: verylargejavaservice.default]:

   Using Deployment dataprocessingservice
   intercepted
       Intercept name  : dataprocessingservice
       State           : ACTIVE
       Workload kind   : Deployment
       Destination     : 127.0.0.1:3000
       Intercepting    : HTTP requests that match all of:
         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
       Preview URL     : https://.preview.edgestack.me
       Layer 5 Hostname: verylargejavaservice.default
   ```

4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.

5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.

Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!

   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.

## What's Next?
+ + diff --git a/docs/telepresence/2.7/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.7/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.7/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.7/redirects.yml b/docs/telepresence/2.7/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.7/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.7/reference/architecture.md b/docs/telepresence/2.7/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.7/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +

## Telepresence CLI

The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.

## Telepresence Daemons
Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the
cluster's network, handling intercepted traffic.

### User-Daemon
The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
All requests from and to the cluster go through this Daemon.

When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and
allows you to create intercepts on your local machine from Ambassador Cloud.

### Root-Daemon
The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
please refer to this blog post:
[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
relevant inbound and outbound traffic, and tracking active intercepts.

The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
deployment, and if it is missing, attempts to install it using its embedded Helm chart.

When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
URL, it forwards the request to the ingress service specified at the Preview URL creation.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
or `kubectl describe pod `.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
pod usually handling requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
domains from authorized users to the appropriate Traffic Manager.

Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing which users have
accessed them, and deleting them.

## Pod-Daemon

The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.

The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
should be run on, and to provide configuration similar to what would be passed when creating Telepresence intercepts from the command line.

After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.

The Pod-Daemon was created as a component of Deployment Previews. It automatically creates intercepts for development images built
by CI, so that changes from a pull request can be quickly visualized in a live cluster before they land: the intercept's Preview URL
is posted to the associated GitHub pull request.

See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews
or for a reference on how Pod-Daemon can be manually deployed to the cluster.


# Changes from Service Preview

With Ambassador's previous offering, Service Preview, the Traffic Agent had to be added to a pod manually via an
annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.

diff --git a/docs/telepresence/2.7/reference/client.md b/docs/telepresence/2.7/reference/client.md
new file mode 100644
index 000000000..491dbbb8e
--- /dev/null
+++ b/docs/telepresence/2.7/reference/client.md
@@ -0,0 +1,31 @@
---
description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
---

# Client reference

The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.

## Commands

A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
You can append `--help` to each command below to get even more information about its usage.

| Command | Description |
| --- | --- |
| `connect` | Starts the local daemon and connects Telepresence to your cluster, installing the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop was another pod (for example, curling a service by its name) |
| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs out of Ambassador Cloud |
| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
| `status` | Shows the current connectivity status |
| `quit` | Tells Telepresence daemons to quit |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service; run it followed by the service name to be intercepted and what port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a Docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names and namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
| `version` | Shows the version of the Telepresence CLI and the Traffic Manager (if connected) |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |

diff --git a/docs/telepresence/2.7/reference/client/login.md b/docs/telepresence/2.7/reference/client/login.md
new file mode 100644
index 000000000..ab4319a54
--- /dev/null
+++ b/docs/telepresence/2.7/reference/client/login.md
@@ -0,0 +1,61 @@
# Telepresence Login

```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```

## Description

Use `telepresence login` to explicitly authenticate with [Ambassador
Cloud](https://www.getambassador.io/docs/cloud).
Unless the
[`skipLogin` option](../../config) is set, other commands will
automatically invoke the `telepresence login` interactive login
procedure as necessary, so it is rarely necessary to run
`telepresence login` explicitly; you should only need to run it
explicitly when you require a non-interactive login.

The normal interactive login procedure involves launching a web
browser, a user interacting with that web browser, and finally having
the web browser make callbacks to the local Telepresence process. If
it is not possible to do this (perhaps you are using a headless remote
box via SSH, or are using Telepresence in CI), then you may instead
have Ambassador Cloud issue an API key that you pass to `telepresence
login` with the `--apikey` flag.

## Telepresence

When you run `telepresence login`, the CLI installs an enhanced
Telepresence binary. This enhanced free client replaces the [User
Daemon](../../architecture) and communicates with Ambassador Cloud to
provide freemium features, including the ability to create intercepts from
Ambassador Cloud.

## Acquiring an API key

1. Log in to Ambassador Cloud at https://app.getambassador.io/.

2. Click on your profile icon in the upper-left: ![Screenshot with the
   mouse pointer over the upper-left profile icon](./login/apikey-2.png)

3. Click on the "API Keys" menu button: ![Screenshot with the mouse
   pointer over the "API Keys" menu button](./login/apikey-3.png)

4. Click on the "generate new key" button in the upper-right:
   ![Screenshot with the mouse pointer over the "generate new key"
   button](./login/apikey-4.png)

5. Enter a description for the key (perhaps the name of your laptop,
   or "CI"), and click "generate api key" to create it.

You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.

Telepresence will use that "master" API key to create narrower keys
for different components of Telepresence. You will see these appear
in the Ambassador Cloud web interface.
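
For example, a headless environment such as a CI runner could authenticate non-interactively with a previously acquired key. The `AMBASSADOR_CLOUD_APIKEY` environment variable below is just an illustration (Telepresence does not read it itself), and the exact output may vary:

```
$ telepresence login --apikey="$AMBASSADOR_CLOUD_APIKEY"

Login successful.
```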
diff --git a/docs/telepresence/2.7/reference/client/login/apikey-2.png b/docs/telepresence/2.7/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.7/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.7/reference/client/login/apikey-3.png b/docs/telepresence/2.7/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.7/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.7/reference/client/login/apikey-4.png b/docs/telepresence/2.7/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.7/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.7/reference/cluster-config.md b/docs/telepresence/2.7/reference/cluster-config.md new file mode 100644 index 000000000..6751ae77d --- /dev/null +++ b/docs/telepresence/2.7/reference/cluster-config.md @@ -0,0 +1,299 @@ +import Alert from '@material-ui/lab/Alert'; +import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence'; + +# Cluster-side configuration + +For the most part, Telepresence doesn't require any special +configuration in the cluster and can be used right away in any +cluster (as long as the user has adequate [RBAC permissions](../rbac) +and the cluster's server version is `1.17.0` or higher). + +However, some advanced features do require some configuration in the +cluster. + +## TLS + +In this example, other applications in the cluster expect to speak TLS to your +intercepted application (perhaps you're using a service-mesh that does +mTLS). + +In order to use `--mechanism=http` (or any features that imply +`--mechanism=http`) you need to tell Telepresence about the TLS +certificates in use. + +Tell Telepresence about the certificates in use by adjusting your +[workload's](../intercepts/#supported-workloads) Pod template to set a couple of +annotations on the intercepted Pods: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret" # optional ++ "getambassador.io/inject-originating-tls-secret": "your-originating-secret" # optional + spec: ++ serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets" + containers: +``` + +- The `getambassador.io/inject-terminating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS server + certificate to use for decrypting and responding to incoming + requests. + + When Telepresence modifies the Service and workload port + definitions to point at the Telepresence Agent sidecar's port + instead of your application's actual port, the sidecar will use this + certificate to terminate TLS. + +- The `getambassador.io/inject-originating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS + client certificate to use for communicating with your application. + + You will need to set this if your application expects incoming + requests to speak TLS (for example, your + code expects to handle mTLS itself instead of letting a service-mesh + sidecar handle mTLS for it, or the port definition that Telepresence + modified pointed at the service-mesh sidecar instead of at your + application). 
  If you do set this, you should set it to the
  same client certificate Secret that you configure the Ambassador
  Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace
as the Pod.

The Pod will need to have permission to `get` and `watch` each of
those Secrets.

Telepresence understands `type: kubernetes.io/tls` Secrets and
`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

If your cluster is on an isolated network such that it cannot
communicate with Ambassador Cloud, then some additional configuration
is required to acquire a license key in order to use personal
intercepts.

### Create a license

1. 

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your
kubeconfig context is using the cluster you want to create a license for, then
run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

   Cluster ID: 
   ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster
There are two separate ways you can add the license to your cluster: manually creating and deploying
the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1. Use this command to generate a Kubernetes Secret config using the license file:

   ```
   $ telepresence license -f 

   apiVersion: v1
   data:
     hostDomain: 
     license: 
   kind: Secret
   metadata:
     creationTimestamp: null
     name: systema-license
     namespace: ambassador
   ```

2. Save the output as a YAML file and apply it to your
cluster with `kubectl`.

3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
the following into a file (for this example we'll assume it's called `license-values.yaml`):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     secret:
       # This tells the helm chart not to create the secret since you've created it yourself
       create: false
   ```

4. Install the helm chart into the cluster

   ```
   helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml
   ```

5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
pulled and in a registry your cluster can pull from.

6. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agent.

#### Helm chart manages the secret

1. Get the JWT token from the downloaded license file

   ```
   $ cat ~/Downloads/ambassador.License_for_yourcluster
   eyJhbGnotarealtoken.butanexample
   ```

2. Create the following values file, substituting your real JWT token for the example one below
(for this example we'll assume it's placed in a file called `license-values.yaml`):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     # This is the value from the license file you download. this value is an example and will not work
     value: eyJhbGnotarealtoken.butanexample
     secret:
       # This tells the helm chart to create the secret
       create: true
   ```

3. 
Install the helm chart into the cluster

   ```
   helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml
   ```

Users will now be able to use preview intercepts with the
`--preview-url=false` flag. Even with the license key, preview URLs
cannot be used without enabling direct communication with Ambassador
Cloud, as Ambassador Cloud is essential to their operation.

If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.

Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
air-gapped environment.

## Mutating Webhook

Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
and in sync as far as GitOps workflows (such as ArgoCD) are concerned.

The injection will happen on demand, the first time an attempt is made to intercept the workload.

If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
annotation to your workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: disabled
     spec:
       containers:
```

### Service Name and Port Annotations

Telepresence will automatically find all services and all ports that will connect to a workload and make them available
for an intercept, but you can explicitly define that only one service and/or port can be intercepted.

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
+        telepresence.getambassador.io/inject-service-name: my-service
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```

### Ignore Certain Volume Mounts

The annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.

```diff
 spec:
   template:
     metadata:
       annotations:
+        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
     spec:
       containers:
```

### Note on Numeric Ports

If the targetPort of your intercepted service is pointing at a port number, in addition to
injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.

Note that this initContainer requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+ + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.7/reference/config.md b/docs/telepresence/2.7/reference/config.md new file mode 100644 index 000000000..0ee52c13a --- /dev/null +++ b/docs/telepresence/2.7/reference/config.md @@ -0,0 +1,284 @@ +# Laptop-side configuration + +## Global Configuration +Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. The location of this file varies based on your OS: + +* macOS: `$HOME/Library/Application Support/telepresence/config.yml` +* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml` +* Windows: `%APPDATA%\telepresence\config.yml` + +For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence. + +### Values + +The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys. + +Here is an example configuration to show you the conventions of how Telepresence is configured: +**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist** + +```yaml +timeouts: + agentInstall: 1m + intercept: 10s +logLevels: + userDaemon: debug +images: + registry: privateRepo # This overrides the default docker.io/datawire repo + agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting +cloud: + refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week. +grpc: + maxReceiveSize: 10Mi +telepresenceAPI: + port: 9980 +``` + +#### Timeouts + +Values for `timeouts` are all durations either as a number of seconds +or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings +can be fractional (`1.5h`) or combined (`2h45m`). 
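
As an illustration of the accepted duration forms (the values below are hypothetical, not recommendations), the three `intercept` lines would all denote the same 90-second timeout:

```yaml
timeouts:
  agentInstall: 120   # plain number of seconds
  intercept: 90s      # string with a unit suffix
# intercept: 1.5m     # fractional string, same duration
# intercept: 1m30s    # combined string, same duration
```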

These are the valid fields for the `timeouts` key:

| Field                   | Description                                                                         | Type                                                                                                     | Default    |
|-------------------------|-------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|------------|
| `agentInstall`          | Waiting for Traffic Agent to be installed                                           | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes  |
| `apply`                 | Waiting for a Kubernetes manifest to be applied                                     | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute   |
| `clusterConnect`        | Waiting for cluster to be connected                                                 | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `intercept`             | Waiting for an intercept to become active                                           | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds  |
| `proxyDial`             | Waiting for an outbound connection to be established                                | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds  |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards                    | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `trafficManagerAPI`     | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful  | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
| `helm`                  | Waiting for Helm operations (e.g. `install`) on the Traffic Manager                 | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes  |

#### Log Levels

Values for the `logLevels` fields are one of the following strings,
case-insensitive:

 - `trace`
 - `debug`
 - `info`
 - `warning` or `warn`
 - `error`
 - `fatal`
 - `panic`

For whichever log-level you select, you will get logs labeled with that level and of higher severity.
(e.g. if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`.)

These are the valid fields for the `logLevels` key:

| Field        | Description                                                          | Type                                        | Default |
|--------------|----------------------------------------------------------------------|---------------------------------------------|---------|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log)  | [loglevel][logrus-level] [string][yaml-str] | debug   |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log)    | [loglevel][logrus-level] [string][yaml-str] | info    |

#### Images
Values for `images` are strings. These values affect the objects that are deployed in the cluster,
so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm) to prevent them
from being overridden by a client's config, and use the [mutating webhook](../cluster-config/#mutating-webhook)
to handle installation of the `traffic-agents`.
+ +These are the valid fields for the `images` key: + +| Field | Description | Type | Default | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------| +| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. 
You will see each message at most once within the duration given by this config. | [duration][go-duration] [string][yaml-str] | 168h |
| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |

Telepresence attempts to auto-detect whether the cluster is capable of
communicating with Ambassador Cloud, but in cases where only the on-laptop client needs to communicate with
Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled
as well, or would like Telepresence not to attempt to communicate with
Ambassador Cloud at all (even for the auto-detection), then be sure to
set the `skipLogin` value to `true`.

Reminder: To use personal intercepts, which normally require a login,
you must have a license key in your cluster and specify which
`agentImage` should be installed by also adding the following to your
`config.yml`:

```yaml
images:
  agentImage: /
```

#### Grpc
The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server
The `telepresenceAPI` key controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### Daemons

`daemons` controls which binary to use for the user daemon. By default it will
use the Telepresence binary. For example, this can be used to tell Telepresence to
use the Telepresence Pro binary.

| Field              | Description                                                  | Type               | Default                             |
|--------------------|--------------------------------------------------------------|--------------------|-------------------------------------|
| `userDaemonBinary` | The path to the binary you want to use for the User Daemon.  | [string][yaml-str] | The path to Telepresence executable |


## Per-Cluster Configuration
Some configuration is not global to Telepresence and is instead specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.

### Values
The current per-cluster configuration supports `dns`, `alsoProxy`, and `manager` keys.
To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so:

```
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
        also-proxy:
        manager:
  name: example-cluster
```
#### DNS
The fields for `dns` are: `local-ip`, `remote-ip`, `exclude-suffixes`, `include-suffixes`, and `lookup-timeout`.

| Field              | Description                                                                                                                      | Type                                        | Default                                                                      |
|--------------------|-----------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|------------------------------------------------------------------------------|
| `local-ip`         | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved.   | IP address [string][yaml-str]               | first `nameserver` mentioned in `/etc/resolv.conf`                           |
| `remote-ip`        | The address of the cluster's DNS service.                                                                                         | IP address [string][yaml-str]               | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service  |
| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fall back, in the case of the overriding resolver)                      | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]`                            |
| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes.             | [sequence][yaml-seq] of [strings][yaml-str] | `[]`                                                                         |
| `lookup-timeout`   | Maximum time to wait for a cluster-side host lookup.                                                                              | [duration][go-duration] [string][yaml-str]  | 4 seconds                                                                    |

Here is an example kubeconfig:
```
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
          include-suffixes:
          - .se
          exclude-suffixes:
          - .com
  name: example-cluster
```


#### AlsoProxy

When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device.
All connections to addresses that the subnet spans will be dispatched to the cluster.

Here is an example kubeconfig for the subnet `1.2.3.4/32`:
```
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        also-proxy:
        - 1.2.3.4/32
  name: example-cluster
```

#### NeverProxy

When using `never-proxy`, you provide a list of subnets after the key in your kubeconfig file. These will never be routed via the
TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
Telepresence connects is the route they will keep.

Here is an example kubeconfig for the subnet `1.2.3.4/32`:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        never-proxy:
        - 1.2.3.4/32
  name: example-cluster
```

##### Using AlsoProxy together with NeverProxy

Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, normal routing precedence applies.
Usually this means that the most specific route will win.

So, for example, if an `also-proxy` subnet falls within a broader `never-proxy` subnet:

```yaml
never-proxy: [10.0.0.0/16]
also-proxy: [10.0.5.0/24]
```

Then the specific `also-proxy` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.

Conversely, if a `never-proxy` subnet is inside a larger `also-proxy` subnet:

```yaml
also-proxy: [10.0.0.0/16]
never-proxy: [10.0.5.0/24]
```

Then all of the `also-proxy` subnet `10.0.0.0/16` will be proxied, with the exception of the specific `never-proxy` subnet `10.0.5.0/24`.

#### Manager

The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45

diff --git a/docs/telepresence/2.7/reference/dns.md b/docs/telepresence/2.7/reference/dns.md
new file mode 100644
index 000000000..e38fbc61d
--- /dev/null
+++ b/docs/telepresence/2.7/reference/dns.md
@@ -0,0 +1,75 @@
# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name alone.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `.`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected, as Telepresence cannot reach the service yet by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  Emoji Vote
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
        'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  Emoji Vote
  ...
```

Now curling that service by its short name works, and it will continue to work for as long as the intercept is active.

The DNS resolver will always be able to resolve services using `.` regardless of intercepts.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.

diff --git a/docs/telepresence/2.7/reference/docker-run.md b/docs/telepresence/2.7/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.7/reference/docker-run.md
@@ -0,0 +1,31 @@
---
description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept --port --docker-run -- `

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.

`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`

## Ports

The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container ports can be different. This is done using `--port :`. The container port will default to the local port when using the `--port ` syntax.

## Flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
- `--env-file ` Loads the intercepted environment
- `--name intercept--` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-p ` The local port for the intercept and the container port
- `-v ` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info

diff --git a/docs/telepresence/2.7/reference/environment.md b/docs/telepresence/2.7/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.7/reference/environment.md
@@ -0,0 +1,46 @@
---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the local copy of the intercepted service that runs on your laptop.

There are three options available to do this:

1. 
`telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do can instead filter for a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.

diff --git a/docs/telepresence/2.7/reference/inside-container.md b/docs/telepresence/2.7/reference/inside-container.md
new file mode 100644
index 000000000..637e0cdfd
--- /dev/null
+++ b/docs/telepresence/2.7/reference/inside-container.md
@@ -0,0 +1,37 @@
# Running Telepresence inside a container

It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.

## Building the container

Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies.
Add the following to a `Dockerfile`:

```Dockerfile
# Dockerfile with telepresence and its prerequisites
FROM alpine:3.13

# Install Telepresence prerequisites
RUN apk add --no-cache curl iproute2 sshfs

# Download and install the telepresence binary
RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
```
In order to build the container, do this in the same directory as the `Dockerfile`:
```
$ docker build -t tp-in-docker .
```

## Running the container

Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.

The command to run the container can look like this:
```bash
$ docker run \
    --cap-add=NET_ADMIN \
    --device /dev/net/tun:/dev/net/tun \
    --network=host \
    -v ~/.kube/config:/root/.kube/config \
    -it --rm tp-in-docker
```

diff --git a/docs/telepresence/2.7/reference/intercepts/index.md b/docs/telepresence/2.7/reference/intercepts/index.md
new file mode 100644
index 000000000..469a36178
--- /dev/null
+++ b/docs/telepresence/2.7/reference/intercepts/index.md
@@ -0,0 +1,403 @@
import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, Telepresence installs a *traffic-agent*
sidecar into the workload. That traffic-agent supports one or more
intercept *mechanisms* that it uses to decide which traffic to
intercept. Telepresence has a simple default traffic-agent; however,
you can configure a different traffic-agent with more sophisticated
mechanisms either by setting the [`images.agentImage` field in
`config.yml`](../config/#images) or by writing an
[`extensions/${extension}.yml`][extensions] file that tells
Telepresence about a traffic-agent that it can use, what mechanisms
that traffic-agent supports, and command-line flags to expose to the
user to configure that mechanism. You may tell Telepresence which
known mechanism to use with the `--mechanism=${mechanism}` flag or by
setting one of the `--${mechanism}-XXX` flags, which implicitly set
the mechanism; for example, setting `--http-header=auto` implicitly
sets `--mechanism=http`.

The default open-source traffic-agent only supports the `tcp`
mechanism, which treats the raw layer 4 TCP streams as opaque and
sends all of that traffic down to the developer's workstation. This
means that it is a "global" intercept, affecting all users of the
cluster.

In addition to the default open-source traffic-agent, Telepresence
already knows about the Ambassador Cloud
[traffic-agent][ambassador-agent], which supports the `http`
mechanism. The `http` mechanism operates at a higher layer, working
with layer 7 HTTP, and may intercept specific HTTP requests, allowing
other HTTP requests through to the regular service. This allows for
"personal" intercepts which only intercept traffic tagged as belonging
to a given developer.
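
As a sketch of the difference between the two mechanisms: assuming a workload named `hello` (a hypothetical name) and a logged-in user, the first command below would create a "personal" intercept limited to requests carrying a matching header, while the second would force the default `tcp` mechanism and capture all TCP traffic to the workload. Check `telepresence intercept --help` for the exact header-matching syntax:

```
$ telepresence intercept hello --port 8000 --http-header=x-dev-user=alice
$ telepresence intercept hello --port 8000 --mechanism=tcp
```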
+ +[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions +[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50 + +## Intercept behavior when logged in to Ambassador Cloud + +Logging in to Ambassador Cloud (with [`telepresence +login`](../client/login/)) changes the Telepresence defaults in two +ways. + +First, being logged in to Ambassador Cloud causes Telepresence to +default to `--mechanism=http --http-header=auto --http-path-prefix=/` +(`--mechanism=http` is redundant here; it is implied by the other `--http-xxx` flags). +If you hadn't been logged in, it would have defaulted to +`--mechanism=tcp`. This tells Telepresence to use the Ambassador +Cloud traffic-agent to do smart "personal" intercepts and only +intercept a subset of HTTP requests, rather than just intercepting the +entirety of all TCP connections. This is important for working in a +shared cluster with teammates, and is important for the preview URL +functionality below. See `telepresence intercept --help` for +information on using the `--http-header` and `--http-path-xxx` flags to +customize which requests are intercepted. + +Secondly, being logged in causes Telepresence to default to +`--preview-url=true`. If you hadn't been logged in, it would have +defaulted to `--preview-url=false`. This tells Telepresence to take +advantage of Ambassador Cloud to create a preview URL for this +intercept, creating a shareable URL that automatically sets the +appropriate headers to have requests coming from the preview URL be +intercepted. In order to create the preview URL, it will prompt you +for four settings about how your cluster's ingress is configured. For +each, Telepresence tries to intelligently detect the correct value for +your cluster; if it detects it correctly, you may simply press "enter" and +accept the default, otherwise you must tell Telepresence the correct +value. + +When creating an intercept with the `http` mechanism, the +traffic-agent sends a `GET /telepresence-http2-check` request to your +service and to the process running on your local machine at the port +specified in your intercept, in order to determine if they support +HTTP/2. This is required for the intercepts to behave correctly. If +you do not have a service running locally when the intercept is +created, the traffic-agent will use the result it got from checking +the in-cluster service. + +## Supported workloads + +Kubernetes has various +[workloads](https://kubernetes.io/docs/concepts/workloads/). +Currently, Telepresence supports intercepting (installing a +traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`. + + + +While many of our examples use Deployments, they would also work on +ReplicaSets and StatefulSets. + + + +## Specifying a namespace for an intercept + +The namespace of the intercepted workload is specified using the +`--namespace` option. When this option is used, and `--workload` is +not used, then the given name is interpreted as the name of the +workload and the name of the intercept will be constructed from that +name and the namespace. + +```shell +telepresence intercept hello --namespace myns --port 9000 +``` + +This will intercept a workload named `hello` and name the intercept +`hello-myns`. In order to remove the intercept, you will need to run +`telepresence leave hello-myns` instead of just `telepresence leave +hello`.
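A hypothetical session illustrating this naming, reusing the `hello` workload from the example above (the output is abbreviated):

```console
$ telepresence intercept hello --namespace myns --port 9000
Using Deployment hello
intercepted
   Intercept name: hello-myns
   State         : ACTIVE
   ...
$ telepresence leave hello-myns
```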
+ +The name of the intercept will be left unchanged if the workload is specified. + +```shell +telepresence intercept myhello --namespace myns --workload hello --port 9000 +``` + +This will intercept a workload named `hello` and name the intercept `myhello`. + +## Importing environment variables + +Telepresence can import the environment variables from the pod that is +being intercepted, see [this doc](../environment/) for more details. + +## Creating an intercept without a preview URL + +If you *are not* logged in to Ambassador Cloud, the following command +will intercept all traffic bound to the service and proxy it to your +laptop. This includes traffic coming through your ingress controller, +so use this option carefully so as not to disrupt production +environments. + +```shell +telepresence intercept <deployment-name> --port=<TCP-port> +``` + +If you *are* logged in to Ambassador Cloud, setting the +`--preview-url` flag to `false` is necessary. + +```shell +telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false +``` + +This will output an HTTP header that you can set on your request for +that traffic to be intercepted: + +```console +$ telepresence intercept <deployment-name> --port=<TCP-port> --preview-url=false +Using Deployment <deployment-name> +intercepted + Intercept name: <full-name-of-intercept> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<local-TCP-port> + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<full-name-of-intercept>") +``` + +Run `telepresence status` to see the list of active intercepts. + +```console +$ telepresence status +Root Daemon: Running + Version : v2.1.4 (api 3) + Primary DNS : "" + Fallback DNS: "" +User Daemon: Running + Version : v2.1.4 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https://<cluster-public-IP> + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 1 total + dataprocessingnodeservice: <laptop-username>@<laptop-name> +``` + +Finally, run `telepresence leave <name-of-intercept>` to stop the intercept. + +## Skipping the ingress dialogue + +You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown. + +| Flag | Description | Required | +|------------------|------------------------------------------------------------------|------------| +| `--ingress-host` | The IP address for the ingress | yes | +| `--ingress-port` | The port for the ingress | yes | +| `--ingress-tls` | Whether TLS should be used | no | +| `--ingress-l5` | Whether a different IP address should be used in request headers | no | + +## Creating an intercept when a service has multiple ports + +If you are trying to intercept a service that has multiple ports, you +need to tell Telepresence which service port you are trying to +intercept. To specify, you can either use the name of the service +port or the port number itself. To see which options might be +available to you and your service, use kubectl to describe your +service or look in the object's YAML, as in the sketch below. For more information on multiple +ports, see the [Kubernetes documentation][kube-multi-port-services].
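For instance, a hypothetical two-port service (the name `web-app` and its ports are illustrative) might describe like this, giving you both a port name and a port number to choose from:

```console
$ kubectl describe service web-app
Name:              web-app
Namespace:         default
Type:              ClusterIP
Port:              http  80/TCP
TargetPort:        8080/TCP
Port:              metrics  9090/TCP
TargetPort:        9090/TCP
```

Either the port name (`http`) or the port number (`80`) can then be used as the service port identifier in the intercept command that follows.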
+ +[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services + +```console +$ telepresence intercept <deployment-name> --port=<local-TCP-port>:<service-port-identifier> +Using Deployment <deployment-name> +intercepted + Intercept name : <full-name-of-intercept> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<local-TCP-port> + Service Port Identifier: <service-port-identifier> + Intercepting : all TCP connections +``` + +When intercepting a service that has multiple ports, the name of the +service port that has been intercepted is also listed. + +If you want to change which port has been intercepted, you can create +a new intercept the same way you did above and it will change which +service port is being intercepted. + +## Creating an intercept when multiple services match your workload + +Oftentimes, there's a 1-to-1 relationship between a service and a +workload, so Telepresence is able to auto-detect which service it +should intercept based on the workload you are trying to intercept. +But if you use something like +[Argo](https://www.getambassador.io/docs/argo/latest/), there may be +two services (that use the same labels) to manage traffic between a +canary and a stable service. + +Fortunately, if you know which service you want to use when +intercepting a workload, you can use the `--service` flag. So in the +aforementioned example, if you wanted to use the `echo-stable` service +when intercepting your workload, your command would look like this: + +```console +$ telepresence intercept echo-rollout-<generated-hash> --port <local-TCP-port> --service echo-stable +Using ReplicaSet echo-rollout-<generated-hash> +intercepted + Intercept name : echo-rollout-<generated-hash> + State : ACTIVE + Workload kind : ReplicaSet + Destination : 127.0.0.1:3000 + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036 + Intercepting : all TCP connections +``` + +## Intercepting multiple ports + +It is possible to intercept more than one service and/or service port that are using the same workload. You do this +by creating more than one intercept that identifies the same workload using the `--workload` flag. + +Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both +targeting the same `multi-echo` deployment. + +```console +$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp +Using Deployment multi-echo +intercepted + Intercept name : multi-echo-http + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Volume Mount Point : /tmp/telfs-893700837 + Intercepting : all TCP requests + Preview URL : https://sleepy-bassi-1140.preview.edgestack.me + Layer 5 Hostname : multi-echo.default.svc.cluster.local +$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp +Using Deployment multi-echo +intercepted + Intercept name : multi-echo-grpc + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8443 + Service Port Identifier: grpc + Volume Mount Point : /tmp/telfs-1277723591 + Intercepting : all TCP requests + Preview URL : https://upbeat-thompson-6613.preview.edgestack.me + Layer 5 Hostname : multi-echo.default.svc.cluster.local +``` + +## Port-forwarding an intercepted container's sidecars + +Sidecars are containers that sit in the same pod as an application +container; they usually provide auxiliary functionality to an +application, and can usually be reached at +`localhost:${SIDECAR_PORT}`.
For example, a common use case for a +sidecar is to proxy requests to a database: your application would +connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then +connect to the database, perhaps augmenting the connection with TLS or +authentication. + +When intercepting a container that uses sidecars, you might want those +sidecars' ports to be available to your local application at +`localhost:${SIDECAR_PORT}`, exactly as they would be if running +in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this +behavior, adding port-forwards for the port given. + +```console +$ telepresence intercept <deployment-name> --port=<local-TCP-port>:<service-port-identifier> --to-pod=<sidecar-port> +Using Deployment <deployment-name> +intercepted + Intercept name : <full-name-of-intercept> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<local-TCP-port> + Service Port Identifier: <service-port-identifier> + Intercepting : all TCP connections +``` + +If there are multiple ports that you need forwarded, simply repeat the +flag (`--to-pod=<port1> --to-pod=<port2>`). + +## Intercepting headless services + +Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services), +which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods. +Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP. +So, for example, if you have the following service: + +```yaml +--- +apiVersion: v1 +kind: Service +metadata: + name: my-headless +spec: + type: ClusterIP + clusterIP: None + selector: + service: my-headless + ports: + - port: 8080 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: my-headless + labels: + service: my-headless +spec: + replicas: 1 + serviceName: my-headless + selector: + matchLabels: + service: my-headless + template: + metadata: + labels: + service: my-headless + spec: + containers: + - name: my-headless + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +``` + +You can intercept it like any other: + +```console +$ telepresence intercept my-headless --port 8080 +Using StatefulSet my-headless +intercepted + Intercept name : my-headless + State : ACTIVE + Workload kind : StatefulSet + Destination : 127.0.0.1:8080 + Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712 + Intercepting : all TCP connections +``` + + +This utilizes an initContainer that requires `NET_ADMIN` capabilities. +If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. + + + +This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters. +To enable running as GID 7777 in a specific OpenShift namespace, run: +`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE` + + + +Intercepting headless services without a selector is not supported. + + +## Sharing intercepts with teammates + +Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You +can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts/history), +picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name; the intercept +command will then be easily accessible to all your teammates. Note that this requires the free enhanced +client to be installed and to be logged in (`telepresence login`).
To instantiate an intercept based on a saved intercept, simply run +`telepresence intercept --use-saved-intercept <saved-intercept-name>`. When logged in, the command will first check for a +saved intercept in Ambassador Cloud and will use it if found, otherwise an error will be returned. + +Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/). diff --git a/docs/telepresence/2.7/reference/intercepts/manual-agent.md b/docs/telepresence/2.7/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..8c24d6dbe --- /dev/null +++ b/docs/telepresence/2.7/reference/intercepts/manual-agent.md @@ -0,0 +1,267 @@ +import Alert from '@material-ui/lab/Alert'; + +# Manually injecting the Traffic Agent + +You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted. + +When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) +will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such +as very strict company security policies preventing it. + + +Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable; +try the Mutating Webhook before proceeding. + + +## Procedure + +You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page +uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 + targetPort: 8080 +``` + +### 1. Generating the YAML + +First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file have
It's important that the generated file have +the same name as the service, and no extension: + +```console +$ telepresence genyaml config --workload my-service -o /tmp/my-service +$ cat /tmp/my-service-config.yaml +agentImage: docker.io/datawire/tel2:2.7.0 +agentName: my-service +containers: +- Mounts: null + envPrefix: A_ + intercepts: + - agentPort: 9900 + containerPort: 8080 + protocol: TCP + serviceName: my-service + servicePort: 80 + serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd + mountPoint: /tel_app_mounts/echo-container + name: echo-container +logLevel: info +managerHost: traffic-manager.ambassador +managerPort: 8081 +manual: true +namespace: default +workloadKind: Deployment +workloadName: my-service +``` + +Next, generate the YAML for the traffic-agent container: + +```console +$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml +$ cat /tmp/my-service-agent.yaml +args: +- agent +env: +- name: _TEL_AGENT_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: traffic-agent +ports: +- containerPort: 9900 + protocol: TCP +readinessProbe: + exec: + command: + - /bin/stat + - /tmp/agent/ready +resources: {} +volumeMounts: +- mountPath: /tel_pod_info + name: traffic-annotations +- mountPath: /etc/traffic-agent + name: traffic-config +- mountPath: /tel_app_exports + name: export-volume + name: traffic-annotations +``` + +Next, generate the init-container + +```console +$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml +$ cat /tmp/my-service-init.yaml +args: +- agent-init +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: tel-agent-init +resources: {} +securityContext: + capabilities: + add: + - NET_ADMIN +volumeMounts: +- mountPath: /etc/traffic-agent + name: traffic-config +``` + +Next, generate the YAML for the volumes: + +```console +$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml +$ cat /tmp/my-service-volume.yaml +- downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: traffic-annotations +- configMap: + items: + - key: my-service + path: config.yaml + name: telepresence-agents + name: traffic-config +- emptyDir: {} + name: export-volume + +``` + + +Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags. + + +### 2. Creating (or updating) the configmap + +The generated configmap entry must be insterted into the `telepresence-agents` `ConfigMap` in the same namespace as the +modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command: + +```console +$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service +``` + +If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`. + +### 3. Injecting the YAML into the Deployment + +You need to add the `Deployment` YAML you genereated to include the container and the volume. These are placed as elements +of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively. +You also need to modify `spec.template.metadata.annotations` and add the annotation +`telepresence.getambassador.io/manually-injected: "true"`. 
These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence/2.7/reference/linkerd.md b/docs/telepresence/2.7/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.7/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. + +```yaml +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: quote +spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + annotations: + linkerd.io/inject: "enabled" + config.linkerd.io/skip-outbound-ports: "8081,8022,6001" + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:0.4.1 + ports: + - name: http + containerPort: 8000 + env: + - name: PORT + value: "8000" + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +## Connect to Telepresence +Run `telepresence connect` to connect to the cluster. 
Then `telepresence list` should show the `quote` deployment as `ready to intercept`: + +``` +$ telepresence list + + quote: ready to intercept (traffic-agent not yet installed) +``` + +## Run the intercept +Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service. diff --git a/docs/telepresence/2.7/reference/rbac.md b/docs/telepresence/2.7/reference/rbac.md new file mode 100644 index 000000000..d78133441 --- /dev/null +++ b/docs/telepresence/2.7/reference/rbac.md @@ -0,0 +1,236 @@ +import Alert from '@material-ui/lab/Alert'; + +# Telepresence RBAC +The intention of this document is to provide a template for securing and limiting the permissions of Telepresence. +This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster. + +There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources. + +In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page. + +## Requirements + +- Kubernetes version 1.16+ +- Cluster admin privileges to apply RBAC + +## Editing your kubeconfig + +This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, create a `config.yaml` in your current directory that looks like the following: + +```yaml +apiVersion: v1 +kind: Config +clusters: +- name: my-cluster # Must match the cluster value in the contexts config + cluster: + ## The cluster field is highly cloud dependent. +contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<serviceaccount-name>-token-<unique-id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`.
You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: [""] + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["get", "list", "watch", "delete"] + resourceNames: ["telepresence-agents"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: [""] + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: ["admissionregistration.k8s.io"] + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to install the traffic-manager: using `telepresence connect` or installing the [helm chart](../../install/helm/). + +When you use `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
+ +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator. + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: ambassador # Traffic-Manager is deployed to the ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +# For gather-logs command +- apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +- apiGroups: [""] + resources: ["pods"] + verbs: ["list"] +# Needed in order to maintain a list of workloads +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["namespaces", "services"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +### Traffic Manager connect permission +In addition to the cluster-wide permissions, the client will also need the following namespace scoped permissions +in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager. +```yaml +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: traffic-manager-connect +rules: + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: traffic-manager-connect +subjects: + - name: telepresence-test-developer + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: traffic-manager-connect + kind: Role +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator. + +For each accessible namespace: +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate user name + namespace: tp-namespace # Update value for appropriate namespace +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +rules: +- apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "watch"] +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: telepresence-role-binding + namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount +subjects: +- kind: ServiceAccount + name: tp-user # Should be the same as metadata.name of above ServiceAccount +roleRef: + kind: Role + name: telepresence-role + apiGroup: rbac.authorization.k8s.io +``` + +The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
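Before handing the kubeconfig to a user, it can be worth verifying that the bindings grant exactly what's intended. A quick sanity check, using the example `tp-user` ServiceAccount and `tp-namespace` namespace from above (`kubectl auth can-i` supports impersonating a ServiceAccount):

```console
$ kubectl auth can-i list deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
yes
$ kubectl auth can-i delete deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
no
```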
diff --git a/docs/telepresence/2.7/reference/restapi.md b/docs/telepresence/2.7/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.7/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server + +[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint. + +## Enabling the server +The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install. + +## Querying the server +On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process. + +## Endpoints + +The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header, where `<intercept-id>` is the value of that environment variable. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code. + +There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation: +1. An intercept must be active +2. The "/healthz" endpoint must respond with OK +3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`. + +### healthz +The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation. + +#### test endpoint using curl +A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980. +```console +$ curl -v localhost:9980/healthz +* Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0) +> GET /healthz HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Date: Fri, 26 Nov 2021 07:06:18 GMT +< Content-Length: 0 +< +* Connection #0 to host localhost left intact +``` + +### consume-here +`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed. + +#### test endpoint using curl +Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that the "/consume-here" endpoint returns "true" for the path "/api" and the given headers. +```console +$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980... +* Connected to localhost (::1) port 9980 (#0) +> GET /consume-here?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.76.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Fri, 26 Nov 2021 06:43:28 GMT +< Content-Length: 4 +< +* Connection #0 to host localhost left intact +true +``` + +If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod. + +### intercept-info +`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. This field is always omitted when `intercepted` is `false`. + +#### test endpoint using curl +Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that the "/intercept-info" endpoint returns information for the given path and headers.
+```console +$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y' +* Trying ::1:9980...* Connected to localhost (127.0.0.1) port 9980 (#0) +> GET /intercept-info?path=/api HTTP/1.1 +> Host: localhost:9980 +> User-Agent: curl/7.79.1 +> Accept: */* +> x: y +> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 200 OK +< Content-Type: application/json +< Date: Tue, 01 Feb 2022 11:39:55 GMT +< Content-Length: 68 +< +{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}} +* Connection #0 to host localhost left intact +``` diff --git a/docs/telepresence/2.7/reference/routing.md b/docs/telepresence/2.7/reference/routing.md new file mode 100644 index 000000000..671dae5d8 --- /dev/null +++ b/docs/telepresence/2.7/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing + +## Outbound + +### DNS resolution +When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `include-suffixes` option in the +[local DNS configuration](../config/#dns) + +#### Cluster side DNS lookups +The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made. + +#### macOS resolver +This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly. + +#### Linux systemd-resolved resolver +This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces. + +#### Linux overriding resolver +Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_. + +#### Windows resolver +This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter). + +#### DNS caching +The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities. + +Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities. + +### Routing + +#### Subnets +The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that. + +The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster and the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets. + +#### Connection origin +A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. Telepresence only knows that the request originated from the workstation.
It cannot know that it is intended to originate from a specific pod when multiple intercepts are active. + +A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin. + +There are multiple reasons why the connection origin matters. One is that it is important that the request originates from the correct namespace. Example: + +```bash +curl some-host +``` +results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right. + +### Recursion detection +It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion. + +#### DNS recursion +When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`. + +Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally, and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this solution to the best of its ability, but may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster. + +#### Connect recursion +A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster. + +The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail. + +## Inbound + +The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent. + +In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure. + +##### Footnotes:
1: A future version of Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces. + +2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
diff --git a/docs/telepresence/2.7/reference/tun-device.md b/docs/telepresence/2.7/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.7/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface + +The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself. + +### TUN-Device +The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3). + +## Gains when using the VIF + +### Both TCP and UDP +The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`). + +### No SSH required + +The VIF approach is somewhat similar to using `sshuttle` but without +any requirements for extra software, configuration or connections. +Using the VIF means that only one single connection needs to be +forwarded through the Kubernetes apiserver (à la `kubectl +port-forward`), using only one single port. There is no need for +`ssh` in the client nor for `sshd` in the traffic-manager. This also +means that the traffic-manager container can run as the default user. + +#### sshfs without ssh encryption +When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user. + +### No firewall rules +With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up. diff --git a/docs/telepresence/2.7/reference/volume.md b/docs/telepresence/2.7/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.7/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts + +import Alert from '@material-ui/lab/Alert'; + +Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+ +``` +telepresence intercept <service-name> --port <port> --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept <service-name> --port <port> --mount=true -- /bin/bash +Using Deployment <service-name> +intercepted + Intercept name : <full-name-of-intercept> + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:<port> + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes. + +With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend `$TELEPRESENCE_ROOT` to any paths that access the mounted volumes. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using `--mount=true` without a command, you can use one of the environment variable flags (`--env-file` or `--env-json`) to retrieve the variable. diff --git a/docs/telepresence/2.7/reference/vpn.md b/docs/telepresence/2.7/reference/vpn.md new file mode 100644 index 000000000..ceabd4c0c --- /dev/null +++ b/docs/telepresence/2.7/reference/vpn.md @@ -0,0 +1,157 @@ +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected, then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather-logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration?
It may be useful to add the contents of /etc/resolv.conf
```

#### Interpreting test results

##### Case 1: VPN masked by cluster

In an instance where the VPN is masked by the cluster, the `test-vpn` tool informs you that a pod or service subnet is masking a CIDR that the VPN
routes:

```
❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
  * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
  * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
```

This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
telepresence is connected.

The ideal resolution in this case is to move the pods to a different subnet. This is possible,
for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
to reach hosts inside the VPC's `10.0.0.0/19`.

However, it is not always possible to move the pods to a different subnet.
In these cases, you should use the [never-proxy](../config#neverproxy) configuration to prevent certain
hosts from being masked.
This might be particularly important for DNS resolution. In an AWS Client VPN setup, it is
common to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):

![Modify Client VPN Endpoint](../images/vpn-dns.png)

If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
cluster. In your kubeconfig file, add a `telepresence` extension like so:

```yaml
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        never-proxy:
        - 10.0.0.2/32
```

##### Case 2: Cluster masked by VPN

In an instance where the cluster is masked by the VPN, the `test-vpn` tool informs you that a pod or service subnet is being masked by a CIDR
that the VPN routes:

```
❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
  * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
```

Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
connected.

As with the first case, the ideal resolution is to move the pods away, but this may not always
be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR
(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity.
One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites)
section for more on split-tunneling).

Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
to use never-proxy rules to whitelist hosts in the VPN:

```
❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
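In that situation, the same kubeconfig extension shown above does the job. As an illustrative sketch (the subnets follow the examples on this page and are not prescriptive), the never-proxy list can hold several entries, covering both the VPN's DNS server and any other VPN-routed hosts you need to keep reachable:

```yaml
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        never-proxy:
        - 10.0.0.2/32   # the VPN's DNS server
        - 10.0.16.0/20  # other VPN-routed hosts that must stay reachable
```

Each entry is a CIDR, so a single host is expressed with a /32 mask while a whole subnet uses a wider mask.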
diff --git a/docs/telepresence/2.7/release-notes/no-ssh.png b/docs/telepresence/2.7/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.7/release-notes/run-tp-in-docker.png b/docs/telepresence/2.7/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.2.png b/docs/telepresence/2.7/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.7/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.7/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.7/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.7/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.7/release-notes/telepresence-2.5.0-pro-daemon.png 
b/docs/telepresence/2.7/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.7/release-notes/tunnel.jpg b/docs/telepresence/2.7/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.7/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.7/releaseNotes.yml b/docs/telepresence/2.7/releaseNotes.yml new file mode 100644 index 000000000..f3abd5884 --- /dev/null +++ b/docs/telepresence/2.7/releaseNotes.yml @@ -0,0 +1,1657 @@
+# This file should be placed in the folder for the version of the
+# product that's meant to be documented. A `/release-notes` page will
+# be automatically generated and populated at build time.
+#
+# Note that an entry needs to be added to the `doc-links.yml` file in
+# order to surface the release notes in the table of contents.
+#
+# The YAML in this file should contain:
+#
+# changelog: An (optional) URL to the CHANGELOG for the product.
+# items: An array of releases with the following attributes:
+# - version: The (optional) version number of the release, if applicable.
+# - date: The date of the release in the format YYYY-MM-DD.
+# - notes: An array of noteworthy changes included in the release, each having the following attributes:
+# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`.
+# - title: A short title of the noteworthy change.
+# - body: >-
+# Two or three sentences describing the change and why it
+# is noteworthy. This is HTML, not plain text or
+# markdown. It is handy to use YAML's ">-" feature to
+# allow line-wrapping.
+# - image: >-
+# The URL of an image that visually represents the
+# noteworthy change. This path is relative to the
+# `release-notes` directory; if this file is
+# `FOO/releaseNotes.yml`, then the image paths are
+# relative to `FOO/release-notes/`.
+# - docs: The path to the documentation page where additional information can be found.
+# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link.

+docTitle: Telepresence Release Notes
+docDescription: >-
+ Release notes for Telepresence by Ambassador Labs, a CNCF project
+ that enables developers to iterate rapidly on Kubernetes
+ microservices by arming them with infinite-scale development
+ environments, instantaneous feedback loops, and a highly
+ customizable development experience.

+changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md

+items:
+ - version: 2.7.6
+ date: "2022-09-16"
+ notes:
+ - type: feature
+ title: Helm chart resource entries for injected agents
+ body: >-
+ The resources for the traffic-agent container and the optional init container can be
+ specified in the Helm chart using the resources and initResource fields
+ of the agentInjector.agentImage.
+ - type: feature
+ title: Cluster event propagation when injection fails
+ body: >-
+ When the traffic-manager fails to inject a traffic-agent, the cause for the failure is
+ detected by reading the cluster events, and propagated to the user.
+ - type: feature
+ title: FTP-client instead of sshfs for remote mounts
+ body: >-
+ Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
+ an external sshfs or sshfs-win binary.
This feature is experimental in 2.7.x
+ and enabled by setting intercept.useFtp to true in the config.yml.
+ - type: change
+ title: Upgrade of winfsp
+ body: >-
+ Telepresence on Windows upgraded winfsp from version 1.10 to 1.11.
+ - type: bugfix
+ title: Removal of invalid warning messages
+ body: >-
+ Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
+ and /proc/self/auxv.
+ - version: 2.7.5
+ date: "2022-09-14"
+ notes:
+ - type: change
+ title: Rollback of release 2.7.4
+ body: >-
+ This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3.
+ - version: 2.7.4
+ date: "2022-09-14"
+ notes:
+ - type: change
+ body: >-
+ This release was broken on some platforms. Use 2.7.6 instead.
+ - version: 2.7.3
+ date: "2022-09-07"
+ notes:
+ - type: bugfix
+ title: PTY for CLI commands
+ body: >-
+ CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
+ docker run -it to allocate a TTY and will also give other commands like bash read the
+ same behavior as when executed directly in a terminal.
+ docs: https://github.com/telepresenceio/telepresence/issues/2724
+ - type: bugfix
+ title: Traffic Manager useless warning silenced
+ body: >-
+ The traffic-manager will no longer log numerous warnings saying Issuing a
+ systema request without ApiKey or InstallID may result in an error.
+ - type: bugfix
+ title: Traffic Manager useless error silenced
+ body: >-
+ The traffic-manager will no longer log an error saying Unable to derive subnets
+ from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
+ subnets from the pod IPs.
+ - version: 2.7.2
+ date: "2022-08-25"
+ notes:
+ - type: feature
+ title: Autocompletion scripts
+ body: >-
+ Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
+ - type: feature
+ title: Connectivity check timeout
+ body: >-
+ The timeout for the initial connectivity check that Telepresence performs
+ in order to determine if the cluster's subnets are proxied or not can now be configured
+ in the config.yml file using timeouts.connectivityCheck. The default timeout was
+ changed from 5 seconds to 500 milliseconds to speed up the actual connect.
+ docs: reference/config#timeouts
+ - type: change
+ title: gather-traces feedback
+ body: >-
+ The command telepresence gather-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: upload-traces feedback
+ body: >-
+ The command telepresence upload-traces now prints out a message on success.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: gather-traces tracing
+ body: >-
+ The command telepresence gather-traces now traces itself and reports errors with trace gathering.
+ docs: troubleshooting#distributed-tracing
+ - type: change
+ title: CLI log level
+ body: >-
+ The cli.log is now logged at the same level as the connector.log.
+ docs: reference/config#log-levels
+ - type: bugfix
+ title: Telepresence --help fixed
+ body: >-
+ telepresence --help now works once more even if there's no user daemon running.
+ docs: https://github.com/telepresenceio/telepresence/issues/2735
+ - type: bugfix
+ title: Stream cancellation when no process intercepts
+ body: >-
+ Streams created between the traffic-agent and the workstation are now properly closed
+ when no interceptor process has been started on the workstation.
This fixes a potential problem where
+ a large number of attempts to connect to a non-existing interceptor would cause stream congestion
+ and an unresponsive intercept.
+ - type: bugfix
+ title: List command excludes the traffic-manager
+ body: >-
+ The telepresence list command no longer includes the traffic-manager deployment.
+ - version: 2.7.1
+ date: "2022-08-10"
+ notes:
+ - type: change
+ title: Reinstate telepresence uninstall
+ body: >-
+ Reinstate telepresence uninstall with --everything deprecated.
+ - type: change
+ title: Reduce telepresence helm uninstall
+ body: >-
+ telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
+ - type: bugfix
+ title: Auto-connect for telepresence intercept
+ body: >-
+ telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
+ - version: 2.7.0
+ date: "2022-08-07"
+ notes:
+ - type: feature
+ title: Saved Intercepts
+ body: >-
+ Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME
+ docs: reference/intercepts#sharing-intercepts-with-teammates
+ - type: feature
+ title: Distributed Tracing
+ body: >-
+ The Telepresence components now collect OpenTelemetry traces.
+ Up to 10MB of trace data are available at any given time for collection from
+ components. telepresence gather-traces is a new command that will collect
+ all that data and place it into a gzip file, and telepresence upload-traces is
+ a new command that will push the gzipped data into an OTLP collector.
+ docs: troubleshooting#distributed-tracing
+ - type: feature
+ title: Helm install
+ body: >-
+ A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
+ docs: install/manager
+ - type: feature
+ title: Ignore Volume Mounts
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
+ - type: feature
+ title: telepresence pod-daemon
+ body: >-
+ The Docker image now contains a new program in addition to
+ the existing traffic-manager and traffic-agent: the pod-daemon. The
+ pod-daemon is a trimmed-down version of the user-daemon that is
+ designed to run as a sidecar in a Pod, enabling CI systems to create
+ preview deploys.
+ - type: feature
+ title: Prometheus support for traffic manager
+ body: >-
+ Added Prometheus support to the traffic manager.
+ - type: change
+ title: No install on telepresence connect
+ body: >-
+ The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
+ docs: install/manager
+ - type: change
+ title: Helm Uninstall
+ body: >-
+ The command telepresence uninstall has been moved to telepresence helm uninstall.
+
+ docs: install/manager
+ - type: bugfix
+ title: readOnlyRootFileSystem mounts work
+ body: >-
+ Added an emptyDir volume and volume mount under `/tmp` on the agent sidecar so that it works with `readOnlyRootFileSystem: true`.
+ docs: https://github.com/telepresenceio/telepresence/pull/2666
+ - version: 2.6.8
+ date: "2022-06-23"
+ notes:
+ - type: feature
+ title: Specify Your DNS
+ body: >-
+ The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
+ - type: feature
+ title: Specify a Fallback DNS
+ body: >-
+ Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
+ - type: feature
+ title: Intercept UDP Ports
+ body: >-
+ It is now possible to intercept UDP ports with Telepresence and also use `--to-pod` to forward UDP traffic from ports on localhost.
+ - type: change
+ title: Additional Helm Values
+ body: >-
+ The Helm chart will now add the `nodeSelector`, `affinity`, and `tolerations` values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
+ - type: bugfix
+ title: Agent Injection Bugfix
+ body: >-
+ Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
+ - version: 2.6.7
+ date: "2022-06-22"
+ notes:
+ - type: bugfix
+ title: Persistent Sessions
+ body: >-
+ The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
+ - type: bugfix
+ title: DNS Requests
+ body: >-
+ Telepresence will no longer forward DNS requests for "wpad" to the cluster.
+ - type: bugfix
+ title: Graceful Shutdown
+ body: >-
+ The traffic-agent will properly shut down if one of its goroutines errors.
+ - version: 2.6.6
+ date: "2022-06-09"
+ notes:
+ - type: bugfix
+ title: Env Var `TELEPRESENCE_API_PORT`
+ body: >-
+ The propagation of the `TELEPRESENCE_API_PORT` environment variable now works correctly.
+ - type: bugfix
+ title: Double Printing `--output json`
+ body: >-
+ The `--output json` global flag no longer outputs multiple objects.
+ - version: 2.6.5
+ date: "2022-06-03"
+ notes:
+ - type: feature
+ title: Helm Option -- `reinvocationPolicy`
+ body: >-
+ The `reinvocationPolicy` of the traffic-agent injector webhook can now be configured using the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Proxy Certificate
+ body: >-
+ The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: Helm Option -- Agent Injection
+ body: >-
+ A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
+ docs: install/helm
+ - type: change
+ title: Windows Tunnel Version Upgrade
+ body: >-
+ Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1.
+ - type: change
+ title: Helm Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9.
+ - type: change
+ title: Kubernetes API Version Upgrade
+ body: >-
+ Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1.
+ - type: feature
+ title: Flag `--watch` Added to `list` Command
+ body: >-
+ Added a `--watch` flag to `telepresence list` that can be used to watch interceptable workloads in a namespace.
+
+ - type: change
+ title: Deprecated `images.webhookAgentImage`
+ body: >-
+ The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
+ - type: bugfix
+ title: Default `reinvocationPolicy` Set to Never
+ body: >-
+ The `reinvocationPolicy` of the traffic-agent injector webhook now defaults to `Never` instead of `IfNeeded` so that `LimitRange`s on namespaces can inject a missing `resources` element into the injected traffic-agent container.
+ - type: bugfix
+ title: UDP
+ body: >-
+ UDP based communication with services in the cluster now works as expected.
+ - type: bugfix
+ title: Telepresence `--help`
+ body: >-
+ The command help will only show Kubernetes flags on the commands that support them.
+ - type: change
+ title: Error Count
+ body: >-
+ Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+ - version: 2.6.4
+ date: "2022-05-23"
+ notes:
+ - type: bugfix
+ title: Upgrade RBAC Permissions
+ body: >-
+ The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+ - version: 2.6.3
+ date: "2022-05-20"
+ notes:
+ - type: bugfix
+ title: Relative Mount Paths
+ body: >-
+ The `--mount` intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+ - type: bugfix
+ title: Traffic Agent Config
+ body: >-
+ The traffic-agent's configuration now updates automatically when services are added, updated, or deleted.
+ - type: bugfix
+ title: Container Injection for Numeric Ports
+ body: >-
+ Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+ - type: bugfix
+ title: Matching Services
+ body: >-
+ Workloads that have several matching services pointing to the same target port are now handled correctly.
+ - type: bugfix
+ title: Unexpected Panic
+ body: >-
+ A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+ - type: bugfix
+ title: Mount Volume Cleanup
+ body: >-
+ A container start would sometimes fail because an old directory remained in a mounted temp volume.
+ - version: 2.6.2
+ date: "2022-05-17"
+ notes:
+ - type: bugfix
+ title: Argo Injection
+ body: >-
+ Workloads controlled by higher-level workloads like Argo `Rollout` are now injected correctly.
+ - type: bugfix
+ title: Agent Port Mapping
+ body: >-
+ Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+ - type: bugfix
+ title: GRPC Max Message Size
+ body: >-
+ The `telepresence list` command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+ - version: 2.6.1
+ date: "2022-05-16"
+ notes:
+ - type: bugfix
+ title: KUBECONFIG environment variable
+ body: >-
+ Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+
+ - type: bugfix
+ title: Don't Panic
+ body: >-
+ Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+ - version: 2.6.0
+ date: "2022-05-13"
+ notes:
+ - type: feature
+ title: Intercept multiple containers in a pod, and multiple ports per container
+ body: >-
+ Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+ docs: new-in-2.6#intercept-multiple-containers-and-ports
+ - type: feature
+ title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+ body: >-
+ The client will no longer modify deployments, replicasets, or statefulsets in order to
+ inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+ the client now needs fewer permissions in the cluster.
+ docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+ - type: change
+ title: Automatic upgrade of Traffic Agents
+ body: >-
+ When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+ The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+ docs: new-in-2.6#no-more-workload-modifications
+ - type: change
+ title: No default image in the Helm chart
+ body: >-
+ The Helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+ traffic-manager will ask Ambassador Cloud for the preferred image.
+ docs: new-in-2.6#smarter-agent
+ - type: change
+ title: Upgrade to Helm version 3.8.1
+ body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+ - type: bugfix
+ title: Remote mounts will now function correctly with custom securityContext
+ body: >-
+ The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+ - type: bugfix
+ title: Improved presentation of flags in CLI help
+ body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+ - type: bugfix
+ title: Better termination of process parented by intercept
+ body: >-
+ Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+ When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+ Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+ - version: 2.5.8
+ date: "2022-04-27"
+ notes:
+ - type: bugfix
+ title: Folder creation on `telepresence login`
+ body: >-
+ Fixed a bug where the telepresence config folder would not be created if the user ran `telepresence login` before other commands.
+ - version: 2.5.7
+ date: "2022-04-25"
+ notes:
+ - type: change
+ title: RBAC requirements
+ body: >-
+ A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+ - type: bugfix
+ title: Windows DNS
+ body: >-
+ The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+ - type: bugfix
+ title: Session TTL and Reconnect
+ body: >-
+ A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+
+ - version: 2.5.6
+ date: "2022-04-18"
+ notes:
+ - type: change
+ title: Fewer Watchers
+ body: >-
+ The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last `connect`.
+ - type: bugfix
+ title: More Efficient `gather-logs`
+ body: >-
+ The `gather-logs` command will no longer send any logs through `gRPC`.
+ - version: 2.5.5
+ date: "2022-04-08"
+ notes:
+ - type: change
+ title: Traffic Manager Permissions
+ body: >-
+ The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions.
+ - type: bugfix
+ title: Linux DNS Cache
+ body: >-
+ The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+ - type: bugfix
+ title: Automatic Connect Sync
+ body: >-
+ The `telepresence list` command will produce a correct listing even when not preceded by a `telepresence connect`.
+ - type: bugfix
+ title: Disconnect Reconnect Stability
+ body: >-
+ The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+ - type: bugfix
+ title: Limit Watched Namespaces
+ body: >-
+ The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag.
+ - type: bugfix
+ title: Limit Namespaces used in `gather-logs`
+ body: >-
+ The `gather-logs` command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the `connect` command's `--mapped-namespaces` flag.
+ - version: 2.5.4
+ date: "2022-03-29"
+ notes:
+ - type: bugfix
+ title: Linux DNS Concurrency
+ body: >-
+ The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+ - type: bugfix
+ title: Non-Functional Flag
+ body: >-
+ The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+ - type: bugfix
+ title: Automatically Remove Failed Intercepts
+ body: >-
+ Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix
+ title: Agent UID
+ body: >-
+ The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+ - type: bugfix
+ title: Gather-Logs Output Filepath
+ body: >-
+ Removed a bad concatenation that corrupted the output path of `telepresence gather-logs`.
+ - type: change
+ title: Remove Unnecessary Error Advice
+ body: >-
+ Advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+ - type: bugfix
+ title: Garbage Collection
+ body: >-
+ Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+
+ - type: bugfix
+ title: Limit Gathered Logs
+ body: >-
+ The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+ - type: change
+ title: In-Cluster Checks
+ body: >-
+ The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+ - type: change
+ title: Expanded Status Command
+ body: >-
+ The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+ - type: change
+ title: List Command Shows All Intercepts
+ body: >-
+ The list command, when used with the `--intercepts` flag, will list the user's intercepts from all namespaces.
+ - version: 2.5.3
+ date: "2022-02-25"
+ notes:
+ - type: bugfix
+ title: TCP Connectivity
+ body: >-
+ Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+ - type: feature
+ title: Linux Binaries
+ body: >-
+ Client-side binaries for the arm64 architecture are now available for Linux.
+ - version: 2.5.2
+ date: "2022-02-23"
+ notes:
+ - type: bugfix
+ title: DNS server bugfix
+ body: >-
+ Fixed a bug where Telepresence would use the last server in resolv.conf.
+ - version: 2.5.1
+ date: "2022-02-19"
+ notes:
+ - type: bugfix
+ title: Fix GKE auth issue
+ body: >-
+ Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+ - version: 2.5.0
+ date: "2022-02-18"
+ notes:
+ - type: feature
+ title: Intercept specific endpoints
+ body: >-
+ The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+ addition to the --http-match flag to filter personal intercepts by the request URL path.
+ docs: concepts/intercepts#intercepting-a-specific-endpoint
+ - type: feature
+ title: Intercept metadata
+ body: >-
+ The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence REST
+ API endpoint /intercept-info.
+ docs: reference/restapi#intercept-info
+ - type: change
+ title: Client RBAC watch
+ body: >-
+ The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+ ClusterRole.
+ docs: reference/rbac
+ - type: change
+ title: Dropped backward compatibility with versions <=2.4.4
+ body: >-
+ Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+ functionality was removed.
+ - type: change
+ title: No global networking flags
+ body: >-
+ The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+ command. The subcommands that support networking flags are connect, current-cluster-id,
+ and genyaml.
+ - type: bugfix
+ title: Output of status command
+ body: >-
+ The also-proxy and never-proxy subnets are now displayed correctly when using the
+ telepresence status command.
+ - type: bugfix
+ title: SETENV sudo privilege no longer needed
+ body: >-
+ Telepresence no longer requires SETENV privileges when starting the root daemon.
+ - type: bugfix
+ title: Network device names containing dash
+ body: >-
+ Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+ - type: bugfix
+ title: Linux uses cluster.local as domain instead of search
+ body: >-
+ The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+ systemd-resolved.
Instead, it is added as a domain so that names ending with it are routed
+ to the DNS server.
+ - version: 2.4.11
+ date: "2022-02-10"
+ notes:
+ - type: change
+ title: Add additional logging to troubleshoot intermittent issues with intercepts
+ body: >-
+ We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+ with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+ date: "2022-01-13"
+ notes:
+ - type: feature
+ title: Application Protocol Strategy
+ body: >-
+ The strategy used when selecting the application protocol for personal intercepts can now be configured using
+ the intercept.appProtocolStrategy in the config.yml file.
+ docs: reference/config/#intercept
+ image: telepresence-2.4.10-intercept-config.png
+ - type: feature
+ title: Helm value for the Application Protocol Strategy
+ body: >-
+ The strategy for selecting the application protocol for personal intercepts in agents injected by the
+ mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+ docs: install/helm
+ - type: feature
+ title: New --http-plaintext option
+ body: >-
+ The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+ communicating with the workstation process.
+ docs: reference/intercepts/#tls
+ - type: feature
+ title: Configure the default intercept port
+ body: >-
+ The port used by default in the telepresence intercept command (8080) can now be changed by setting
+ the intercept.defaultPort in the config.yml file.
+ docs: reference/config/#intercept
+ - type: change
+ title: Telepresence CI now uses GitHub Actions
+ body: >-
+ Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+ now easier for contributors to run tests on PRs since maintainers can add an
+ "ok to test" label to PRs (including from forks) to run integration tests.
+ docs: https://github.com/telepresenceio/telepresence/actions
+ image: telepresence-2.4.10-actions.png
+ - type: bugfix
+ title: Check conditions before asking questions
+ body: >-
+ Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+ made that the intercept is possible.
+ docs: reference/intercepts/
+ - type: bugfix
+ title: Fix invalid log statement
+ body: >-
+ Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+ - type: bugfix
+ title: Log errors from sshfs/sftp
+ body: >-
+ Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+ is now properly logged as errors.
+ - type: bugfix
+ title: Don't use Windows path separators in workload pod template
+ body: >-
+ The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+ traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+ date: "2021-12-09"
+ notes:
+ - type: bugfix
+ title: Helm upgrade nil pointer error
+ body: >-
+ A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+ telepresenceAPI value.
+ docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+ date: "2021-12-03"
+ notes:
+ - type: feature
+ title: VPN diagnostics tool
+ body: >-
+ There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+ See the VPN docs for more information on how to use it.
+ docs: reference/vpn
+ image: telepresence-2.4.8-vpn.png
+
+ - type: feature
+ title: RESTful API service
+ body: >-
+ A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+ help determine whether messages with a given set of headers should be consumed from a message queue to which the
+ intercept headers are added.
+ docs: reference/restapi
+ image: telepresence-2.4.8-health-check.png
+
+ - type: change
+ title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+ body: >-
+ You could previously configure this value, but there was no reason to change it, so it
+ has been removed.
+
+ - type: bugfix
+ title: Tunneled network connections behave more like ordinary TCP connections.
+ body: >-
+ When using Telepresence with an external cloud provider for extensions, those tunneled
+ connections now behave more like TCP connections, especially when it comes to timeouts.
+ We've also added increased testing around these types of connections.
+ - version: 2.4.7
+ date: "2021-11-24"
+ notes:
+ - type: feature
+ title: Injector service-name annotation
+ body: >-
+ The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+ This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+ docs: reference/cluster-config#service-name-annotation
+ - type: feature
+ title: Skip the Ingress Dialogue
+ body: >-
+ You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags.
+ docs: reference/intercepts#skipping-the-ingress-dialogue
+ - type: feature
+ title: Never proxy subnets
+ body: >-
+ The kubeconfig extensions now support a never-proxy argument,
+ analogous to also-proxy, that defines a set of subnets that
+ will never be proxied via telepresence.
+ docs: reference/config#neverproxy
+ - type: change
+ title: Daemon versions check
+ body: >-
+ Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+ - type: change
+ title: No explicit DNS flushes
+ body: >-
+ Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+ docs: reference/routing#dns-caching
+ - type: bugfix
+ title: Legacy flags now work with global flags
+ body: >-
+ Legacy flags such as `--swap-deployment` can now be used together with global flags.
+ - type: bugfix
+ title: Outbound connection closing
+ body: >-
+ Outbound connections are now properly closed when the peer closes.
+ - type: bugfix
+ title: Prevent DNS recursion
+ body: >-
+ The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client).
+ docs: reference/routing#dns-recursion
+ - type: bugfix
+ title: Prevent network recursion
+ body: >-
+ The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+ cluster runs in a Docker container on the client).
+ docs: reference/routing#connect-recursion
+ - type: bugfix
+ title: Traffic Manager deadlock fix
+ body: >-
+ The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
+
+ - type: bugfix
+ title: webhookRegistry config propagation
+ body: >-
+ The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+ docs: reference/config#images
+ - type: bugfix
+ title: Login refreshes expired tokens
+ body: >-
+ When a user's token has expired, telepresence login
+ will prompt the user to log in again to get a new token. Previously,
+ the user had to telepresence quit and telepresence logout
+ to get a new token.
+ docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+ date: "2021-11-02"
+ notes:
+ - type: feature
+ title: Manually injecting Traffic Agent
+ body: >-
+ Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+ Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+ docs: reference/intercepts/manual-agent
+
+ - type: feature
+ title: Telepresence CLI released for Apple silicon
+ body: >-
+ Telepresence is now built and released for Apple silicon.
+ docs: install/?os=macos
+
+ - type: change
+ title: Telepresence help text now links to telepresence.io
+ body: >-
+ We now include a link to our documentation when you run telepresence --help. This will make it easier
+ for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+ image: telepresence-2.4.6-help-text.png
+
+ - type: bugfix
+ title: Fixed bug when API server is inside CIDR range of pods/services
+ body: >-
+ If the API server for your Kubernetes cluster had an IP that fell within the
+ subnet generated from pods/services in the cluster, Telepresence would proxy traffic
+ to the API server, which would result in a hanging or failed connection. We now ensure
+ that the API server is explicitly not proxied.
+ - version: 2.4.5
+ date: "2021-10-15"
+ notes:
+ - type: feature
+ title: Get pod yaml with gather-logs command
+ body: >-
+ Adding the flag --get-pod-yaml to your request will get the
+ pod YAML manifest for all Kubernetes components you are getting logs for
+ (traffic-manager and/or pods containing a
+ traffic-agent container). This flag is set to false
+ by default.
+ docs: reference/client
+ image: telepresence-2.4.5-pod-yaml.png
+
+ - type: feature
+ title: Anonymize pod name + namespace when using gather-logs command
+ body: >-
+ Adding the flag --anonymize to your command will
+ anonymize your pod names + namespaces in the output file. We replace the
+ sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+ relationships between the objects without exposing the real names of your
+ objects. This flag is set to false by default.
+ docs: reference/client
+ image: telepresence-2.4.5-logs-anonymize.png
+
+ - type: feature
+ title: Added context and defaults to ingress questions when creating a preview URL
+ body: >-
+ Previously, we referred to OSI model layers when asking these questions, but this
+ terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+ docs: howtos/preview-urls
+ image: telepresence-2.4.5-preview-url-questions.png
+
+ - type: feature
+ title: Support for intercepting headless services
+ body: >-
+ Intercepting headless services is now officially supported. You can request a
+ headless service on whatever port it exposes and get a response from the
+ intercept.
This leverages the same approach as intercepting numeric ports when
+ using the mutating webhook injector, and mainly requires the initContainer
+ to have NET_ADMIN capabilities.
+ docs: reference/intercepts/#intercepting-headless-services
+
+ - type: change
+ title: Use one tunnel per connection instead of multiplexing into one tunnel
+ body: >-
+ We have changed Telepresence so that it uses one tunnel per connection instead
+ of multiplexing all connections into one tunnel. This will provide substantial
+ performance improvements. Clients will still be backwards compatible with older
+ managers that only support multiplexing.
+
+ - type: bugfix
+ title: Added checks for Telepresence Kubernetes compatibility
+ body: >-
+ Telepresence currently works with Kubernetes server versions 1.17.0
+ and higher. We have added logs in the connector and traffic-manager
+ to let users know when they are using Telepresence with a cluster it doesn't support.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Traffic Agent security context is now only added when necessary
+ body: >-
+ When creating an intercept, Telepresence will now only set the traffic agent's GID
+ when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+ an issue on OpenShift clusters where the traffic agent can fail to be created due to
+ OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+ date: "2021-09-27"
+ notes:
+ - type: feature
+ title: Numeric ports in agent injector
+ body: >-
+ The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+ docs: reference/cluster-config/#note-on-numeric-ports
+
+ - type: feature
+ title: New subcommand to gather logs and export into zip file
+ body: >-
+ Telepresence has logs for various components (the
+ traffic-manager, traffic-agents, the root and
+ user daemons), which are integral for understanding and debugging
+ Telepresence behavior. We have added the telepresence
+ gather-logs command to make it simple to compile logs for
+ all Telepresence components and export them in a zip file that can
+ be shared with others and/or included in a GitHub issue. For more
+ information on usage, run telepresence gather-logs --help
+ .
+ docs: reference/client
+ image: telepresence-2.4.4-gather-logs.png
+
+ - type: feature
+ title: Pod CIDR strategy is configurable in Helm chart
+ body: >-
+ Telepresence now enables you to directly configure how to get
+ pod CIDRs when deploying Telepresence with the Helm chart.
+ The default behavior remains the same. We've also introduced
+ the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm
+
+ - type: bugfix
+ title: Compute pod CIDRs more efficiently
+ body: >-
+ When computing subnets using the pod CIDRs, the traffic-manager
+ now uses fewer CPU cycles.
+ docs: reference/routing/#subnets
+
+ - type: bugfix
+ title: Prevent busy loop in traffic-manager
+ body: >-
+ In some circumstances, the traffic-manager's CPU
+ would max out and get pinned at its limit. This required a
+ shutdown or pod restart to fix. We've added some fixes
+ to prevent the traffic-manager from getting into this state.
+
+ - type: bugfix
+ title: Added a fixed buffer size to TUN-device
+ body: >-
+ The TUN-device now has a max buffer size of 64K. This prevents the
+ buffer from growing limitlessly until it receives a PSH, which could
+ be a blocking operation when receiving lots of TCP-packets.
+
+ docs: reference/tun-device
+
+ - type: bugfix
+ title: Fix hanging user daemon
+ body: >-
+ When Telepresence encountered an issue connecting to the cluster or
+ the root daemon, it could hang indefinitely. It will now error correctly
+ when it encounters that situation.
+
+ - type: bugfix
+ title: Improved proprietary agent connectivity
+ body: >-
+ To determine whether the cluster environment is air-gapped, the
+ proprietary agent attempts to connect to the cloud during startup.
+ To deal with a possible initial failure, the agent backs off
+ and retries the connection with an increasing backoff duration.
+
+ - type: bugfix
+ title: Telepresence correctly reports intercept port conflict
+ body: >-
+ When creating a second intercept targeting the same local port,
+ it now gives the user an informative error message. Additionally,
+ it tells them which intercept is currently using that port to make
+ it easier to remedy.
+
+ - version: 2.4.3
+ date: "2021-09-15"
+ notes:
+ - type: feature
+ title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+ body: >-
+ When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+ variable in the environment.
+ docs: reference/environment/#telepresence-environment-variables
+
+ - type: bugfix
+ title: Improved daemon stability
+ body: >-
+ Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+ - type: bugfix
+ title: Complete logs for Windows
+ body: >-
+ Crash stack traces and other errors were incorrectly not written to log files. This has
+ been fixed so logs for Windows should be at parity with the ones in macOS and Linux.
+
+ - type: bugfix
+ title: Log rotation fix for Linux kernel 4.11+
+ body: >-
+ On Linux kernel 4.11 and above, the log file rotation now properly reads the
+ birth-time of the log file. Older kernels continue to use the old behavior
+ of using the change-time in place of the birth-time.
+
+ - type: bugfix
+ title: Improved error messaging
+ body: >-
+ When Telepresence encounters an error, it tells the user where they should look for
+ logs related to the error. We have refined this so that it only tells users to look
+ for errors in the daemon logs for issues that are logged there.
+
+ - type: bugfix
+ title: Stop resolving localhost
+ body: >-
+ When using the overriding DNS resolver, it will no longer apply search paths when
+ resolving localhost, since that should be resolved on the user's machine
+ instead of the cluster.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Variable cluster domain
+ body: >-
+ Previously, the cluster domain was hardcoded to cluster.local. While this
+ is true for many Kubernetes clusters, it is not for all of them. Now this value is
+ retrieved from the traffic-manager.
+
+ - type: bugfix
+ title: Improved cleanup of traffic-agents
+ body: >-
+ Telepresence now uninstalls traffic-agents installed via mutating webhook
+ when using telepresence uninstall --everything.
+
+ - type: bugfix
+ title: More large file transfer fixes
+ body: >-
+ Downloading large files during an intercept will no longer cause timeouts and hanging
+ traffic-agents.
+
+ - type: bugfix
+ title: Setting --mount to false when intercepting works as expected
+ body: >-
+ When using --mount=false while performing an intercept, the file system
+ was still mounted. This has been remedied so the intercept behavior respects the
+ flag.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Traffic-manager establishes outbound connections in parallel
+ body: >-
+ Previously, the traffic-manager established outbound connections
+ sequentially. This meant that slow (and failing) Dial calls could
+ block all outbound traffic from the workstation (for up to 30 seconds). We now
+ establish these connections in parallel so that this no longer occurs.
+ docs: reference/routing/#outbound
+
+ - type: bugfix
+ title: Status command reports correct DNS settings
+ body: >-
+ Telepresence status now correctly reports DNS settings for all operating
+ systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+ date: "2021-09-01"
+ notes:
+ - type: feature
+ title: New subcommand to temporarily change log-level
+ body: >-
+ We have added a new telepresence loglevel subcommand that enables users
+ to temporarily change the log-level for the local daemons, the traffic-manager and
+ the traffic-agents. While the logLevels settings from the config will
+ still be used by default, this can be helpful if you are currently experiencing an issue and
+ want to have higher fidelity logs, without doing a telepresence quit and
+ telepresence connect. You can use telepresence loglevel --help to get
+ more information on options for the command.
+ docs: reference/config
+
+ - type: change
+ title: All components have info as the default log-level
+ body: >-
+ We've now set the default for all components of Telepresence (traffic-agent,
+ traffic-manager, local daemons) to use info as the default log-level.
+
+ - type: bugfix
+ title: Updating RBAC in Helm chart to fix cluster-id regression
+ body: >-
+ In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+ of the default namespace. The Helm chart was not updated to give the traffic-manager
+ those permissions, which has since been fixed. This impacted users who use licensed features of
+ the Telepresence extension in an air-gapped environment.
+ docs: reference/cluster-config/#air-gapped-cluster
+
+ - type: bugfix
+ title: Timeouts for Helm actions are now respected
+ body: >-
+ The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+ indefinitely when failing to install the traffic-manager.
+ docs: reference/config#timeouts
+
+ - version: 2.4.1
+ date: "2021-08-30"
+ notes:
+ - type: feature
+ title: External cloud variables are now configurable
+ body: >-
+ We now support configuring the host and port for the cloud in your config.yml. These
+ are used when logging in to utilize features provided by an extension, and are also passed
+ along as environment variables when installing the `traffic-manager`. Additionally, we
+ now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+ is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+ environment variables are no longer used.
+ image: telepresence-2.4.1-systema-vars.png
+ docs: reference/config/#cloud
+
+ - type: feature
+ title: Helm chart can now regenerate certificate used for mutating webhook on-demand
+ body: >-
+ You can now set agentInjector.certificate.regenerate when deploying Telepresence
+ with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+ docs: install/helm
+
+ - type: change
+ title: Traffic Manager installed via Helm
+ body: >-
+ The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+ This change is transparent to the user.
+ A new configuration flag, timeouts.helm, sets the timeouts for all Helm operations performed by the Telepresence binary.
+ docs: reference/config#timeouts
+
+ - type: change
+ title: traffic-manager gets cluster ID itself instead of via environment variable
+ body: >-
+ The traffic-manager used to get the cluster ID as an environment variable when running
+ telepresence connect or via adding the value in the Helm chart. This was
+ clunky, so now the traffic-manager gets the value itself as long as it has permissions
+ to "get" and "list" namespaces (this has been updated in the Helm chart).
+ docs: install/helm
+
+ - type: bugfix
+ title: Telepresence now mounts all directories from /var/run/secrets
+ body: >-
+ In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+ We now mount *all* directories in /var/run/secrets, which, for example, includes
+ directories like eks.amazonaws.com used for IRSA tokens.
+ docs: reference/volume
+
+ - type: bugfix
+ title: Max gRPC receive size correctly propagates to all gRPC servers
+ body: >-
+ This fixes a bug where the max gRPC receive size was only propagated to some of the
+ gRPC servers, causing failures when the message size was over the default.
+ docs: reference/config/#grpc
+
+ - type: bugfix
+ title: Updated our Homebrew packaging to run manually
+ body: >-
+ We made some updates to our script that packages Telepresence for Homebrew so that it
+ can be run manually. This will enable maintainers of Telepresence to run the script manually
+ should we ever need to roll back a release and have latest point to an older version.
+ docs: install/
+
+ - type: bugfix
+ title: Telepresence uses namespace from kubeconfig context on each call
+ body: >-
+ In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+ for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+ when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+ Telepresence now does so, using the namespace designated by the context on each call.
+
+ - type: bugfix
+ title: Idle outbound TCP connections timeout increased to 7200 seconds
+ body: >-
+ Some users were noticing that their intercepts would start failing after 60 seconds.
+ This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+ now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+ - type: bugfix
+ title: Telepresence will automatically remove a socket upon ungraceful termination
+ body: >-
+ When a Telepresence process terminates ungracefully, it would inform users that "this usually means
+ that the process has terminated ungracefully" and implied that they should remove the socket. We've
+ now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+ - type: bugfix
+ title: Fixed user daemon deadlock
+ body: >-
+ Remedied a situation where the user daemon could hang when a user was logged in.
+
+ - type: bugfix
+ title: Fixed agentImage config setting
+ body: >-
+ The config setting images.agentImages is no longer required to contain the repository, and it
+ will use the value at images.repository.
+ docs: reference/config/#images
+
+ - version: 2.4.0
+ date: "2021-08-04"
+ notes:
+ - type: feature
+ title: Windows Client Developer Preview
+ body: >-
+ There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+ All the same features supported by the macOS and Linux clients are available on Windows.
+ image: telepresence-2.4.0-windows.png
+ docs: install
+
+ - type: feature
+ title: CLI raises helpful messages from Ambassador Cloud
+ body: >-
+ Telepresence can now receive messages from Ambassador Cloud and raise
+ them to the user when they perform certain commands. This enables us
+ to send you messages that may enhance your Telepresence experience when
+ using certain commands. Frequency of messages can be configured in your
+ config.yml.
+ image: telepresence-2.4.0-cloud-messages.png
+ docs: reference/config#cloud
+
+ - type: bugfix
+ title: Improved stability of systemd-resolved-based DNS
+ body: >-
+ When initializing the systemd-resolved-based DNS, the routing domain
+ is set to improve stability in non-standard configurations. This also enables the
+ overriding resolver to do a proper takeover once the DNS service ends.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Fixed an edge case when intercepting a container with multiple ports
+ body: >-
+ When specifying a port of a container to intercept, if there was a container in the
+ pod without ports, it was automatically selected. This has been fixed so we'll only
+ choose the container with "no ports" if there's no container that explicitly matches
+ the port used in your intercept.
+ docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+ - type: bugfix
+ title: $(NAME) references in agent's environments are now interpolated correctly
+ body: >-
+ If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+ would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+ - type: bugfix
+ title: Telepresence no longer prints INFO message when there is no config.yml
+ body: >-
+ Fixed a regression that printed an INFO message to the terminal when there wasn't a
+ config.yml present. The config is optional, so this message has been
+ removed.
+ docs: reference/config
+
+ - type: bugfix
+ title: Telepresence no longer panics when using --http-match
+ body: >-
+ Fixed a bug where Telepresence would panic if the value passed to --http-match
+ didn't contain an equal sign. The correct syntax is shown in the --help
+ string and looks like --http-match=HTTP2_HEADER=REGEX.
+ docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+ - type: bugfix
+ title: Improved subnet updates
+ body: >-
+ The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if
+ the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+ client and the `traffic-manager`. This has been fixed so we only send updates when the subnets
+ themselves actually change.
+ docs: reference/routing/#subnets
+
+ - version: 2.3.7
+ date: "2021-07-23"
+ notes:
+ - type: feature
+ title: Also-proxy in telepresence status
+ body: >-
+ An also-proxy entry in the Kubernetes cluster config will
+ show up in the output of the telepresence status command.
+ docs: reference/config
+
+ - type: feature
+ title: Non-interactive telepresence login
+ body: >-
+ telepresence login now has an
+ --apikey=KEY flag that allows for
+ non-interactive logins. This is useful for headless
+ environments where launching a web browser is impossible,
+ such as cloud shells, Docker containers, or CI.
+ image: telepresence-2.3.7-newkey.png
+ docs: reference/client/login/
+
+ - type: bugfix
+ title: Mutating webhook injector correctly hides named ports for probes
+ body: >-
+ The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: telepresence current-cluster-id crash fixed
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id`
+ to crash.
+ docs: reference/cluster-config
+
+ - type: bugfix
+ title: Better UX around intercepts with no local process running
+ body: >-
+ Requests would hang indefinitely when initiating an intercept before you
+ had a local process running. This has been fixed and will result in an
+ Empty reply from server until you start a local process.
+ docs: reference/intercepts
+
+ - type: bugfix
+ title: API keys no longer show as "no description"
+ body: >-
+ New API keys generated internally for communication with
+ Ambassador Cloud no longer show up as "no description" in
+ the Ambassador Cloud web UI. Existing API keys generated by
+ older versions of Telepresence will still show up this way.
+ image: telepresence-2.3.7-keydesc.png
+
+ - type: bugfix
+ title: Fix corruption of user-info.json
+ body: >-
+ Fixed a race condition where logging in and logging out
+ rapidly could cause memory corruption or corruption of the
+ user-info.json cache file used when
+ authenticating with Ambassador Cloud.
+
+ - type: bugfix
+ title: Improved DNS resolver for systemd-resolved
+ body:
+ Telepresence's systemd-resolved-based DNS resolver is now more
+ stable, and if it fails to initialize, the overriding resolver
+ will no longer cause general DNS lookup failures when Telepresence defaults to
+ using it.
+ docs: reference/routing#linux-systemd-resolved-resolver
+
+ - type: bugfix
+ title: Faster telepresence list command
+ body:
+ The performance of telepresence list has been increased
+ significantly by reducing the number of calls the command makes to the cluster.
+ docs: reference/client
+
+ - version: 2.3.6
+ date: "2021-07-20"
+ notes:
+ - type: bugfix
+ title: Fix preview URLs
+ body: >-
+ Fixed a regression introduced in 2.3.5 that caused preview
+ URLs to not work.
+
+ - type: bugfix
+ title: Fix subnet discovery
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the Traffic
+ Manager's RoleBinding did not correctly appoint
+ the traffic-manager Role, preventing
+ subnet discovery from working correctly.
+ docs: reference/rbac/
+
+ - type: bugfix
+ title: Fix root-user configuration loading
+ body: >-
+ Fixed a regression introduced in 2.3.5 where the root daemon
+ did not correctly read the configuration file, ignoring the
+ user's configured log levels and timeouts.
+ docs: reference/config/
+
+ - type: bugfix
+ title: Fix a user daemon crash
+ body: >-
+ Fixed an issue that could cause the user daemon to crash
+ during shutdown, as during shutdown it unconditionally
+ attempted to close a channel even though the channel might
+ already be closed.
+
+ - version: 2.3.5
+ date: "2021-07-15"
+ notes:
+ - type: feature
+ title: traffic-manager in multiple namespaces
+ body: >-
+ We now support installing multiple traffic managers in the same cluster.
+ This will allow operators to install deployments of Telepresence that are
+ limited to certain namespaces.
+ image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+ docs: install/helm
+ - type: feature
+ title: No more dependence on kubectl
+ body: >-
+ Telepresence no longer depends on having an external
+ kubectl binary, which might not be present for
+ OpenShift users (who have oc instead of
+ kubectl).
+ - type: feature
+ title: Agent image now configurable
+ body: >-
+ We now support configuring which agent image + registry to use in the
+ config. This enables users whose laptop is in an air-gapped environment to
+ create personal intercepts without requiring a login. It also makes it easier
+ for those who are developing on Telepresence to specify which agent image should
+ be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+ used.
+ image: ./telepresence-2.3.5-agent-config.png
+ docs: reference/config/#images
+ - type: feature
+ title: Max gRPC receive size now configurable
+ body: >-
+ The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+ image: ./telepresence-2.3.5-grpc-max-receive-size.png
+ docs: reference/config/#grpc
+ - type: feature
+ title: CLI can be used in air-gapped environments
+ body: >-
+ While Telepresence will auto-detect if your cluster is in an air-gapped environment,
+ we've added an option users can add to their config.yml to ensure the CLI acts like it
+ is in an air-gapped environment. Air-gapped environments require a manually installed
+ license.
+ docs: reference/cluster-config/#air-gapped-cluster
+ image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+ date: "2021-07-09"
+ notes:
+ - type: bugfix
+ title: Improved IP log statements
+ body: >-
+ Some log statements were printing incorrect characters when they should have been IP addresses.
+ This has been resolved to include more accurate and useful logging.
+ docs: reference/config/#log-levels
+ image: ./telepresence-2.3.4-ip-error.png
+ - type: bugfix
+ title: Improved messaging when multiple services match a workload
+ body: >-
+ If multiple services matched a workload when performing an intercept, Telepresence would crash.
+ It now gives the correct error message, instructing the user on how to specify which
+ service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png
+ docs: reference/intercepts
+ - type: bugfix
+ title: Traffic-manager creates services in its own namespace to determine subnet
+ body: >-
+ Telepresence will now determine the service subnet by creating a dummy-service in its own
+ namespace, instead of the default namespace, which was causing RBAC permissions issues in
+ some clusters.
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Telepresence connect respects pre-existing clusterrole
+ body: >-
+ When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+ cluster, Telepresence will no longer try to update the clusterrole.
+ docs: reference/rbac
+ - type: bugfix
+ title: Helm Chart fixed for clientRbac.namespaced
+ body: >-
+ The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+ docs: install/helm
+ - version: 2.3.3
+ date: "2021-07-07"
+ notes:
+ - type: feature
+ title: Traffic Manager Helm Chart
+ body: >-
+ Telepresence now supports installing the Traffic Manager via Helm.
+ This will make it easy for operators to install and configure the
+ server-side components of Telepresence separately from the CLI (which
+ in turn allows for better separation of permissions).
+ image: ./telepresence-2.3.3-helm.png
+ docs: install/helm/
+ - type: feature
+ title: Traffic-manager in custom namespace
+ body: >-
+ As the traffic-manager can now be installed in any
+ namespace via Helm, Telepresence can now be configured to look for the
+ Traffic Manager in a namespace other than ambassador.
+ This can be configured on a per-cluster basis.
+ image: ./telepresence-2.3.3-namespace-config.png
+ docs: reference/config
+ - type: feature
+ title: Intercept --to-pod
+ body: >-
+ telepresence intercept now supports a
+ --to-pod flag that can be used to port-forward sidecars'
+ ports from an intercepted pod.
+ image: ./telepresence-2.3.3-to-pod.png
+ docs: reference/intercepts
+ - type: change
+ title: Change in migration from edgectl
+ body: >-
+ Telepresence no longer automatically shuts down the old
+ api_version=1 edgectl daemon. If migrating
+ from such an old version of edgectl, you must now manually
+ shut down the edgectl daemon before running Telepresence.
+ This was already the case when migrating from the newer
+ api_version=2 edgectl.
+ - type: bugfix
+ title: Fixed error during shutdown
+ body: >-
+ The root daemon no longer terminates when the user daemon disconnects
+ from its gRPC streams, and instead waits to be terminated by the CLI.
+ Previously, this could cause problems with things not being cleaned up correctly.
+ - type: bugfix
+ title: Intercepts will survive deletion of intercepted pod
+ body: >-
+ An intercept will survive deletion of the intercepted pod provided
+ that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+ date: "2021-06-18"
+ notes:
+ # Headliners
+ - type: feature
+ title: Service Port Annotation
+ body: >-
+ The mutator webhook for injecting traffic-agents now
+ recognizes a
+ telepresence.getambassador.io/inject-service-port
+ annotation to specify which port to intercept, bringing the
+ functionality of the --port flag to users who
+ use the mutator webhook in order to control Telepresence via
+ GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png
+ docs: reference/cluster-config#service-port-annotation
+ - type: feature
+ title: Outbound Connections
+ body: >-
+ Outbound connections are now routed through the intercepted
+ Pods, which means that the connections originate from that
+ Pod from the cluster's perspective. This allows service
+ meshes to correctly identify the traffic.
+ docs: reference/routing/#outbound
+ - type: change
+ title: Inbound Connections
+ body: >-
+ Inbound connections from an intercepted agent are now
+ tunneled to the manager over the existing gRPC connection,
+ instead of establishing a new connection to the manager for
+ each inbound connection. This avoids interference from
+ certain service mesh configurations.
+ docs: reference/routing/#inbound
+
+ # RBAC changes
+ - type: change
+ title: Traffic Manager needs new RBAC permissions
+ body: >-
+ The Traffic Manager requires RBAC
+ permissions to list Nodes and Pods, and to create a dummy
+ Service in the manager's namespace.
+ docs: reference/routing/#subnets
+ - type: change
+ title: Reduced developer RBAC requirements
+ body: >-
+ The on-laptop client no longer requires RBAC permissions to list the Nodes
+ in the cluster or to create Services, as that functionality
+ has been moved to the Traffic Manager.
+
+ # Bugfixes
+ - type: bugfix
+ title: Able to detect subnets
+ body: >-
+ Telepresence will now detect the Pod CIDR ranges even if
+ they are not listed in the Nodes.
+ image: ./telepresence-2.3.2-subnets.png
+ docs: reference/routing/#subnets
+ - type: bugfix
+ title: Dynamic IP ranges
+ body: >-
+ The list of cluster subnets that the virtual network
+ interface will route is now configured dynamically and will
+ follow changes in the cluster.
+ - type: bugfix
+ title: No duplicate subnets
+ body: >-
+ Subnets fully covered by other subnets are now pruned
+ internally and thus never superfluously added to the
+ laptop's routing table.
+ docs: reference/routing/#subnets
+ - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+ title: Change in default timeout
+ body: >-
+ The trafficManagerAPI timeout default has
+ changed from 5 seconds to 15 seconds, in order to facilitate
+ the extended time it takes for the traffic-manager to do its
+ initial discovery of cluster info as a result of the above
+ bugfixes.
+ - type: bugfix
+ title: Removal of DNS config files on macOS
+ body: >-
+ On macOS, files generated under
+ /etc/resolver/ as the result of using
+ include-suffixes in the cluster config are now
+ properly removed on quit.
+ docs: reference/routing/#macos-resolver
+
+ - type: bugfix
+ title: Large file transfers
+ body: >-
+ Telepresence no longer erroneously terminates connections
+ early when sending a large HTTP response from an intercepted
+ service.
+ - type: bugfix
+ title: Race condition in shutdown
+ body: >-
+ When shutting down the user-daemon or root-daemon on the
+ laptop, telepresence quit and related commands
+ no longer return early before everything is fully shut down.
+ It can now be counted on that, by the time the command has
+ returned, all of the side effects on the laptop have
+ been cleaned up.
+ - version: 2.3.1
+ date: "2021-06-14"
+ notes:
+ - title: DNS Resolver Configuration
+ body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included. These can be configured on a per-cluster basis."
+ image: ./telepresence-2.3.1-dns.png
+ docs: reference/config
+ type: feature
+ - title: AlsoProxy Configuration
+ body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+ image: ./telepresence-2.3.1-alsoProxy.png
+ docs: reference/config
+ type: feature
+ - title: Mutating Webhook for Injecting Traffic Agents
+ body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+ image: ./telepresence-2.3.1-inject.png
+ docs: reference/rbac
+ type: feature
+ - title: Traffic Manager Connect Timeout
+ body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+ image: ./telepresence-2.3.1-trafficmanagerconnect.png
+ docs: reference/config
+ type: change
+ - title: Fix for large file transfers
+ body: "Fix a tun-device bug where sometimes large transfers from services on the cluster would hang indefinitely."
+ image: ./telepresence-2.3.1-large-file-transfer.png
+ docs: reference/tun-device
+ type: bugfix
+ - title: Brew Formula Changed
+ body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+ image: ./telepresence-2.3.1-brew.png
+ docs: install/
+ type: change
+ - version: 2.3.0
+ date: "2021-06-01"
+ notes:
+ - title: Brew install Telepresence
+ body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+ image: ./telepresence-2.3.0-homebrew.png
+ docs: install/
+ type: feature
+ - title: TCP and UDP routing via Virtual Network Interface
+ body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+ image: ./tunnel.jpg
+ docs: reference/tun-device
+ type: feature
+ - title: SSH is no longer used
+ body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+ image: ./no-ssh.png
+ docs: reference/tun-device/#no-ssh-required
+ type: change
+ - title: Running in a Docker container
+ body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+ image: ./run-tp-in-docker.png
+ docs: reference/inside-container
+ type: feature
+ - title: Configurable Log Levels
+ body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+ image: ./telepresence-2.3.0-loglevels.png
+ docs: reference/config/#log-levels
+ type: feature
+ - version: 2.2.2
+ date: "2021-05-17"
+ notes:
+ - title: Legacy Telepresence subcommands
+ body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+ image: ./telepresence-2.2.png
+ docs: install/migrate-from-legacy/
+ type: feature
diff --git a/docs/telepresence/2.7/troubleshooting/index.md b/docs/telepresence/2.7/troubleshooting/index.md
new file mode 100644
index 000000000..9c6246a5e
--- /dev/null
+++ b/docs/telepresence/2.7/troubleshooting/index.md
@@ -0,0 +1,250 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please see the [cluster in hosted VM](../howtos/cluster-in-vm) page for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
If an organization isn't listed during login
+then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button, which sends an email
+to the owner. If an access request has been denied in the past, the
+user will not see the **Request** button; they will have to reach out
+to the owner.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button; they will have
+to reach out to the owner.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and is hence no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows you the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to System Preferences).
+6. Approve the needed permission.
+7. Reboot your computer.
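+
+After the reboot, you can verify that mounts work end to end by creating a test intercept with mounting enabled and then removing it again (a minimal sketch; `my-service` and the port are placeholders for your own workload). If the mount succeeds, the intercept summary should report a volume mount point on your machine:
+
+```console
+$ telepresence intercept my-service --port 8080 --mount=true
+$ telepresence leave my-service
+```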
+
+## Daemon service did not start
+
+An attempt to do `telepresence connect` results in the error message `daemon service did not start: timeout while waiting for daemon to start` and
+the logs show no helpful error.
+
+The likely cause of this is that the user lacks permission to run `sudo --preserve-env`. Here is a workaround for this problem. Edit the
+sudoers file with:
+
+```command
+$ sudo visudo
+```
+
+and add the following line (where `<username>` is your own user name):
+
+```
+<username> ALL=(ALL) NOPASSWD: SETENV: /usr/local/bin/telepresence
+```
+
+DO NOT fix this by making the Telepresence binary SUID root. It must only run as root when invoked with `--daemon-foreground`.
+
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server under the port given, from which Telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of Telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into Telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows.
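+
+As a concrete sketch, assuming your collector is a local Jaeger instance with its OTLP gRPC receiver enabled and listening on the standard OTLP gRPC port 4317, the upload might look like this:
+
+```console
+$ telepresence upload-traces traces.gz 127.0.0.1:4317
+```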
+For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g. Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by Telepresence, they are stored in memory. Hence, to avoid running Telepresence components out of memory, only the last 10 MB of trace data are available for export.
+
+## No sidecar injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k  1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+Error: rpc error: code = DeadlineExceeded desc = request timed out while waiting for agent echo-easy.default to arrive
+telepresence: error: rpc error: code = DeadlineExceeded desc = request timed out while waiting for agent echo-easy.default to arrive
+
+See logs for details (1 error found): "/Users/josecortes/Library/Logs/telepresence/connector.log"
+If you think you have encountered a bug, please run `telepresence gather-logs` and attach the telepresence_logs.zip to your github issue or create a new one: https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md .
+
+$ kubectl get pod
+NAME                       READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567  1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `8443` in your cluster's pods.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on resolving this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., `containerPort: http` becomes `containerPort: tm-http`).
+2. Let the container port of our injected traffic-agent use the original name (i.e., `containerPort: http`).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a message in the logs `too many files open`, then check if `fs.inotify.max_user_instances` is set too low. Check the current settings with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
diff --git a/docs/telepresence/2.7/versions.yml b/docs/telepresence/2.7/versions.yml
new file mode 100644
index 000000000..e1b0b2e56
--- /dev/null
+++ b/docs/telepresence/2.7/versions.yml
@@ -0,0 +1,5 @@
+version: "2.7.1"
+dlVersion: "latest"
+docsVersion: "2.7"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.8 b/docs/telepresence/2.8
deleted file mode 120000
index bc7aa46a1..000000000
--- a/docs/telepresence/2.8
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.8
\ No newline at end of file
diff --git a/docs/telepresence/2.8/ci/github-actions.md b/docs/telepresence/2.8/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.8/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence in your CI server with the latest version, or with a version you specify.
+ - **login**: Logs in to Telepresence so that you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key and set it as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`. One way to create both secrets with the GitHub CLI is sketched below.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
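+If you use the [GitHub CLI](https://cli.github.com/), one way to create both secrets from your terminal is the following sketch (it assumes `gh` is authenticated and run from a clone of the repository, and the API key value is a placeholder):
+
+```console
+$ gh secret set TELEPRESENCE_API_KEY --body "your-telepresence-api-key"
+$ gh secret set KUBECONFIG_FILE < kubeconfig.yaml
+```
+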
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+       #---- here run your custom command to connect to your cluster
+       #- name: Connect to cluster
+       #  shell: bash
+       #  run: ./connect-to-cluster
+       #----
+       - name: Configuring Telepresence
+         uses: datawire/telepresence-actions/configure@v1.0-rc
+         with:
+           version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches the configuration directory of Telepresence and a Telepresence configuration file, if you provide one. This configuration file should be placed in `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the .telepresence/config.yml file. This is required for Telepresence to run GitHub Actions properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+       - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+         with:
+           ref: ${{ github.event.pull_request.head.sha }}
+       #---- here run your custom command to run your service
+       #- name: Run your service to test
+       #  shell: bash
+       #  run: ./run_local_service
+       #----
+       # First you need to log in to Telepresence, with your API key
+       - name: Create kubeconfig file
+         run: |
+           cat <<EOF > /opt/kubeconfig
+           ${{ env.KUBECONFIG_FILE }}
+           EOF
+       - name: Install Telepresence
+         uses: datawire/telepresence-actions/install@v1.0-rc
+         with:
+           version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster, or omit this parameter to install the latest version
+       - name: Telepresence connect
+         uses: datawire/telepresence-actions/connect@v1.0-rc
+       - name: Login
+         uses: datawire/telepresence-actions/login@v1.0-rc
+         with:
+           telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+       - name: Intercept the service
+         uses: datawire/telepresence-actions/intercept@v1.0-rc
+         with:
+           service_name: service-name
+           service_port: 8081:8080
+           namespace: namespacename-of-your-service
+           http_header: "x-telepresence-intercept-id=service-intercepted"
+           print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+       #---- here run your custom command
+       #- name: Run integrations test
+       #  shell: bash
+       #  run: ./run_integration_test
+       #----
+   ```
+
+The preceding example is a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change that is pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab.
diff --git a/docs/telepresence/2.8/community.md b/docs/telepresence/2.8/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.8/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.8/concepts/context-prop.md b/docs/telepresence/2.8/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.8/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation refers to any service or other middleware not stripping away the headers. Propagation facilitates the movement of the injected contexts between other downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application for context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on routes the requests follow and performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept both for plain personal intercepts
+and for personal intercepts with preview URLs; these techniques are
+more commonly used for distributed tracing, so this use is a little
+unorthodox, but the mechanisms for it are already widely deployed
+because of the prevalence of tracing. The headers facilitate the smart
+routing of requests either to live services in the cluster or services
+running locally on a developer’s machine. The intercepted traffic can
+be further limited by using path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
diff --git a/docs/telepresence/2.8/concepts/devloop.md b/docs/telepresence/2.8/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.8/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador"
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, all aimed at making the outer loop as fast, efficient, and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application developer best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but each step adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing and inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is increased to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.8/concepts/devworkflow.md b/docs/telepresence/2.8/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.8/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues at full speed.
diff --git a/docs/telepresence/2.8/concepts/faster.md b/docs/telepresence/2.8/concepts/faster.md
new file mode 100644
index 000000000..03dc9bd8b
--- /dev/null
+++ b/docs/telepresence/2.8/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up so that you can easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying.
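+
+As a minimal sketch of that loop (the service name and port below are illustrative, not from a specific example app):
+
+```console
+$ telepresence connect                            # connect your laptop to the cluster
+$ telepresence intercept my-service --port 8080   # reroute my-service's traffic to localhost:8080
+# ...edit and rerun your service locally; cluster requests now hit your laptop...
+$ telepresence leave my-service                   # end the intercept when you're done
+```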
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
diff --git a/docs/telepresence/2.8/concepts/intercepts.md b/docs/telepresence/2.8/concepts/intercepts.md
new file mode 100644
index 000000000..0a2909be2
--- /dev/null
+++ b/docs/telepresence/2.8/concepts/intercepts.md
@@ -0,0 +1,208 @@
+---
+title: "Types of intercepts"
+description: "Short demonstration of personal vs global intercepts"
+---
+
+import React from 'react';
+
+import Alert from '@material-ui/lab/Alert';
+import AppBar from '@material-ui/core/AppBar';
+import Paper from '@material-ui/core/Paper';
+import Tab from '@material-ui/core/Tab';
+import TabContext from '@material-ui/lab/TabContext';
+import TabList from '@material-ui/lab/TabList';
+import TabPanel from '@material-ui/lab/TabPanel';
+import Animation from '@src/components/InterceptAnimation';
+
+export function TabsContainer({ children, ...props }) {
+  const [state, setState] = React.useState({curTab: "personal"});
+  React.useEffect(() => {
+    const query = new URLSearchParams(window.location.search);
+    var interceptType = query.get('intercept') || "personal";
+    if (state.curTab != interceptType) {
+      setState({curTab: interceptType});
+    }
+  }, [state, setState])
+  var setURL = function(newTab) {
+    history.replaceState(null,null,
+      `?intercept=${newTab}${window.location.hash}`,
+    );
+  };
+  return (
+    <div class="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar class="TabBar" elevation={0} position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab class="TabHead" value="regular" label="No intercept"/>
+            <Tab class="TabHead" value="global" label="Global intercept"/>
+            <Tab class="TabHead" value="personal" label="Personal intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+
+# No intercept
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+# Global intercept
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas.
+To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).
+
+In the illustration above, **orange**
+requests are being made by Developer 2 on their laptop and the
+**green** requests are made by a teammate,
+Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-header=auto` to have it
+    choose something automatically.
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
+intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
+flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
+and, contrary to the `--http-header` flag, it cannot be repeated.
+
+The following flags are available:
+
+| Flag                          | Meaning                                                          |
+|-------------------------------|------------------------------------------------------------------|
+| `--http-path-equal <path>`    | Only intercept the endpoint for this exact path                  |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix             |
+| `--http-path-regex <regex>`   | Only intercept endpoints that match the given regular expression |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implicit when other `--http` options are used, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
diff --git a/docs/telepresence/2.8/doc-links.yml b/docs/telepresence/2.8/doc-links.yml
new file mode 100644
index 000000000..427486bc5
--- /dev/null
+++ b/docs/telepresence/2.8/doc-links.yml
@@ -0,0 +1,102 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: 'Making the remote local: Faster feedback, collaboration and debugging'
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Share dev environments with preview URLs
+      link: howtos/preview-urls
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Host a cluster in a local VM
+      link: howtos/cluster-in-vm
+    - title: Send requests to an intercepted service
+      link: howtos/request
+- title: Telepresence for Docker
+  items:
+    - title: What is Telepresence for Docker
+      link: extension/intro
+    - title: Install into Docker-Desktop
+      link: extension/install
+    - title: Intercept into a Docker Container
+      link: extension/intercept
+- title: Telepresence for CI
+  items:
+    - title: GitHub Actions
+      link: ci/github-actions
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+      items:
+        - title: login
+          link: reference/client/login
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
diff --git a/docs/telepresence/2.8/extension/install.md b/docs/telepresence/2.8/extension/install.md
new file mode 100644
index 000000000..471752775
--- /dev/null
+++ b/docs/telepresence/2.8/extension/install.md
@@ -0,0 +1,39 @@
+---
+title: "Telepresence for Docker installation and connection guide"
+description: "Learn how to install and update Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Install and connect the Telepresence Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install Telepresence for Docker
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+3. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
+4. Click **Install**.
+
+## Connect to Ambassador Cloud through the Telepresence extension
+
+ After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window.
+
+ 3. Sign in with your Google, GitHub, or GitLab account.
+    Ambassador Cloud opens to your profile and displays the API key.
+
+ 4. Copy the API key and paste it into the API key field in the Docker Dashboard.
+
+## Connect to your cluster in Docker Desktop
+
+ 1. Select the desired cluster from the dropdown menu and click **Next**.
+    This cluster is now set as kubectl's current context.
+
+ 2. Click **Connect to [your cluster]**.
+    Your cluster is connected and you can now create [intercepts](../intercept/).
\ No newline at end of file
diff --git a/docs/telepresence/2.8/extension/intercept.md b/docs/telepresence/2.8/extension/intercept.md
new file mode 100644
index 000000000..3868407a8
--- /dev/null
+++ b/docs/telepresence/2.8/extension/intercept.md
@@ -0,0 +1,48 @@
+---
+title: "Create an intercept with Telepresence for Docker"
+description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug, "
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
+
+This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
+
+## Copy the service you want to intercept
+
+Once you have the Telepresence extension [installed and connected](../install/), you need to run a copy of the service you want to intercept. To do this, use the `docker run` command with the following flags (where `<image>` is the image of the service you want to run locally):
+
+ ```console
+ $ docker run --rm -it --network host <image>
+ ```
+
+The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster.
+
+## Intercept a service
+
+In Docker Desktop, the Telepresence extension shows all the services in the namespace.
+
+ 1. Choose a service to intercept and click the **Intercept** button.
+
+ 2. Select the service port for the intercept from the dropdown.
+
+ 3. Enter the target port of the service you previously copied in the Docker container.
+
+ 4. Click **Submit** to create the intercept.
+
+The intercept now shows up in the Docker Telepresence extension.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container.
+
+Click the globe icon next to your intercept to get the preview URL. From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.8/extension/intro.md b/docs/telepresence/2.8/extension/intro.md
new file mode 100644
index 000000000..6a653ae06
--- /dev/null
+++ b/docs/telepresence/2.8/extension/intro.md
@@ -0,0 +1,29 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and help you test your code faster. Traditionally, you need to build a container in Docker with your code changes, push it, wait for the upload to finish, deploy the changes, verify them, view them, and then repeat the whole process for the next round of changes. This is slow and cumbersome when you need to test changes continually.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. And because the Telepresence extension isolates your work inside the Docker runtime, you can make these changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux).
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.8/faqs.md b/docs/telepresence/2.8/faqs.md
new file mode 100644
index 000000000..3c37f1cc5
--- /dev/null
+++ b/docs/telepresence/2.8/faqs.md
@@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access with additional developers or stakeholders to the application via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we consider a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via their namespace qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
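+
+For example, assuming the hypothetical `web-app` service in the `emojivoto` namespace used elsewhere in these docs (substitute your own service and namespace):
+
+```console
+$ telepresence connect
+$ curl web-app.emojivoto:80/   # the namespace-qualified name now resolves from your laptop
+```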
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google Pub/Sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+On first use, Telepresence will prompt you for this information, make its best guess at the correct values, and ask you to confirm or update them.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+ In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+ Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN device) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of Telepresence, both the client application and the cluster components, are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.8/howtos/cluster-in-vm.md b/docs/telepresence/2.8/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence/2.8/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large amount of very rapid connection requests), it will likely also adversely affect other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
+
+To create a bridge network, you need to change the network settings of the guest running a cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using.
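+
+For example, with VirtualBox you can attach a powered-off guest's first network adapter to a physical host interface in bridged mode from the command line (a sketch only; the VM name `k3s-server` is illustrative, and `wlp5s0` matches the host interface used in the Vagrant example below):
+
+```shell
+VBoxManage modifyvm k3s-server --nic1 bridged --bridgeadapter1 wlp5s0
+```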
+
+### Vagrant + Virtualbox + k3s example
+Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a docker repository (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.
+
+#### Vagrantfile
+```ruby
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# bridge is the name of the host's default network device
+$bridge = 'wlp5s0'
+
+# default_route should be the IP of the host's default route.
+$default_route = '192.168.1.1'
+
+# nameserver must be the IP of an external DNS, such as 8.8.8.8
+$nameserver = '8.8.8.8'
+
+# server_name should also be added to the host's /etc/hosts file and point to the server_ip
+# for easy access when pushing docker images
+server_name = 'multi'
+
+# static IPs for the server and agents. Those IPs must be on the default router's subnet
+server_ip = '192.168.1.110'
+agents = {
+  'agent1' => '192.168.1.111',
+  'agent2' => '192.168.1.112',
+}
+
+# Extra parameters in INSTALL_K3S_EXEC variable because of
+# K3s picking up the wrong interface when starting server and agent
+# https://github.com/alexellis/k3sup/issues/306
+server_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  echo "Sleeping for 5 seconds to wait for k3s to start"
+  sleep 5
+  cp /var/lib/rancher/k3s/server/token /vagrant_shared
+  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
+  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
+  SHELL
+
+agent_script = <<-SHELL
+  sudo -i
+  apk add curl
+  export K3S_TOKEN_FILE=/vagrant_shared/token
+  export K3S_URL=https://#{server_ip}:6443
+  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
+  mkdir -p /etc/rancher/k3s
+  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
+mirrors:
+  "multi:5000":
+    endpoint:
+      - "http://#{server_ip}:5000"
+EOF
+  curl -sfL https://get.k3s.io | sh -
+  SHELL
+
+def config_vm(name, ip, script, vm)
+  # The network_script has two objectives:
+  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
+  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
+  #    the NAT network provides.
+  network_script = <<-SHELL
+    sudo -i
+    ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route}
+    cp /etc/resolv.conf /etc/resolv.conf.orig
+    sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
+    SHELL
+
+  vm.hostname = name
+  vm.network 'public_network', bridge: $bridge, ip: ip
+  vm.synced_folder './shared', '/vagrant_shared'
+  vm.provider 'virtualbox' do |vb|
+    vb.memory = '4096'
+    vb.cpus = '2'
+  end
+  vm.provision 'shell', inline: script
+  vm.provision 'shell', inline: network_script, run: 'always'
+end
+
+Vagrant.configure('2') do |config|
+  config.vm.box = 'generic/alpine314'
+
+  config.vm.define 'server', primary: true do |server|
+    config_vm(server_name, server_ip, server_script, server.vm)
+  end
+
+  agents.each do |agent_name, agent_ip|
+    config.vm.define agent_name do |agent|
+      config_vm(agent_name, agent_ip, agent_script, agent.vm)
+    end
+  end
+end
+```
+
+The Kubernetes manifest to add the registry:
+
+#### registry.yaml
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: kube-registry-v0
+  namespace: kube-system
+  labels:
+    k8s-app: kube-registry
+    version: v0
+spec:
+  replicas: 1
+  selector:
+    app: kube-registry
+    version: v0
+  template:
+    metadata:
+      labels:
+        app: kube-registry
+        version: v0
+    spec:
+      containers:
+        - name: registry
+          image: registry:2
+          resources:
+            limits:
+              cpu: 100m
+              memory: 200Mi
+          env:
+            - name: REGISTRY_HTTP_ADDR
+              value: :5000
+            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
+              value: /var/lib/registry
+          volumeMounts:
+            - name: image-store
+              mountPath: /var/lib/registry
+          ports:
+            - containerPort: 5000
+              name: registry
+              protocol: TCP
+      volumes:
+        - name: image-store
+          hostPath:
+            path: /var/lib/registry-storage
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: kube-registry
+  namespace: kube-system
+  labels:
+    app: kube-registry
+    kubernetes.io/name: "KubeRegistry"
+spec:
+  selector:
+    app: kube-registry
+  ports:
+    - name: registry
+      port: 5000
+      targetPort: 5000
+      protocol: TCP
+  type: LoadBalancer
+```
+
diff --git a/docs/telepresence/2.8/howtos/intercepts.md b/docs/telepresence/2.8/howtos/intercepts.md
new file mode 100644
index 000000000..87bd9f92b
--- /dev/null
+++ b/docs/telepresence/2.8/howtos/intercepts.md
@@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and verify the connection to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   The 401 response is expected when you first connect.
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service-name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service-name> --port <local-port>[:<service-port>] --env-file <path-to-env-file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
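+
+   For example, with Docker the intercepted service from step 4 could be started like this (a sketch; the image name `example-service:dev` is a placeholder for your own image):
+
+   ```console
+   $ docker run --rm --env-file ~/example-service-intercept.env -p 8080:8080 example-service:dev
+   ```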
+
+6. Query the environment in which you intercepted a service and verify that your local instance is invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+ **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
diff --git a/docs/telepresence/2.8/howtos/outbound.md b/docs/telepresence/2.8/howtos/outbound.md
new file mode 100644
index 000000000..48877df8c
--- /dev/null
+++ b/docs/telepresence/2.8/howtos/outbound.md
@@ -0,0 +1,89 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```
+   $ telepresence connect
+   Launching Telepresence Daemon v2.3.7 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Connecting to traffic manager...
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.3.7 (api 3)
+     Primary DNS : ""
+     Fallback DNS: ""
+   User Daemon: Running
+     Version           : v2.3.7 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : https://<cluster public IP>
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+   <meta charset="UTF-8">
+   <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```
+   $ telepresence quit
+   Telepresence Daemon quitting...done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
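+
+For example, to make only the `emojivoto` and `default` namespaces accessible (namespace names here are illustrative):
+
+```console
+$ telepresence connect --mapped-namespaces emojivoto,default
+```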
+
+### Using local-only intercepts
+
+When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed. This lets the service under development reach the services it depends on, even when those services aren't exposed through ingress controllers. When you provide outbound connectivity, the service can access other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish correct origin; the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details.
diff --git a/docs/telepresence/2.8/howtos/preview-urls.md b/docs/telepresence/2.8/howtos/preview-urls.md
new file mode 100644
index 000000000..15a1c5181
--- /dev/null
+++ b/docs/telepresence/2.8/howtos/preview-urls.md
@@ -0,0 +1,127 @@
+---
+title: "Share dev environments with preview URLs | Ambassador"
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share development environments with preview URLs
+
+Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Other requests to the ingress are routed to your cluster as usual.
+
+Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can also make the URL publicly accessible for sharing with outside collaborators.
+
+## Creating a preview URL
+
+1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed.
+   Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service.
+
+2. Enter `telepresence login` to launch Ambassador Cloud in your browser.
+
+   If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).
+
+3. Start the intercept with `telepresence intercept <service-name> --port <local-port>[:<service-port>] --env-file <path-to-env-file>` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+4. Answer the question prompts.
+   * **What's your ingress' IP address?**: the IP address or DNS name at which your ingress controller can be reached (this is usually a `service.namespace` DNS name).
+   * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports.
+   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
+   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+
+   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+   $ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' TCP port number?
+
+          [default: -]: 80
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]: y
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+          [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service")
+       Preview URL            : https://<random-domain-name>.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Go to the Preview URL generated from the intercept. +Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress. + + + Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation. + + +7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop. + + Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster as usual. + +8. Share with a teammate. + + You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and organization as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop. + You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment. + +## Sharing a preview URL with people outside your team + +To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible. + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Select the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Make Publicly Accessible**. + +Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL, either from the dashboard or by running `telepresence preview remove <preview-url>`, also removes all access to the preview URL. + +## Remove a preview URL from an Intercept + +To delete a preview URL and remove all access to the intercepted service: + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Click on the service you want to stop sharing and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Remove Preview**. 
+ +Alternatively, you can remove a preview URL with the following command: +`telepresence preview remove <preview-url>` diff --git a/docs/telepresence/2.8/howtos/request.md b/docs/telepresence/2.8/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.8/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). + 2. Navigate to the desired service's Intercepts page. + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information help you get started querying the intercepted service and managing header propagation. diff --git a/docs/telepresence/2.8/images/container-inner-dev-loop.png b/docs/telepresence/2.8/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.8/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/2.8/images/docker-header-containers.png b/docs/telepresence/2.8/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.8/images/docker-header-containers.png differ diff --git a/docs/telepresence/2.8/images/github-login.png b/docs/telepresence/2.8/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.8/images/github-login.png differ diff --git a/docs/telepresence/2.8/images/logo.png b/docs/telepresence/2.8/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.8/images/logo.png differ diff --git a/docs/telepresence/2.8/images/split-tunnel.png b/docs/telepresence/2.8/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.8/images/split-tunnel.png differ diff --git a/docs/telepresence/2.8/images/tracing.png b/docs/telepresence/2.8/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.8/images/tracing.png differ diff --git a/docs/telepresence/2.8/images/trad-inner-dev-loop.png b/docs/telepresence/2.8/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.8/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/2.8/images/tunnelblick.png b/docs/telepresence/2.8/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.8/images/tunnelblick.png differ diff --git a/docs/telepresence/2.8/images/vpn-dns.png b/docs/telepresence/2.8/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.8/images/vpn-dns.png differ diff --git a/docs/telepresence/2.8/install/cloud.md b/docs/telepresence/2.8/install/cloud.md new file mode 100644 index 000000000..9bcf9e63e --- /dev/null +++ b/docs/telepresence/2.8/install/cloud.md @@ -0,0 +1,43 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the 
Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. +For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`: + +```bash +$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock + masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28 + +$ gcloud compute firewall-rules list \ + --filter 'name~^gke-tele-webhook-gke' \ + --format 'table( + name, + network, + direction, + sourceRanges.list():label=SRC_RANGES, + allowed[].map().firewall_rule().list():label=ALLOW, + targetTags.list():label=TARGET_TAGS + )' + +NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS +gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node +# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node + +$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \ + --action ALLOW \ + --direction INGRESS \ + --source-ranges 172.16.0.0/28 \ + --rules tcp:8443 \ + --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False +``` diff --git a/docs/telepresence/2.8/install/helm.md b/docs/telepresence/2.8/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.8/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +Before you begin, you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute [`oc` commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead. + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. 
+ +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `namespace` argument to `helm install`. +For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager. + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo up +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic-Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. 
+It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash +helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev" +``` + +To fix this error, fix the overlap either by removing `staging` from the first install, or from the second. + +#### Namespace scoped user permissions + +Optionally, you can also configure user rbac to be scoped to the same namespaces as the manager itself. +You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces. + +Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!): + +```yaml +clientRbac: + create: true + + # These are the users or groups to which the user rbac will be bound. + # This MUST be set. + subjects: {} + # - kind: User + # name: jane + # apiGroup: rbac.authorization.k8s.io + + namespaced: true + + namespaces: + - dev + - staging +``` + +#### Namespace-scoped webhook + +If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name: + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: staging + labels: + app.kubernetes.io/name: staging +``` + +You can also use `kubectl label` to add the label to an existing namespace, e.g.: + +```shell +kubectl label namespace staging app.kubernetes.io/name=staging +``` + +This is required because the mutating webhook will use the name label to find namespaces to operate on. + +**NOTE** This labelling happens automatically in kubernetes >= 1.21. + +### Installing RBAC only + +Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. +To make it easier for operators to introspect / manage RBAC separately, you can use `rbac.only=true` to +only create the rbac-related objects. +Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create. diff --git a/docs/telepresence/2.8/install/index.md b/docs/telepresence/2.8/install/index.md new file mode 100644 index 000000000..624cb33d6 --- /dev/null +++ b/docs/telepresence/2.8/install/index.md @@ -0,0 +1,153 @@ +import Platform from '@src/components/Platform'; + +# Install + +Install Telepresence by running the commands below for your OS. 
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
+ + + +```shell +# Intel Macs +https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence + +# Apple silicon Macs +https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip +``` + + + + +## Installing older versions of Telepresence + +Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the versions you want. + + + + +```shell +# Intel Macs +https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence + +# Apple silicon Macs +https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence +``` + + + + +``` +https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence +``` + + + diff --git a/docs/telepresence/2.8/install/manager.md b/docs/telepresence/2.8/install/manager.md new file mode 100644 index 000000000..9a747d895 --- /dev/null +++ b/docs/telepresence/2.8/install/manager.md @@ -0,0 +1,53 @@ +# Install/Uninstall the Traffic Manager + +Telepresence uses a traffic manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the traffic manager in your cluster. + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/). +In addition, you may need certain prerequisites depending on your cloud provider and platform. +See the [cloud provider installation notes](../../install/cloud) for more. + +## Install the Traffic Manager + +The Telepresence CLI can install the traffic manager for you. A basic install deploys the same version as the client in use. + +1. Install the Telepresence Traffic Manager with the following command: + + ```shell + telepresence helm install + ``` + +### Customizing the Traffic Manager + +For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +1. Create a `values.yaml` file with your config values. + +2. Run the install command with the `--values` flag set to the path of your values file. + + ```shell + telepresence helm install --values values.yaml + ``` + + +## Upgrading/Downgrading the Traffic Manager + +1. Download the CLI of the version of Telepresence you wish to use. + +2. Run the install command with the `--upgrade` flag. + + ```shell + telepresence helm install --upgrade + ``` + + +## Uninstall + +The Telepresence CLI can uninstall the traffic manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`). + +1. 
Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command: + + ```shell + telepresence helm uninstall + ``` diff --git a/docs/telepresence/2.8/install/migrate-from-legacy.md b/docs/telepresence/2.8/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.8/install/migrate-from-legacy.md @@ -0,0 +1,110 @@ +# Migrate from legacy Telepresence + +[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services. + +In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment". + +In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time. + +Telepresence 2 introduces a [new +architecture](../../reference/architecture/) built around "intercepts" +that addresses these problems. With the new Telepresence, a sidecar +proxy ("traffic agent") is injected onto the pod. The proxy then +intercepts traffic intended for the Pod and routes it to the +workstation/laptop. The advantage of this approach is that the +service is running at all times, and no swapping is used. By using +the proxy approach, we can also do personal intercepts, where rather +than re-routing all traffic to the laptop/workstation, it only +re-routes the traffic designated as belonging to that user, so that +multiple developers can intercept the same service at the same time +without disrupting normal operation or each other. + +Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts. + +## Using legacy Telepresence commands + +First, please ensure you've [installed Telepresence](../). + +Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands. +So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used +to with the Telepresence binary. + +For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in +Telepresence) with a Python server. You could run the following command: + +``` +$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090 +< help text > + +Legacy telepresence command used +Command roughly translates to the following in Telepresence: +telepresence intercept myserver --port 9090 -- python3 -m http.server 9090 +running... +Connecting to traffic manager... +Connected to context <context> +Using Deployment myserver +intercepted + Intercept name : myserver + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:9090 + Intercepting : all TCP connections +Serving HTTP on :: port 9090 (http://[::]:9090/) ... +``` + +Telepresence will let you know what the legacy Telepresence command has mapped to and automatically +runs it. 
So you can get started with Telepresence today using the commands you are familiar with, +and it will help you learn the Telepresence syntax. + +### Legacy command mapping + +Below is the mapping of legacy Telepresence to Telepresence commands (where they exist and +are supported). + +| Legacy Telepresence Command | Telepresence Command | +|--------------------------------------------------|--------------------------------------------| +| --swap-deployment $workload | intercept $workload | +| --expose localPort[:remotePort] | intercept --port localPort[:remotePort] | +| --swap-deployment $workload --run-shell | intercept $workload -- bash | +| --swap-deployment $workload --run $cmd | intercept $workload -- $cmd | +| --swap-deployment $workload --docker-run $cmd | intercept $workload --docker-run -- $cmd | +| --run-shell | connect -- bash | +| --run $cmd | connect -- $cmd | +| --env-file,--env-json | --env-file, --env-json (haven't changed) | +| --context,--namespace | --context, --namespace (haven't changed) | +| --mount,--docker-mount | --mount, --docker-mount (haven't changed) | + +### Legacy Telepresence command limitations + +Some of the commands and flags from legacy Telepresence either didn't apply to Telepresence or +aren't yet supported in Telepresence. For some known popular commands, such as `--method`, +Telepresence will include output letting you know that the flag has gone away. For flags that +Telepresence can't translate yet, it will let you know that the flag is "unsupported". + +If Telepresence is missing any flags or functionality that is integral to your usage, please let us know +by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)! + +## Telepresence changes + +Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including +with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process +dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you +want to remove them from the cluster, the following Telepresence command will help: +``` +$ telepresence uninstall --help +Uninstall telepresence agents + +Usage: + telepresence uninstall [flags] { --agent |--all-agents } + +Flags: + -d, --agent uninstall intercept agent on specific deployments + -a, --all-agents uninstall intercept agent on all deployments + -h, --help help for uninstall + -n, --namespace string If present, the namespace scope for this CLI request +``` + +Since the new architecture deploys a Traffic Manager into the Ambassador namespace, please take a look at +our [rbac guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence. + +The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence/2.8/install/upgrade.md b/docs/telepresence/2.8/install/upgrade.md new file mode 100644 index 000000000..8272b4844 --- /dev/null +++ b/docs/telepresence/2.8/install/upgrade.md @@ -0,0 +1,81 @@ +--- +description: "How to upgrade your installation of Telepresence and install previous versions." +--- + +# Upgrade Process +The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version. 
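+ +If you are unsure which version you currently have, you can check the installed client before upgrading (a quick sanity check; the output shown below is illustrative and may differ slightly between releases): + +```shell +$ telepresence version +Client: v2.8.0 (api v3) +``` 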
+ +Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur` +if your current version is less than 2.8.0). + + + + +```shell +# Intel Macs + +# Upgrade via brew: +brew upgrade datawire/blackbird/telepresence + +# OR upgrade manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Upgrade via brew: +brew upgrade datawire/blackbird/telepresence-arm64 + +# OR upgrade manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To upgrade Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell. +``` + + + + +The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade +the Traffic Manager in your cluster. diff --git a/docs/telepresence/2.8/new-in-2.8.md b/docs/telepresence/2.8/new-in-2.8.md new file mode 100644 index 000000000..4021a8192 --- /dev/null +++ b/docs/telepresence/2.8/new-in-2.8.md @@ -0,0 +1,14 @@ +# What’s new in Telepresence 2.8.0? + +## DNS improvements +The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`, +`MX`, `NS`, `PTR`, `SRV`, and `TXT`. + +## Helm configuration additions +- A `connectionTTL` setting that controls how long the traffic manager will retain a connection without seeing any sign of life from + the client. +- DNS settings for `includeSuffixes` and `excludeSuffixes` can now be set cluster-wide in the Helm chart. Prior to this change, + this had to be set in a kubeconfig extension on all workstations. +- Router settings for `alsoProxySubnets` and `neverProxySubnets` can now be set cluster-wide in the Helm chart. Prior to this change, + this had to be set in a kubeconfig extension on all workstations. 
+- The Envoy server and admin port can now be configured in the Helm chart. \ No newline at end of file diff --git a/docs/telepresence/2.8/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.8/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds. + Accelerate your inner development loop with hot reload using your + existing IDE, and workflow. +

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/2.8/quick-start/demo-node.md b/docs/telepresence/2.8/quick-start/demo-node.md new file mode 100644 index 000000000..4b4d71308 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/demo-node.md @@ -0,0 +1,152 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+ +2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster. + +3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed! + + + Congratulations! You have successfully set up a local development environment, and tested the fix locally. + + +## 4. Testing our fix + +A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, even if your application is too big or complex to run entirely on your workstation, you can still develop a service locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster. + +1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster: + + + + + Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment! + + +## 5. Preview URLs + +Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing, before you commit the code. Preview URLs make this kind of collaboration easy. + +1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present. + +2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally. + + +Now you're able to share your fix in your local environment with your team! + + + + To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud. + + +
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.8/quick-start/demo-react.md b/docs/telepresence/2.8/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.8/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide, we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm. You can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. 
List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 <none> 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 <none> 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 <none> 80/TCP 29s + web-svc ClusterIP 10.43.182.119 <none> 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. An error also appears in the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+ +The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed. + +## 5. Run a service on your laptop + +Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service. + +1. **In a new terminal window**, change into the repo directory and build the application: + + `cd /emojivoto` + `make web-app-local` + + ``` + $ make web-app-local + + ... + webpack 5.34.0 compiled successfully in 4326 ms + ✨ Done in 5.38s. + ``` + +2. Change into the service's code directory and start the server: + + `cd emojivoto-web-app` + `yarn webpack serve` + + ``` + $ yarn webpack serve + + ... + ℹ 「wds」: Project is running at http://localhost:8080/ + ... + ℹ 「wdm」: Compiled successfully. + ``` + +4. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster. + + + Victory, your local React server is running a-ok! + + +## 6. Make a code change +We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩. + +1. In the terminal running webpack, stop the server with `Ctrl+c`. + +1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149). + +1. Run webpack to fully recompile the code then start the server again: + + `yarn webpack` + `yarn webpack serve` + +1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience. + +## 7. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead. + + + This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only will apply to that terminal session. + + +1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port: +`telepresence intercept web-app --namespace emojivoto --port 8080` + + ``` + $ telepresence intercept web-app --namespace emojivoto --port 8080 + + Using deployment web-app + intercepted + Intercept name: web-app-emojivoto + State : ACTIVE + ... + ``` + +2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user. + + + The web-app Deployment is being intercepted and rerouted to the server on your laptop! + + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.8/quick-start/go.md b/docs/telepresence/2.8/quick-start/go.md new file mode 100644 index 000000000..e1c51d651 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. +Test out the application: + +1. Go to the Emojivoto webapp and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting for 🍩 doesn't work. + +## 3. Run the Docker container + +The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug. + +1. Run the Docker container locally by running this command inside your local terminal: + + + + + + + + + + + + + + + +2. The application is failing due to a small bug in this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running: + + ``` + $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut + + Resolved method descriptor: + rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse ); + + Request metadata to send: + (empty) + + Response headers received: + (empty) + + Response trailers received: + content-type: application/grpc + Sent 0 requests and received 0 responses + ERROR: + Code: Unknown + Message: ERROR + ``` + +3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5. + + ```go + 3 import ( + 4 "context" + 5 "fmt" // DELETE THIS LINE + 6 + 7 pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto" + ``` + + Then replace line `21`: + + ```go + 20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) { + 21 return nil, fmt.Errorf("ERROR") + 22 } + ``` + with + ```go + 20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) { + 21 return pS.vote(":doughnut:") + 22 } + ``` + Then save the file (`Ctrl+s` on Windows, `Cmd+s` on Mac, or `Menu -> File -> Save`) and verify that the error is now fixed: + + ``` + $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut + + Resolved method descriptor: + rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse ); + + Request metadata to send: + (empty) + + Response headers received: + content-type: application/grpc + + Response contents: + { + } + + Response trailers received: + (empty) + Sent 0 requests and received 1 response + ``` + +## 4. Telepresence intercept + +1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service. +Run the following command inside the container: + + ``` + $ telepresence intercept voting --port 8081:8080 + + Using Deployment voting + intercepted + Intercept name : voting + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8081 + Service Port Identifier: 8080 + Volume Mount Point : /tmp/telfs-XXXXXXXXX + Intercepting : all TCP connections + ``` + Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected. + +You have created an intercept to tell Telepresence where to send traffic. 
The `voting-svc` traffic is now routed to the local, Dockerized version of the service: the intercept sends *all* traffic for `voting-svc` to the fixed copy running on your machine.
+
+
+  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs let you safely share your development environment. With this approach you can test your local service more precisely, because the preview URL gives you total control over which traffic is routed through it.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster.
+
+
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the preview URL obtained in the previous step, and you'll see that the bug is fixed, since that traffic is routed to the fixed version running locally.
+
+
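+When you're done experimenting, you can tear the session down. A minimal cleanup sketch (assuming you're still inside the tutorial container):
+
+```
+# Remove the preview-URL intercept created above.
+telepresence leave voting
+
+# Disconnect from the cluster and stop the local daemons.
+telepresence quit
+```
+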
+
+## What's Next?
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
 diff --git a/docs/telepresence/2.8/quick-start/index.md b/docs/telepresence/2.8/quick-start/index.md
new file mode 100644
index 000000000..e0d26fa9e
--- /dev/null
+++ b/docs/telepresence/2.8/quick-start/index.md
@@ -0,0 +1,7 @@
+---
+description: Telepresence Quick Start.
+---
+
+import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding'
+
+
 diff --git a/docs/telepresence/2.8/quick-start/qs-cards.js b/docs/telepresence/2.8/quick-start/qs-cards.js
new file mode 100644
index 000000000..5b68aa4ae
--- /dev/null
+++ b/docs/telepresence/2.8/quick-start/qs-cards.js
@@ -0,0 +1,71 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import { Link as GatsbyLink } from 'gatsby';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
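+    // Three quick-start cards laid out with Material-UI's Grid:
+    // `classes.root` stretches items to equal height, and `classes.paper`
+    // makes each card fill its grid cell.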
+
+
+
+
+
+          Collaborating
+
+
+
+          Use preview URLs to collaborate with your colleagues and others
+          outside of your organization.
+
+
+
+
+
+
+
+
+          Outbound Sessions
+
+
+
+          While connected to the cluster, your laptop can interact with
+          services as if it were another pod in the cluster.
+
+
+
+
+
+
+
+
+          FAQs
+
+
+
+          Learn more about use cases and the technical implementation of
+          Telepresence.
+
+
+
+
+ ); +} diff --git a/docs/telepresence/2.8/quick-start/qs-go.md b/docs/telepresence/2.8/quick-start/qs-go.md new file mode 100644 index 000000000..2e140f6a7 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server when you edit code later in this guide. Install it by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`
+
+   ```
+   $ go get github.com/pilu/fresh
+
+   $ $GOPATH/bin/fresh
+
+   ...
+   10:23:41 app | Welcome to the DataProcessingGoService!
+   ```
+
+
+  Install Go from here and set your GOPATH if needed.
+
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+  Victory, your local Go server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.8/quick-start/qs-java.md b/docs/telepresence/2.8/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.8/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+
+   ```
+   $ mvn spring-boot:run
+
+   ...
+   g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684)
+
+   ```
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+  Victory, your local Java server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file, then stop and restart your Java server.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.8/quick-start/qs-node.md b/docs/telepresence/2.8/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.8/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+  Victory, your local Node server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+  See this doc for more information on how Telepresence resolves DNS.
+
+
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.8/quick-start/qs-python-fastapi.md b/docs/telepresence/2.8/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+FastAPI requires Python 3, so use the Python 3 toolchain:
+`pip3 install fastapi uvicorn requests && python3 app.py`
+
+   ```
+   $ pip3 install fastapi uvicorn requests && python3 app.py
+
+   Collecting fastapi
+   ...
+   Application startup complete.
+
+   ```
+
+  Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+  Victory, your local service is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State         : ACTIVE
+   Workload kind : Deployment
+   Destination   : 127.0.0.1:3000
+   Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+  The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.8/quick-start/qs-python.md b/docs/telepresence/2.8/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.8/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
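+
+   If you prefer the command line, clearing the quarantine attribute has the same effect. This is a sketch that assumes Telepresence was installed to the default `/usr/local/bin` path:
+
+   ```shell
+   # Remove the Gatekeeper quarantine flag, then retry the connection.
+   sudo xattr -d com.apple.quarantine /usr/local/bin/telepresence
+   telepresence connect
+   ```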
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+   ```
+   $ pip install flask requests && python app.py
+
+   Collecting flask
+   ...
+   Welcome to the DataServiceProcessingPythonService!
+   ...
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name: dataprocessingservice
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:3000
+       Intercepting  : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file, and the Python server will auto-reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
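+
+As a quick sanity check (assuming the local Python server you started earlier is still running), you can also curl the color endpoint directly:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```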
+
+## 7. Create a Preview URL
+
+Create a personal intercept with a preview URL, meaning that only
+traffic coming from the preview URL will be intercepted, so you can
+easily share the services you’re working on with your teammates.
+
+1. Clean up your previous intercept by removing it:
+`telepresence leave dataprocessingservice`
+
+2. Log in to Ambassador Cloud, a web interface for managing and
+   sharing preview URLs:
+
+   ```console
+   $ telepresence login
+   Launching browser authentication flow...
+
+   Login successful.
+   ```
+
+   If you are in an environment where Telepresence cannot launch a
+   local browser for you to interact with, you will need to pass the
+   [`--apikey` flag to `telepresence
+   login`](../../reference/client/login/).
+
+3. Start the intercept again:
+`telepresence intercept dataprocessingservice --port 3000`
+   You will be asked for your ingress' layer 3 address; specify the front-end service: `verylargejavaservice.default`.
+   Then, when asked for the port, type `8080`; for "use TLS", type `n`; finally, confirm the layer 5 hostname.
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   To create a preview URL, telepresence needs to know how requests enter
+   your cluster. Please Select the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+        [default: dataprocessingservice.default]: verylargejavaservice.default
+
+   2/4: What's your ingress' TCP port number?
+
+        [default: 80]: 8080
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+        [default: n]:
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+        [default: verylargejavaservice.default]:
+
+   Using Deployment dataprocessingservice
+   intercepted
+       Intercept name  : dataprocessingservice
+       State           : ACTIVE
+       Workload kind   : Deployment
+       Destination    : 127.0.0.1:3000
+       Intercepting    : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice")
+       Preview URL     : https://.preview.edgestack.me
+       Layer 5 Hostname: verylargejavaservice.default
+   ```
+
+4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser; it will show the orange version of the app.
+
+5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080); it’s still green.
+
+Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service!
+
+
+   The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with.
+
+
+## What's Next?
+ + diff --git a/docs/telepresence/2.8/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.8/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.8/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.8/redirects.yml b/docs/telepresence/2.8/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.8/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.8/reference/architecture.md b/docs/telepresence/2.8/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.8/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation; they act as the main point of communication with the
+cluster's network and handle intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run telepresence login, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing open
+source User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm Chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment; if it is missing, the User Daemon will attempt to install it using its embedded Helm Chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod-name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing who has
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts entirely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should be run on and to provide configuration similar to what you would provide when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts with development images built
+by CI so that changes from a pull request can be quickly visualized in a live cluster before they land. The Preview URL
+link is posted to the associated GitHub pull request when using Deployment Previews.
+
+See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews
+or for a reference on how Pod-Daemon can be manually deployed to the cluster.
+
+
+# Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.
 diff --git a/docs/telepresence/2.8/reference/client.md b/docs/telepresence/2.8/reference/client.md
new file mode 100644
index 000000000..491dbbb8e
--- /dev/null
+++ b/docs/telepresence/2.8/reference/client.md
@@ -0,0 +1,31 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence <command>`.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs you out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it with the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service name> --port <port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept name>` |
+| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and the user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI + Traffic-Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
 diff --git a/docs/telepresence/2.8/reference/client/login.md b/docs/telepresence/2.8/reference/client/login.md
new file mode 100644
index 000000000..ab4319a54
--- /dev/null
+++ b/docs/telepresence/2.8/reference/client/login.md
@@ -0,0 +1,61 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud).
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to explicitly run
+`telepresence login`; you should only need to do so when you
+require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs
+a Telepresence binary. The Telepresence enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts from
+Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/.
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
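+
+For example, a non-interactive login in CI might look like the following sketch (the environment variable name here is hypothetical; use whatever secret mechanism your CI system provides):
+
+```console
+$ telepresence login --apikey="$AMBASSADOR_CLOUD_API_KEY"
+```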
diff --git a/docs/telepresence/2.8/reference/client/login/apikey-2.png b/docs/telepresence/2.8/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/2.8/reference/client/login/apikey-2.png differ diff --git a/docs/telepresence/2.8/reference/client/login/apikey-3.png b/docs/telepresence/2.8/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/2.8/reference/client/login/apikey-3.png differ diff --git a/docs/telepresence/2.8/reference/client/login/apikey-4.png b/docs/telepresence/2.8/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/2.8/reference/client/login/apikey-4.png differ diff --git a/docs/telepresence/2.8/reference/cluster-config.md b/docs/telepresence/2.8/reference/cluster-config.md new file mode 100644 index 000000000..13f1c5001 --- /dev/null +++ b/docs/telepresence/2.8/reference/cluster-config.md @@ -0,0 +1,438 @@ +import Alert from '@material-ui/lab/Alert'; +import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence'; + +# Cluster-side configuration + +For the most part, Telepresence doesn't require any special +configuration in the cluster and can be used right away in any +cluster (as long as the user has adequate [RBAC permissions](../rbac) +and the cluster's server version is `1.19.0` or higher). + +## Helm Chart configuration +Some cluster specific configuration can be provided when installing +or upgrading the Telepresence cluster installation using Helm. Once +installed, the Telepresence client will configure itself from values +that it receives when connecting to the Traffic manager. + +See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) +for a full list of available configuration settings. + +### Values +To add configuration, create a yaml file with the configuration values and then use it executing `telepresence helm install [--upgrade] --values ` + +### Client Configuration + +The `client` structure of the Helm chart configures how the Telepresence clients will interact with the cluster. + +#### DNS +The fields for `client.dns` are: `excludeSuffixes` and `includeSuffixes`. + +| Field | Description | Type | Default | +|-------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------|--------------------------------| +| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver) | [sequence][yaml-seq] of [strings][yaml-str] | `[.com, .io, .net, .org, .ru]` | +| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` | + +Here is an example values.yaml: +```yaml +client: + dns: + includeSuffixes: [.private] + excludeSuffixes: [.se, .com, .io, .net, .org, .ru] +``` + +#### Routing +The fields for `client.routing` are `alsoProxySubnets` and `neverProxySubnets`. + +##### AlsoProxy + +When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device. 
+All connections to addresses that the subnet spans will be dispatched to the cluster.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+```yaml
+client:
+  routing:
+    alsoProxySubnets:
+      - 1.2.3.4/32
+```
+
+##### NeverProxy
+
+When using `neverProxySubnets` you provide a list of subnets. These will never be routed via the TUN device,
+even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
+telepresence connects is the route they will keep.
+
+Here is an example values.yaml for the subnet `1.2.3.4/32`:
+
+```yaml
+client:
+  routing:
+    neverProxySubnets:
+      - 1.2.3.4/32
+```
+
+##### Using AlsoProxy together with NeverProxy
+
+Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
+Usually this means that the most specific route will win.
+
+So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:
+
+```yaml
+neverProxySubnets: [10.0.0.0/16]
+alsoProxySubnets: [10.0.5.0/24]
+```
+
+Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
+
+Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:
+
+```yaml
+alsoProxySubnets: [10.0.0.0/16]
+neverProxySubnets: [10.0.5.0/24]
+```
+
+Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value        | Resulting action |
+|--------------|------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
+| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port |
+| `http`       | Telepresence will use http |
+| `http2`      | Telepresence will use http2 |
+
+When `portName` is used, Telepresence will determine the protocol by the name of the port: `<protocol>[-suffix]`. The following protocols
+are recognized:
+
+| Protocol | Meaning                               |
+|----------|---------------------------------------|
+| `http`   | Plaintext HTTP/1.1 traffic            |
+| `http2`  | Plaintext HTTP/2 traffic              |
+| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
+| `grpc`   | Same as http2                         |
+
+The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info.
+
+#### Envoy Configuration
+
+The `agent.envoy` structure contains three values:
+
+| Setting      | Meaning                                                   |
+|--------------|-----------------------------------------------------------|
+| `logLevel`   | Log level used by the Envoy proxy. Defaults to "warning"  |
+| `serverPort` | Port used by the Envoy server. Default 18000.             |
+| `adminPort`  | Port used for Envoy administration. Default 19000.
| + +#### Image Configuration + +The `agent.image` structure contains the following values: + +| Setting | Meaning | +|------------|-----------------------------------------------------------------------------| +| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". | +| `name` | The name of the image. Retrieved from Ambassador Cloud if not set. | +| `tag` | The tag of the image. Retrieved from Ambassador Cloud if not set. | + +#### Log level + +The `agent.LogLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info. + +#### Resources + +The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers. + +## TLS + +In this example, other applications in the cluster expect to speak TLS to your +intercepted application (perhaps you're using a service-mesh that does +mTLS). + +In order to use `--mechanism=http` (or any features that imply +`--mechanism=http`) you need to tell Telepresence about the TLS +certificates in use. + +Tell Telepresence about the certificates in use by adjusting your +[workload's](../intercepts/#supported-workloads) Pod template to set a couple of +annotations on the intercepted Pods: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret" # optional ++ "getambassador.io/inject-originating-tls-secret": "your-originating-secret" # optional + spec: ++ serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets" + containers: +``` + +- The `getambassador.io/inject-terminating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS server + certificate to use for decrypting and responding to incoming + requests. + + When Telepresence modifies the Service and workload port + definitions to point at the Telepresence Agent sidecar's port + instead of your application's actual port, the sidecar will use this + certificate to terminate TLS. + +- The `getambassador.io/inject-originating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS + client certificate to use for communicating with your application. + + You will need to set this if your application expects incoming + requests to speak TLS (for example, your + code expects to handle mTLS itself instead of letting a service-mesh + sidecar handle mTLS for it, or the port definition that Telepresence + modified pointed at the service-mesh sidecar instead of at your + application). + + If you do set this, you should to set it to the + same client certificate Secret that you configure the Ambassador + Edge Stack to use for mTLS. + +It is only possible to refer to a Secret that is in the same Namespace +as the Pod. + +The Pod will need to have permission to `get` and `watch` each of +those Secrets. + +Telepresence understands `type: kubernetes.io/tls` Secrets and +`type: istio.io/key-and-cert` Secrets; as well as `type: Opaque` +Secrets that it detects to be formatted as one of those types. + +## Air-gapped cluster + + +If your cluster is on an isolated network such that it cannot +communicate with Ambassador Cloud, then some additional configuration +is required to acquire a license key in order to use personal +intercepts. + +### Create a license + +1. + +2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*. + +3. 
You will be prompted for your Cluster ID. Ensure your +kubeconfig context is using the cluster you want to create a license for then +run this command to generate the Cluster ID: + + ``` + $ telepresence current-cluster-id + + Cluster ID: + ``` + +4. Click *Generate API Key* to finish generating the license. + +5. On the licenses page, download the license file associated with your cluster. + +### Add license to cluster +There are two separate ways you can add the license to your cluster: manually creating and deploying +the license secret or having the helm chart manage the secret + +You only need to do one of the two options. + +#### Manual deploy of license secret + +1. Use this command to generate a Kubernetes Secret config using the license file: + + ``` + $ telepresence license -f + + apiVersion: v1 + data: + hostDomain: + license: + kind: Secret + metadata: + creationTimestamp: null + name: systema-license + namespace: ambassador + ``` + +2. Save the output as a YAML file and apply it to your +cluster with `kubectl`. + +3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting +the following into a file (for the example we'll assume it's called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + secret: + # This tells the helm chart not to create the secret since you've created it yourself + create: false + ``` + +4. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) +pulled and in a registry your cluster can pull from. + +6. Have users use the `images` [config key](../config/#images) keys so telepresence uses the aforementioned image for their agent. + +#### Helm chart manages the secret + +1. Get the jwt token from the downloaded license file + + ``` + $ cat ~/Downloads/ambassador.License_for_yourcluster + eyJhbGnotarealtoken.butanexample + ``` + +2. Create the following values file, substituting your real jwt token in for the one used in the example below. +(for this example we'll assume the following is placed in a file called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + # This is the value from the license file you download. this value is an example and will not work + value: eyJhbGnotarealtoken.butanexample + secret: + # This tells the helm chart to create the secret + create: true + ``` + +3. Install the helm chart into the cluster + + ``` + telepresence helm install -f license-values.yaml + ``` + +Users will now be able to use preview intercepts with the +`--preview-url=false` flag. Even with the license key, preview URLs +cannot be used without enabling direct communication with Ambassador +Cloud, as Ambassador Cloud is essential to their operation. + +If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret. + +Have clients use the [skipLogin](../config/#cloud) key to ensure the cli knows it is operating in an +air-gapped environment. + +## Mutating Webhook + +Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the +port definitions. 
This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched +and in sync as far as GitOps workflows (such as ArgoCD) are concerned. + +The injection will happen on demand the first time an attempt is made to intercept the workload. + +If you want to prevent that the injection ever happens, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled` +annotation to your workload template's annotations: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ telepresence.getambassador.io/inject-traffic-agent: disabled + spec: + containers: +``` + +### Service Name and Port Annotations + +Telepresence will automatically find all services and all ports that will connect to a workload and make them available +for an intercept, but you can explicitly define that only one service and/or port can be intercepted. + +```diff + spec: + template: + metadata: + labels: + service: your-service + annotations: ++ telepresence.getambassador.io/inject-service-name: my-service ++ telepresence.getambassador.io/inject-service-port: https + spec: + containers: +``` + +### Ignore Certain Volume Mounts + +An annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container. + +```diff + spec: + template: + metadata: + annotations: ++ telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar" + spec: + containers: +``` + +### Note on Numeric Ports + +If the targetPort of your intercepted service is pointing at a port number, in addition to +injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will +reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent. + + +Note that this initContainer requires `NET_ADMIN` capabilities. +If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. + + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/2.8/reference/config.md b/docs/telepresence/2.8/reference/config.md new file mode 100644 index 000000000..8cffdb7ed --- /dev/null +++ b/docs/telepresence/2.8/reference/config.md @@ -0,0 +1,239 @@ +# Laptop-side configuration + +## Global Configuration +Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. 
The location of this file varies based on your OS: + +* macOS: `$HOME/Library/Application Support/telepresence/config.yml` +* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml` +* Windows: `%APPDATA%\telepresence\config.yml` + +For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence. + +### Values + +The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys. + +Here is an example configuration to show you the conventions of how Telepresence is configured: +**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist** + +```yaml +timeouts: + agentInstall: 1m + intercept: 10s +logLevels: + userDaemon: debug +images: + registry: privateRepo # This overrides the default docker.io/datawire repo + agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting +cloud: + refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week. +grpc: + maxReceiveSize: 10Mi +telepresenceAPI: + port: 9980 +intercept: + appProtocolStrategy: portName + defaultPort: "8088" +``` + +#### Timeouts + +Values for `timeouts` are all durations either as a number of seconds +or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings +can be fractional (`1.5h`) or combined (`2h45m`). + +These are the valid fields for the `timeouts` key: + +| Field | Description | Type | Default | +|-------------------------|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|------------| +| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes | +| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute | +| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds | +| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds | +| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds | +| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds | +| `trafficManagerAPI` | Waiting for connection to the gPRC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds | +| `helm` | Waiting for Helm operations (e.g. 
`install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes | + +#### Log Levels + +Values for the `logLevels` fields are one of the following strings, +case-insensitive: + + - `trace` + - `debug` + - `info` + - `warning` or `warn` + - `error` + +For whichever log-level you select, you will get logs labeled with that level and of higher severity. +(e.g. if you use `info`, you will also get logs labeled `error`. You will NOT get logs labeled `debug`. + +These are the valid fields for the `logLevels` key: + +| Field | Description | Type | Default | +|--------------|---------------------------------------------------------------------|---------------------------------------------|---------| +| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug | +| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info | + +#### Images +Values for `images` are strings. These values affect the objects that are deployed in the cluster, +so it's important to ensure users have the same configuration. + +Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them +from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook) +to handle installation of the `traffic-agents`. + +These are the valid fields for the `images` key: + +| Field | Description | Type | Default | +|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------| +| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. 
*The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h | +| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io | +| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 | + +Telepresence attempts to auto-detect if the cluster is capable of +communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with +Ambassador Cloud Telepresence may still prompt you to log in. If you want those auto-login points to be disabled +as well, or would like it to not attempt to communicate with +Ambassador Cloud at all (even for the auto-detection), then be sure to +set the `skipLogin` value to `true`. + +Reminder: To use personal intercepts, which normally require a login, +you must have a license key in your cluster and specify which +`agentImage` should be installed by also adding the following to your +`config.yml`: + +```yaml +images: + agentImage: / +``` + +#### Grpc +The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC. + +The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. 
For example, the following represent roughly the same value: +``` +128974848, 129e6, 129M, 123Mi +``` + +#### RESTful API server +The `telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted gets the `TELEPRESENCE_API_PORT` environment set. The server can then be queried at `localhost:`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port. +If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment to the app container when the `traffic-agent` is injected. +See [RESTful API server](../restapi) for more info. + +#### Intercept +The `intercept` controls applies to how Telepresence will intercept the communications to the intercepted service. + +The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080". + +The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are: + +| Value | Resulting action | +|--------------|--------------------------------------------------------------------------------------------------------| +| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2 | +| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port | +| `http` | Telepresence will use http | +| `http2` | Telepresence will use http2 | + +When `portName` is used, Telepresence will determine the protocol by the name of the port: `[-suffix]`. The following protocols are recognized: + +| Protocol | Meaning | +|----------|---------------------------------------| +| `http` | Plaintext HTTP/1.1 traffic | +| `http2` | Plaintext HTTP/2 traffic | +| `https` | TLS Encrypted HTTP (1.1 or 2) traffic | +| `grpc` | Same as http2 | + +#### Daemons + +`daemons` controls which binary to use for the user daemon. By default it will +use the Telepresence binary. For example, this can be used to tell Telepresence to +use the Telepresence Pro binary. + +| Field | Description | Type | Default | +|--------------------|-------------------------------------------------------------|--------------------|--------------------------------------| +| `userDaemonBinary` | The path to the binary you want to use for the User Daemon. | [string][yaml-str] | The path to Telepresence executable | + + +## Workstation Per-Cluster Configuration +The preferred way to configure cluster specific settings is to use the Helm chart, +see [Cluster-side configuration](../cluster-config), but some settings cannot be +declared globally, and sometimes it's desirable to make changes that only affect +one specific workstation. + +### DNS and Routing +DNS and Routing can be defined locally in your kubeconfig using the `telepresence.io` +extension. This might be a good approach while experimenting on a shared cluster installation. + +The fields for `dns` are: local-ip, remote-ip, exclude-suffixes, include-suffixes, and lookup-timeout. 
+
+| Field | Description | Type | Default |
+|--------------------|-------------|------|---------|
+| `local-ip` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
+| `remote-ip` | The address of the cluster's DNS service. | IP address [string][yaml-str] | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service |
+| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fall back in case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
+| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
+| `lookup-timeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |
+
+
+Example kubeconfig:
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        dns:
+          include-suffixes: [.private]
+          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
+        never-proxy: [10.0.0.0/16]
+        also-proxy: [10.0.5.0/24]
+  name: example-cluster
+```
+
+Please note that the names are dash-separated instead of camelCase, and that "also-proxy" and "never-proxy" lack the "Subnets" suffix.
+
+### Manager
+
+This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
+the Traffic Manager. When it differs from the default, this setting must be configured in the workstation's kubeconfig for the cluster.
+
+The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.
+
+Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:
+
+```yaml
+apiVersion: v1
+clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+        - name: telepresence.io
+          extension:
+            manager:
+              namespace: staging
+    name: example-cluster
+```
+
+[yaml-bool]: https://yaml.org/type/bool.html
+[yaml-float]: https://yaml.org/type/float.html
+[yaml-int]: https://yaml.org/type/int.html
+[yaml-seq]: https://yaml.org/type/seq.html
+[yaml-str]: https://yaml.org/type/str.html
+[go-duration]: https://pkg.go.dev/time#ParseDuration
+[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence/2.8/reference/dns.md b/docs/telepresence/2.8/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/2.8/reference/dns.md
@@ -0,0 +1,80 @@
+# DNS resolution
+
+The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in those namespaces by service-name only.
+
+All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.
+
+No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by its short name alone. Without an active intercept, the namespace-qualified DNS name must be used (in the form `.`).
+
+See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.
+
+Since no intercepts are currently running, we'll connect to the cluster and list the services that can be intercepted.
+
+```
+$ telepresence connect
+
+  Connecting to traffic manager...
+  Connected to context default (https://)
+
+$ telepresence list
+
+  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
+  emoji              : ready to intercept (traffic-agent not yet installed)
+  web                : ready to intercept (traffic-agent not yet installed)
+
+$ curl web-app:80
+
+  curl: (6) Could not resolve host: web-app
+
+```
+
+This is expected, as Telepresence cannot reach the service by its short name without an active intercept in that namespace.
+
+```
+$ curl web-app.emojivoto:80
+
+  ...
+  Emoji Vote
+  ...
+```
+
+Using the namespace-qualified DNS name, however, does work.
+Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.
+
+```
+$ telepresence intercept web --port 8080
+
+  Using Deployment web
+  intercepted
+    Intercept name    : web
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /tmp/telfs-166119801
+    Intercepting      : HTTP requests that match all headers:
+      'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'
+
+$ curl web-app:80
+
+  ...
+  Emoji Vote
+  ...
+```
+
+Now curling that service by its short name works, and will continue to work for as long as the intercept is active.
+
+The DNS resolver will always be able to resolve services using `.`, regardless of intercepts.
+
+### Supported Query Types
+
+The Telepresence DNS resolver is capable of resolving queries of type `A`, `AAAA`, `CNAME`,
+`MX`, `NS`, `PTR`, `SRV`, and `TXT`.
+
+See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.8/reference/docker-run.md b/docs/telepresence/2.8/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.8/reference/docker-run.md
@@ -0,0 +1,31 @@
+---
+Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
+---
+
+# Using Docker for intercepts
+
+If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.
+
+`telepresence intercept --port --docker-run -- `
+
+The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.
+
+## Example
+
+Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`.
To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container:
+
+`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`
+
+## Ports
+
+The `--port` flag can specify an additional port when `--docker-run` is used, so that the local port and the container port can be different. This is done using `--port :`. The container port will default to the local port when using the `--port ` syntax.
+
+## Flags
+
+Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.
+
+- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
+- `--env-file ` Loads the intercepted environment
+- `--name intercept--` Names the Docker container; this flag is omitted if a name is explicitly given on the command line
+- `-p ` The local port for the intercept and the container port
+- `-v ` Volume mount specification; see the CLI help for the `--mount` and `--docker-mount` flags for more info
diff --git a/docs/telepresence/2.8/reference/environment.md b/docs/telepresence/2.8/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.8/reference/environment.md
@@ -0,0 +1,46 @@
+---
+description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
+---
+
+# Environment variables
+
+Telepresence can import environment variables from the cluster pod when running an intercept.
+You can then use these variables with the code of the intercepted service that runs on your laptop.
+
+There are three options available to do this:
+
+1. `telepresence intercept [service] --port [port] --env-file=FILENAME`
+
+   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.
+
+2. `telepresence intercept [service] --port [port] --env-json=FILENAME`
+
+   This will write the environment variables to a JSON file. This file can be injected into other build processes.
+
+3. `telepresence intercept [service] --port [port] -- [COMMAND]`
+
+   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.
+
+   Another use would be running a subshell, Bash for example:
+
+   `telepresence intercept [service] --port [port] -- /bin/bash`
+
+   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.
+
+## Telepresence Environment Variables
+
+Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:
+
+### TELEPRESENCE_ROOT
+Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.
+
+### TELEPRESENCE_MOUNTS
+Colon-separated list of remotely mounted directories.
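+
+Since the list is colon-separated, a local process can split it to discover each mount. A minimal shell sketch (the variable is only set while an intercept is active, and the printed paths are illustrative):
+
+```bash
+# Print every remotely mounted directory provided by the intercept.
+IFS=':' read -ra mounts <<< "$TELEPRESENCE_MOUNTS"
+for dir in "${mounts[@]}"; do
+  echo "remote mount: $dir"
+done
+```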
+
+### TELEPRESENCE_CONTAINER
+The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.
+
+### TELEPRESENCE_INTERCEPT_ID
+ID of the intercept (same as the "x-intercept-id" HTTP header).
+
+Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do have it set instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a given intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence/2.8/reference/inside-container.md b/docs/telepresence/2.8/reference/inside-container.md
new file mode 100644
index 000000000..637e0cdfd
--- /dev/null
+++ b/docs/telepresence/2.8/reference/inside-container.md
@@ -0,0 +1,37 @@
+# Running Telepresence inside a container
+
+It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason is to avoid any side effects on the workstation's network; another is to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.
+
+## Building the container
+
+Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:
+
+```Dockerfile
+# Dockerfile with telepresence and its prerequisites
+FROM alpine:3.13
+
+# Install Telepresence prerequisites
+RUN apk add --no-cache curl iproute2 sshfs
+
+# Download and install the telepresence binary
+RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
+    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
+```
+In order to build the container, run this in the same directory as the `Dockerfile`:
+```
+$ docker build -t tp-in-docker .
+```
+
+## Running the container
+
+Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.
+
+The command to run the container can look like this:
+```bash
+$ docker run \
+  --cap-add=NET_ADMIN \
+  --device /dev/net/tun:/dev/net/tun \
+  --network=host \
+  -v ~/.kube/config:/root/.kube/config \
+  -it --rm tp-in-docker
+```
diff --git a/docs/telepresence/2.8/reference/intercepts/index.md b/docs/telepresence/2.8/reference/intercepts/index.md
new file mode 100644
index 000000000..83cf271c8
--- /dev/null
+++ b/docs/telepresence/2.8/reference/intercepts/index.md
@@ -0,0 +1,403 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Intercepts
+
+When intercepting a service, Telepresence installs a *traffic-agent*
+sidecar into the workload. That traffic-agent supports one or more
+intercept *mechanisms* that it uses to decide which traffic to
+intercept.
Telepresence has a simple default traffic-agent; however,
+you can configure a different traffic-agent with more sophisticated
+mechanisms either by setting the [`images.agentImage` field in
+`config.yml`](../config/#images) or by writing an
+[`extensions/${extension}.yml`][extensions] file that tells
+Telepresence about a traffic-agent that it can use, what mechanisms
+that traffic-agent supports, and command-line flags to expose to the
+user to configure that mechanism. You may tell Telepresence which
+known mechanism to use with the `--mechanism=${mechanism}` flag or by
+setting one of the `--${mechanism}-XXX` flags, which implicitly set
+the mechanism; for example, setting `--http-header=auto` implicitly
+sets `--mechanism=http`.
+
+The default open-source traffic-agent only supports the `tcp`
+mechanism, which treats the raw layer 4 TCP streams as opaque and
+sends all of that traffic down to the developer's workstation. This
+means that it is a "global" intercept, affecting all users of the
+cluster.
+
+In addition to the default open-source traffic-agent, Telepresence
+already knows about the Ambassador Cloud
+[traffic-agent][ambassador-agent], which supports the `http`
+mechanism. The `http` mechanism operates at a higher layer, working
+with layer 7 HTTP, and may intercept specific HTTP requests, allowing
+other HTTP requests through to the regular service. This allows for
+"personal" intercepts which only intercept traffic tagged as belonging
+to a given developer.
+
+[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
+[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2.4/pkg/client/cli/extensions/builtin.go#L30-L50
+
+## Intercept behavior when logged in to Ambassador Cloud
+
+Logging in to Ambassador Cloud (with [`telepresence
+login`](../client/login/)) changes the Telepresence defaults in two
+ways.
+
+First, being logged in to Ambassador Cloud causes Telepresence to
+default to `--mechanism=http --http-header=auto --http-path-prefix=/`
+(`--mechanism=http` is redundant here, since it is implied by the
+other `--http-xxx` flags). If you hadn't been logged in, it would have
+defaulted to `--mechanism=tcp`. This tells Telepresence to use the
+Ambassador Cloud traffic-agent to do smart "personal" intercepts and
+only intercept a subset of HTTP requests, rather than just intercepting
+the entirety of all TCP connections. This is important for working in a
+shared cluster with teammates, and is important for the preview URL
+functionality below. See `telepresence intercept --help` for
+information on using the `--http-header` and `--http-path-xxx` flags to
+customize which requests are intercepted.
+
+Secondly, being logged in causes Telepresence to default to
+`--preview-url=true`. If you hadn't been logged in, it would have
+defaulted to `--preview-url=false`. This tells Telepresence to take
+advantage of Ambassador Cloud to create a preview URL for this
+intercept, creating a shareable URL that automatically sets the
+appropriate headers to have requests coming from the preview URL be
+intercepted. In order to create the preview URL, it will prompt you
+for four settings about how your cluster's ingress is configured. For
+each, Telepresence tries to intelligently detect the correct value for
+your cluster; if it detects it correctly, you may simply press "enter"
+to accept the default, otherwise you must tell Telepresence the correct
+value.
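+
+For example, a personal intercept that only captures requests carrying a specific header could be created like this (the service name and header value are hypothetical, shown only for illustration):
+
+```console
+$ telepresence intercept example-service --port 8080 --http-header=x-dev-user=jane
+```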
+
+When creating an intercept with the `http` mechanism, the
+traffic-agent sends a `GET /telepresence-http2-check` request to your
+service and to the process running on your local machine at the port
+specified in your intercept, in order to determine if they support
+HTTP/2. This is required for the intercepts to behave correctly. If
+you do not have a service running locally when the intercept is
+created, the traffic-agent will use the result it got from checking
+the in-cluster service.
+
+## Supported workloads
+
+Kubernetes has various
+[workloads](https://kubernetes.io/docs/concepts/workloads/).
+Currently, Telepresence supports intercepting (installing a
+traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
+
+<Alert severity="info">
+
+While many of our examples use Deployments, they would also work on
+ReplicaSets and StatefulSets.
+
+</Alert>
+
+## Specifying a namespace for an intercept
+
+The namespace of the intercepted workload is specified using the
+`--namespace` option. When this option is used, and `--workload` is
+not used, then the given name is interpreted as the name of the
+workload and the name of the intercept will be constructed from that
+name and the namespace.
+
+```shell
+telepresence intercept hello --namespace myns --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept
+`hello-myns`. In order to remove the intercept, you will need to run
+`telepresence leave hello-myns` instead of just `telepresence leave
+hello`.
+
+The name of the intercept will be left unchanged if the workload is specified.
+
+```shell
+telepresence intercept myhello --namespace myns --workload hello --port 9000
+```
+
+This will intercept a workload named `hello` and name the intercept `myhello`.
+
+## Importing environment variables
+
+Telepresence can import the environment variables from the pod that is
+being intercepted; see [this doc](../environment/) for more details.
+
+## Creating an intercept without a preview URL
+
+If you *are not* logged in to Ambassador Cloud, the following command
+will intercept all traffic bound to the service and proxy it to your
+laptop. This includes traffic coming through your ingress controller,
+so use this option carefully so as not to disrupt production
+environments.
+
+```shell
+telepresence intercept --port=
+```
+
+If you *are* logged in to Ambassador Cloud, setting the
+`--preview-url` flag to `false` is necessary.
+
+```shell
+telepresence intercept --port= --preview-url=false
+```
+
+This will output an HTTP header that you can set on your request for
+that traffic to be intercepted:
+
+```console
+$ telepresence intercept --port= --preview-url=false
+Using Deployment
+intercepted
+    Intercept name: 
+    State         : ACTIVE
+    Workload kind : Deployment
+    Destination   : 127.0.0.1:
+    Intercepting  : HTTP requests that match all of:
+      header("x-telepresence-intercept-id") ~= regexp(":")
+```
+
+Run `telepresence status` to see the list of active intercepts.
+
+```console
+$ telepresence status
+Root Daemon: Running
+  Version     : v2.1.4 (api 3)
+  Primary DNS : ""
+  Fallback DNS: ""
+User Daemon: Running
+  Version           : v2.1.4 (api 3)
+  Ambassador Cloud  : Logged out
+  Status            : Connected
+  Kubernetes server : https://
+  Kubernetes context: default
+  Telepresence proxy: ON (networking to the cluster is enabled)
+  Intercepts        : 1 total
+    dataprocessingnodeservice: @
+```
+
+Finally, run `telepresence leave ` to stop the intercept.
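+
+For example, to stop the intercept shown in the status output above:
+
+```console
+$ telepresence leave dataprocessingnodeservice
+```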
+
+## Skipping the ingress dialogue
+
+You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be reported.
+
+| Flag | Description | Required |
+|------------------|-------------------------------------------------------------------|----------|
+| `--ingress-host` | The IP address for the ingress | yes |
+| `--ingress-port` | The port for the ingress | yes |
+| `--ingress-tls` | Whether TLS should be used | no |
+| `--ingress-l5` | Whether a different IP address should be used in request headers | no |
+
+## Creating an intercept when a service has multiple ports
+
+If you are trying to intercept a service that has multiple ports, you
+need to tell Telepresence which service port you are trying to
+intercept. To specify, you can either use the name of the service
+port or the port number itself. To see which options might be
+available to you and your service, use kubectl to describe your
+service or look in the object's YAML. For more information on multiple
+ports, see the [Kubernetes documentation][kube-multi-port-services].
+
+[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+
+```console
+$ telepresence intercept --port=:
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+When intercepting a service that has multiple ports, the name of the
+service port that has been intercepted is also listed.
+
+If you want to change which port has been intercepted, you can create
+a new intercept the same way you did above and it will change which
+service port is being intercepted.
+
+## Creating an intercept when multiple services match your workload
+
+Oftentimes, there's a 1-to-1 relationship between a service and a
+workload, so Telepresence is able to auto-detect which service it
+should intercept based on the workload you are trying to intercept.
+But if you use something like
+[Argo](https://www.getambassador.io/docs/argo/latest/quick-start/), there may be
+two services (that use the same labels) to manage traffic between a
+canary and a stable service.
+
+Fortunately, if you know which service you want to use when
+intercepting a workload, you can use the `--service` flag. So in the
+aforementioned example, if you wanted to use the `echo-stable` service
+when intercepting your workload, your command would look like this:
+
+```console
+$ telepresence intercept echo-rollout- --port --service echo-stable
+Using ReplicaSet echo-rollout-
+intercepted
+    Intercept name    : echo-rollout-
+    State             : ACTIVE
+    Workload kind     : ReplicaSet
+    Destination       : 127.0.0.1:3000
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
+    Intercepting      : all TCP connections
+```
+
+## Intercepting multiple ports
+
+It is possible to intercept more than one service and/or service port that use the same workload. You do this
+by creating more than one intercept that identifies the same workload using the `--workload` flag.
+
+Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
+targeting the same `multi-echo` deployment.
+
+```console
+$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-http
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8080
+    Service Port Identifier: http
+    Volume Mount Point     : /tmp/telfs-893700837
+    Intercepting           : all TCP requests
+    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
+Using Deployment multi-echo
+intercepted
+    Intercept name         : multi-echo-grpc
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:8443
+    Service Port Identifier: grpc
+    Volume Mount Point     : /tmp/telfs-1277723591
+    Intercepting           : all TCP requests
+    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
+    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
+```
+
+## Port-forwarding an intercepted container's sidecars
+
+Sidecars are containers that sit in the same pod as an application
+container; they usually provide auxiliary functionality to an
+application, and can usually be reached at
+`localhost:${SIDECAR_PORT}`. For example, a common use case for a
+sidecar is to proxy requests to a database: your application would
+connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
+connect to the database, perhaps augmenting the connection with TLS or
+authentication.
+
+When intercepting a container that uses sidecars, you might want those
+sidecars' ports to be available to your local application at
+`localhost:${SIDECAR_PORT}`, exactly as they would be if running
+in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
+behavior, adding port-forwards for the port given.
+
+```console
+$ telepresence intercept --port=: --to-pod=
+Using Deployment
+intercepted
+    Intercept name         : 
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:
+    Service Port Identifier: 
+    Intercepting           : all TCP connections
+```
+
+If there are multiple ports that you need forwarded, simply repeat the
+flag (`--to-pod= --to-pod=`).
+
+## Intercepting headless services
+
+Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
+which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
+Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
+So, for example, if you have the following service:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-headless
+spec:
+  type: ClusterIP
+  clusterIP: None
+  selector:
+    service: my-headless
+  ports:
+    - port: 8080
+      targetPort: 8080
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: my-headless
+  labels:
+    service: my-headless
+spec:
+  replicas: 1
+  serviceName: my-headless
+  selector:
+    matchLabels:
+      service: my-headless
+  template:
+    metadata:
+      labels:
+        service: my-headless
+    spec:
+      containers:
+        - name: my-headless
+          image: jmalloc/echo-server
+          ports:
+            - containerPort: 8080
+          resources: {}
+```
+
+You can intercept it like any other:
+
+```console
+$ telepresence intercept my-headless --port 8080
+Using StatefulSet my-headless
+intercepted
+    Intercept name    : my-headless
+    State             : ACTIVE
+    Workload kind     : StatefulSet
+    Destination       : 127.0.0.1:8080
+    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
+    Intercepting      : all TCP connections
+```
+
+<Alert severity="info">
+This utilizes an initContainer that requires `NET_ADMIN` capabilities.
+If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
+</Alert>
+
+<Alert severity="info">
+This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
+To enable running as GID 7777 on a specific OpenShift namespace, run:
+`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
+</Alert>
+
+<Alert severity="warning">
+Intercepting headless services without a selector is not supported.
+</Alert>
+
+## Sharing intercepts with teammates
+
+Once a combination of flags that easily intercepts a service has been found, it's useful to share it with teammates. You
+can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts/history),
+picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. The intercept
+command will then be easily accessible to all your teammates. Note that this requires the free enhanced
+client to be installed and that you are logged in (`telepresence login`).
+
+To instantiate an intercept based on a saved intercept, simply run
+`telepresence intercept --use-saved-intercept `. When logged in, the command will first check for a
+saved intercept in Ambassador Cloud and will use it if found; otherwise an error will be returned.
+
+Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).
diff --git a/docs/telepresence/2.8/reference/intercepts/manual-agent.md b/docs/telepresence/2.8/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.8/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Manually injecting the Traffic Agent
+
+You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.
+
+When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
+will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
+as when very strict company security policies prevent it.
+
+
+Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable;
+try the Mutating Webhook before proceeding.
+ + +## Procedure + +You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page +uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "my-service" + labels: + service: my-service +spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: "my-service" +spec: + type: ClusterIP + selector: + service: my-service + ports: + - port: 80 + targetPort: 8080 +``` + +### 1. Generating the YAML + +First, generate the YAML for the traffic-agent configmap entry. It's important that the generated file have +the same name as the service, and no extension: + +```console +$ telepresence genyaml config --workload my-service -o /tmp/my-service +$ cat /tmp/my-service-config.yaml +agentImage: docker.io/datawire/tel2:2.7.0 +agentName: my-service +containers: +- Mounts: null + envPrefix: A_ + intercepts: + - agentPort: 9900 + containerPort: 8080 + protocol: TCP + serviceName: my-service + servicePort: 80 + serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd + mountPoint: /tel_app_mounts/echo-container + name: echo-container +logLevel: info +managerHost: traffic-manager.ambassador +managerPort: 8081 +manual: true +namespace: default +workloadKind: Deployment +workloadName: my-service +``` + +Next, generate the YAML for the traffic-agent container: + +```console +$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml +$ cat /tmp/my-service-agent.yaml +args: +- agent +env: +- name: _TEL_AGENT_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: traffic-agent +ports: +- containerPort: 9900 + protocol: TCP +readinessProbe: + exec: + command: + - /bin/stat + - /tmp/agent/ready +resources: {} +volumeMounts: +- mountPath: /tel_pod_info + name: traffic-annotations +- mountPath: /etc/traffic-agent + name: traffic-config +- mountPath: /tel_app_exports + name: export-volume + name: traffic-annotations +``` + +Next, generate the init-container + +```console +$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml +$ cat /tmp/my-service-init.yaml +args: +- agent-init +image: docker.io/datawire/tel2:2.7.0-beta.12 +name: tel-agent-init +resources: {} +securityContext: + capabilities: + add: + - NET_ADMIN +volumeMounts: +- mountPath: /etc/traffic-agent + name: traffic-config +``` + +Next, generate the YAML for the volumes: + +```console +$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml +$ cat /tmp/my-service-volume.yaml +- downwardAPI: + items: + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: traffic-annotations +- configMap: + items: + - key: my-service + path: config.yaml + name: telepresence-agents + name: traffic-config +- emptyDir: {} + name: export-volume + +``` + + +Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags. + + +### 2. Creating (or updating) the configmap + +The generated configmap entry must be insterted into the `telepresence-agents` `ConfigMap` in the same namespace as the +modified `Deployment`. 
If the `ConfigMap` doesn't exist yet, it can be created using the following command: + +```console +$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service +``` + +If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`. + +### 3. Injecting the YAML into the Deployment + +You need to add the `Deployment` YAML you genereated to include the container and the volume. These are placed as elements +of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively. +You also need to modify `spec.template.metadata.annotations` and add the annotation +`telepresence.getambassador.io/manually-injected: "true"`. These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence/2.8/reference/linkerd.md b/docs/telepresence/2.8/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.8/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. 
Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: quote
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: quote
+  strategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: "enabled"
+        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
+      labels:
+        app: quote
+    spec:
+      containers:
+      - name: backend
+        image: docker.io/datawire/quote:0.4.1
+        ports:
+        - name: http
+          containerPort: 8000
+        env:
+        - name: PORT
+          value: "8000"
+        resources:
+          limits:
+            cpu: "0.1"
+            memory: 100Mi
+```
+
+## Connect to Telepresence
+Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:
+
+```
+$ telepresence list
+
+  quote: ready to intercept (traffic-agent not yet installed)
+```
+
+## Run the intercept
+Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.
diff --git a/docs/telepresence/2.8/reference/rbac.md b/docs/telepresence/2.8/reference/rbac.md
new file mode 100644
index 000000000..d78133441
--- /dev/null
+++ b/docs/telepresence/2.8/reference/rbac.md
@@ -0,0 +1,236 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Telepresence RBAC
+The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
+This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.
+
+There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.
+
+In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.
+
+## Requirements
+
+- Kubernetes version 1.16+
+- Cluster admin privileges to apply RBAC
+
+## Editing your kubeconfig
+
+This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, you should create a `config.yaml` in your current directory that looks like the following:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: my-cluster # Must match the cluster value in the contexts config
+  cluster:
+    ## The cluster field is highly cloud dependent.
+contexts: +- name: my-context + context: + cluster: my-cluster # Must match the name field in the clusters config + user: tp-user +users: +- name: tp-user # Must match the name of the Service Account created by the cluster admin + user: + token: # See note below +``` + +The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `-token-`. This token can be obtained by your cluster administrator by running `kubectl get secret -n ambassador -o jsonpath='{.data.token}' | base64 -d`. + +After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. + +## Administrating Telepresence + +Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager) which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence: + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: telepresence-admin + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-admin-role +rules: + - apiGroups: [""] + resources: ["pods", "pods/log"] + verbs: ["get", "list", "create", "delete", "watch"] + - apiGroups: [""] + resources: ["services"] + verbs: ["get", "list", "update", "create", "delete"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] + - apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "update", "create", "delete", "watch"] + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"] + verbs: ["get", "list", "watch", "create", "delete"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["get", "list", "watch", "delete"] + resourceNames: ["telepresence-agents"] + - apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list", "watch", "create"] + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "create", "list", "delete"] + - apiGroups: [""] + resources: ["serviceaccounts"] + verbs: ["get", "create", "delete"] + - apiGroups: ["admissionregistration.k8s.io"] + resources: ["mutatingwebhookconfigurations"] + verbs: ["get", "create", "delete"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["list", "get", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-clusterrolebinding +subjects: + - name: telepresence-admin + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-admin-role + kind: ClusterRole +``` + +There are two ways to install the traffic-manager: Using `telepresence connect` and installing the [helm chart](../../install/helm/). + +By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. 
If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager. + +## Cluster-wide telepresence user access + +To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality. + +The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: tp-user # Update value for appropriate value + namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: telepresence-role +rules: +# For gather-logs command +- apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +- apiGroups: [""] + resources: ["pods"] + verbs: ["list"] +# Needed in order to maintain a list of workloads +- apiGroups: ["apps"] + resources: ["deployments", "replicasets", "statefulsets"] + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["namespaces", "services"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: telepresence-rolebinding +subjects: +- name: tp-user + kind: ServiceAccount + namespace: ambassador +roleRef: + apiGroup: rbac.authorization.k8s.io + name: telepresence-role + kind: ClusterRole +``` + +### Traffic Manager connect permission +In addition to the cluster-wide permissions, the client will also need the following namespace scoped permissions +in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager. +```yaml +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: traffic-manager-connect +rules: + - apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch"] + - apiGroups: [""] + resources: ["pods/portforward"] + verbs: ["create"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: traffic-manager-connect +subjects: + - name: telepresence-test-developer + kind: ServiceAccount + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + name: traffic-manager-connect + kind: Role +``` + +## Namespace only telepresence user access + +RBAC for multi-tenant scenarios where multiple dev teams are sharing a single cluster where users are constrained to a specific namespace(s). 
+
+The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.
+
+For each accessible namespace:
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tp-user # Update value for appropriate user name
+  namespace: tp-namespace # Update value for appropriate namespace
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+rules:
+- apiGroups: [""]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+  resources: ["deployments", "replicasets", "statefulsets"]
+  verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: telepresence-role-binding
+  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
+subjects:
+- kind: ServiceAccount
+  name: tp-user # Should be the same as metadata.name of above ServiceAccount
+roleRef:
+  kind: Role
+  name: telepresence-role
+  apiGroup: rbac.authorization.k8s.io
+```
+
+The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
diff --git a/docs/telepresence/2.8/reference/restapi.md b/docs/telepresence/2.8/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence/2.8/reference/restapi.md
@@ -0,0 +1,93 @@
+# Telepresence RESTful API server
+
+[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.
+
+## Enabling the server
+The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.
+
+## Querying the server
+On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.
+
+## Endpoints
+
+The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: = ` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.
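+
+On the workstation, where both of these variables are present in the intercept's environment, a query could look like this minimal sketch (the query path and the extra `x: y` header are illustrative only):
+
+```bash
+# Ask the API server whether this process should handle the message.
+curl "localhost:${TELEPRESENCE_API_PORT}/consume-here?path=/api" \
+  -H "x-telepresence-caller-intercept-id: ${TELEPRESENCE_INTERCEPT_ID}" \
+  -H "x: y"
+```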
+
+There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
+1. An intercept must be active.
+2. The "/healthz" endpoint must respond with OK.
+3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.
+
+### healthz
+The `http://localhost:/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container, and/or to the environment that is propagated to the interceptor that runs on the local workstation.
+
+#### test endpoint using curl
+A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
+```console
+$ curl -v localhost:9980/healthz
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /healthz HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Date: Fri, 26 Nov 2021 07:06:18 GMT
+< Content-Length: 0
+<
+* Connection #0 to host localhost left intact
+```
+
+### consume-here
+`http://localhost:/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
+```console
+$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+*   Trying ::1:9980...
+* Connected to localhost (::1) port 9980 (#0)
+> GET /consume-here?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.76.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Fri, 26 Nov 2021 06:43:28 GMT
+< Content-Length: 4
+<
+* Connection #0 to host localhost left intact
+true
+```
+
+If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod.
+
+### intercept-info
+`http://localhost:/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is omitted when `intercepted` is `false`.
+
+#### test endpoint using curl
+Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
+```console
+$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
+*   Trying ::1:9980...
+* Connected to localhost (127.0.0.1) port 9980 (#0)
+> GET /intercept-info?path=/api HTTP/1.1
+> Host: localhost:9980
+> User-Agent: curl/7.79.1
+> Accept: */*
+> x: y
+> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
+>
+* Mark bundle as not supporting multiuse
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< Date: Tue, 01 Feb 2022 11:39:55 GMT
+< Content-Length: 68
+<
+{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
+* Connection #0 to host localhost left intact
+```
diff --git a/docs/telepresence/2.8/reference/routing.md b/docs/telepresence/2.8/reference/routing.md
new file mode 100644
index 000000000..cc88490a0
--- /dev/null
+++ b/docs/telepresence/2.8/reference/routing.md
@@ -0,0 +1,69 @@
+# Connection Routing
+
+## Outbound
+
+### DNS resolution
+When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
+[cluster DNS configuration](../cluster-config/#dns).
+
+#### Cluster-side DNS lookups
+The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.
+
+#### macOS resolver
+This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts, so that single-label names can be resolved correctly.
+
+#### Linux systemd-resolved resolver
+This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.
#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may differ. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries), and to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't; Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster.
As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach. Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for preferring the correct origin. One is that it is important that the request originates from the correct namespace. Example:

```bash
curl some-host
```
results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host's network, which means that both DNS and L4 routing may be subjected to recursion.

#### DNS recursion
When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this detection to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.

The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed.
If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.

##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
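For the curious, the trick in footnote 2 is easy to reproduce with plain kubectl; a minimal sketch against a hypothetical cluster (the exact error wording varies with the Kubernetes version, and the reported range will be your cluster's actual service subnet):

```console
$ kubectl create service clusterip dummy-svc --clusterip=1.1.1.1
The Service "dummy-svc" is invalid: spec.clusterIP: Invalid value: "1.1.1.1":
provided IP is not in the valid range. The range of valid IPs is 10.96.0.0/12
```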

diff --git a/docs/telepresence/2.8/reference/tun-device.md b/docs/telepresence/2.8/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.8/reference/tun-device.md @@ -0,0 +1,27 @@
# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device
The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP
The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).

### No SSH required

The VIF approach is somewhat similar to using `sshuttle`, but without any requirements for extra software, configuration or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption
When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.

### No Firewall rules
With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/2.8/reference/volume.md b/docs/telepresence/2.8/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.8/reference/volume.md @@ -0,0 +1,36 @@
# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
```
telepresence intercept <service-name> --port <port> --mount=/tmp/ -- /bin/bash
```

In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.

Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.

```
$ telepresence intercept <service-name> --port <port> --mount=true -- /bin/bash
Using Deployment <name>
intercepted
    Intercept name    : <name>
    State             : ACTIVE
    Workload kind     : Deployment
    Destination       : 127.0.0.1:<port>
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
    Intercepting      : all TCP connections

bash-3.2$ echo $TELEPRESENCE_ROOT
/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
```

`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.

With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prepend the `$TELEPRESENCE_ROOT` environment variable to file paths in order to utilize the mounted volumes.

For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.

If using `--mount=true` without a command, you can use either environment variable flag (such as `--env-file` or `--env-json`) to retrieve the variable.
diff --git a/docs/telepresence/2.8/reference/vpn.md b/docs/telepresence/2.8/reference/vpn.md new file mode 100644 index 000000000..91213babc --- /dev/null +++ b/docs/telepresence/2.8/reference/vpn.md @@ -0,0 +1,155 @@
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
   It may be useful to add the contents of /etc/resolv.conf
```

#### Interpreting test results

##### Case 1: VPN masked by cluster

In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN
routes:

```
❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
  * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
  * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
```

This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
telepresence is connected.

The ideal resolution in this case is to move the pods to a different subnet. This is possible,
for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
to reach hosts inside the VPC's `10.0.0.0/19`.

However, it is not always possible to move the pods to a different subnet.
In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain
hosts from being masked.
This might be particularly important for DNS resolution. In an AWS ClientVPN setup it is often
customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):

![Modify Client VPN Endpoint](../images/vpn-dns.png)

If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values <values-file>`, add a `client.routing`
entry like so:

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.2/32
```

##### Case 2: Cluster masked by VPN

In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR
that the VPN routes:

```
❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
  * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
```

Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
connected.

As with the first case, the ideal resolution is to move the pods away, but this may not always
be possible. In that case, your best bet is to attempt to shrink the mask of the VPN's CIDR
(that is, make it route more hosts) so that Telepresence's routes win by virtue of specificity.
One easy way to do this may be to disable split tunneling (see the [prerequisites](#prerequisites)
section for more on split-tunneling).

Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
to use never-proxy rules to whitelist hosts in the VPN:

```
❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
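Once you have added the necessary never-proxy entries to your values file, apply them by upgrading the traffic-manager and then reconnecting. A minimal sketch, assuming the `client.routing` values shown earlier are saved in a hypothetical `vpn-values.yaml`:

```console
$ telepresence helm install --upgrade --values vpn-values.yaml
$ telepresence quit -s
$ telepresence connect
```

The client receives the `client.routing` configuration from the traffic-manager, so a fresh connect is needed before new never-proxy subnets take effect.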
diff --git a/docs/telepresence/2.8/release-notes/no-ssh.png b/docs/telepresence/2.8/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.8/release-notes/run-tp-in-docker.png b/docs/telepresence/2.8/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.2.png b/docs/telepresence/2.8/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.8/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.8/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.8/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.8/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.8/release-notes/telepresence-2.5.0-pro-daemon.png 
b/docs/telepresence/2.8/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.8/release-notes/tunnel.jpg b/docs/telepresence/2.8/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.8/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.8/releaseNotes.yml b/docs/telepresence/2.8/releaseNotes.yml new file mode 100644 index 000000000..c434beb0e --- /dev/null +++ b/docs/telepresence/2.8/releaseNotes.yml @@ -0,0 +1,1804 @@
# This file should be placed in the folder for the version of the
# product that's meant to be documented. A `/release-notes` page will
# be automatically generated and populated at build time.
#
# Note that an entry needs to be added to the `doc-links.yml` file in
# order to surface the release notes in the table of contents.
#
# The YAML in this file should contain:
#
# changelog: An (optional) URL to the CHANGELOG for the product.
# items: An array of releases with the following attributes:
#     - version: The (optional) version number of the release, if applicable.
#     - date: The date of the release in the format YYYY-MM-DD.
#     - notes: An array of noteworthy changes included in the release, each having the following attributes:
#         - type: The type of change, one of `bugfix`, `feature`, `security` or `change`.
#         - title: A short title of the noteworthy change.
#         - body: >-
#             Two or three sentences describing the change and why it
#             is noteworthy. This is HTML, not plain text or
#             markdown. It is handy to use YAML's ">-" feature to
#             allow line-wrapping.
#         - image: >-
#             The URL of an image that visually represents the
#             noteworthy change. This path is relative to the
#             `release-notes` directory; if this file is
#             `FOO/releaseNotes.yml`, then the image paths are
#             relative to `FOO/release-notes/`.
#         - docs: The path to the documentation page where additional information can be found.
#         - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link.

docTitle: Telepresence Release Notes
docDescription: >-
  Release notes for Telepresence by Ambassador Labs, a CNCF project
  that enables developers to iterate rapidly on Kubernetes
  microservices by arming them with infinite-scale development
  environments, access to instantaneous feedback loops, and highly
  customizable development environments.

changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md

items:
  - version: 2.8.5
    date: "2022-11-2"
    notes:
      - type: security
        title: CVE-2022-41716
        body: >-
          Updated Golang to 1.19.3 to address CVE-2022-41716.
  - version: 2.8.4
    date: "2022-11-2"
    notes:
      - type: bugfix
        title: Release Process
        body: >-
          This release resulted in changes to our release process.
  - version: 2.8.3
    date: "2022-10-27"
    notes:
      - type: feature
        title: Ability to disable global intercepts.
        body: >-
          Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal.
        docs: https://github.com/telepresenceio/telepresence/issues/2140
      - type: feature
        title: Configurable mutating webhook port
        body: >-
          The port used for the mutating webhook can be configured using the Helm chart setting
          agentInjector.webhook.port.
        docs: install/helm
      - type: change
        title: Mutating webhook port defaults to 443
        body: >-
          The default port for the mutating webhook is now 443. It used to be 8443.
      - type: change
        title: Agent image configuration mandatory in air-gapped environments.
        body: >-
          The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is
          unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart.
      - type: bugfix
        title: Can now connect to non-helm installs
        body: >-
          telepresence connect now works as long as the traffic manager is installed, even if
          it wasn't installed via <code>helm install</code>
        docs: https://github.com/telepresenceio/telepresence/issues/2824
      - type: bugfix
        title: check-vpn crash fixed
        body: >-
          telepresence check-vpn no longer crashes when the daemons don't start properly.
  - version: 2.8.2
    date: "2022-10-15"
    notes:
      - type: bugfix
        title: Reinstate 2.8.0
        body: >-
          There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated.
  - version: 2.8.1
    date: "2022-10-14"
    notes:
      - type: bugfix
        title: Rollback 2.8.0
        body: >-
          Rollback 2.8.0 while we investigate an issue with Ambassador Cloud.
  - version: 2.8.0
    date: "2022-10-14"
    notes:
      - type: feature
        title: Improved DNS resolver
        body: >-
          The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME,
          MX, NS, PTR, SRV, and TXT.
        docs: reference/dns
      - type: feature
        title: New `client` structure in Helm chart
        body: >-
          A new client struct was added to the Helm chart. It contains a connectionTTL that controls
          how long the traffic manager will retain a client connection without seeing any sign of life from the client.
        docs: reference/cluster-config#Client-Configuration
      - type: feature
        title: Include and exclude suffixes configurable using the Helm chart.
        body: >-
          A dns element was added to the client struct in the Helm chart. It contains an includeSuffixes and
          an excludeSuffixes value that controls which names the DNS resolver in the client will delegate to
          the cluster.
        docs: reference/cluster-config#DNS
      - type: feature
        title: Configurable traffic-manager API port
        body: >-
          The API port used by the traffic-manager is now configurable using the Helm chart value apiPort.
          The default port is 8081.
        docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence
      - type: feature
        title: Envoy server and admin port configuration.
        body: >-
          A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and
          admin port of the Envoy proxy running in the enhanced traffic-agent can be configured.
        docs: reference/cluster-config#Envoy-Configuration
      - type: change
        title: Helm chart `dnsConfig` moved to `client.routing`.
        body: >-
          The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets
          and neverProxySubnets can now be found under routing in the client struct.
        docs: reference/cluster-config#Routing
      - type: change
        title: Helm chart `agentInjector.agentImage` moved to `agent.image`.
        body: >-
          The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but
          retained for backward compatibility.
        docs: reference/cluster-config#Image-Configuration
      - type: change
        title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`.
        body: >-
          The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old
          value is deprecated but retained for backward compatibility.
        docs: reference/cluster-config#Application-Protocol-Selection
      - type: change
        title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed.
        body: >-
          The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because
          they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to
          dedicate an IP for its local resolver.
      - type: change
        title: Quit daemons with `telepresence quit -s`
        body: >-
          The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option `-s` which will
          quit both the root daemon and the user daemon.
      - type: bugfix
        title: Environment variable interpolation in pods now works.
        body: >-
          Environment variable interpolation now works for all definitions that are copied from pod containers
          into the injected traffic-agent container.
      - type: bugfix
        title: Early detection of namespace conflict
        body: >-
          An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation
          is detected early and prohibited instead of resulting in failing DNS lookups later on.
      - type: bugfix
        title: Annoying log message removed
        body: >-
          Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason
          is normal context cancellation.
      - type: bugfix
        title: Single name DNS resolution in Docker on Linux host
        body: >-
          Single label names now resolve correctly when using Telepresence in Docker on a Linux host
      - type: bugfix
        title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`.
        body: >-
          The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStategy)
  - version: 2.7.6
    date: "2022-09-16"
    notes:
      - type: feature
        title: Helm chart resource entries for injected agents
        body: >-
          The resources for the traffic-agent container and the optional init container can be
          specified in the Helm chart using the resources and initResource fields
          of the agentInjector.agentImage
      - type: feature
        title: Cluster event propagation when injection fails
        body: >-
          When the traffic-manager fails to inject a traffic-agent, the cause for the failure is
          detected by reading the cluster events, and propagated to the user.
      - type: feature
        title: FTP-client instead of sshfs for remote mounts
        body: >-
          Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running
          an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x
          and enabled by setting intercept.useFtp to true in the config.yml.
      - type: change
        title: Upgrade of winfsp
        body: >-
          Telepresence on Windows upgraded winfsp from version 1.10 to 1.11
      - type: bugfix
        title: Removal of invalid warning messages
        body: >-
          Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo
          and /proc/self/auxv.
  - version: 2.7.5
    date: "2022-09-14"
    notes:
      - type: change
        title: Rollback of release 2.7.4
        body: >-
          This release is a rollback of the changes in 2.7.4, so essentially the same as 2.7.3
  - version: 2.7.4
    date: "2022-09-14"
    notes:
      - type: change
        body: >-
          This release was broken on some platforms. Use 2.7.6 instead.
  - version: 2.7.3
    date: "2022-09-07"
    notes:
      - type: bugfix
        title: PTY for CLI commands
        body: >-
          CLI commands that are executed by the user daemon now use a pseudo TTY. This enables
          docker run -it to allocate a TTY and will also give other commands, like bash read, the
          same behavior as when executed directly in a terminal.
        docs: https://github.com/telepresenceio/telepresence/issues/2724
      - type: bugfix
        title: Traffic Manager useless warning silenced
        body: >-
          The traffic-manager will no longer log numerous warnings saying Issuing a
          systema request without ApiKey or InstallID may result in an error.
      - type: bugfix
        title: Traffic Manager useless error silenced
        body: >-
          The traffic-manager will no longer log an error saying Unable to derive subnets
          from nodes when the podCIDRStrategy is auto and it chooses to instead derive the
          subnets from the pod IPs.
  - version: 2.7.2
    date: "2022-08-25"
    notes:
      - type: feature
        title: Autocompletion scripts
        body: >-
          Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell.
      - type: feature
        title: Connectivity check timeout
        body: >-
          The timeout for the initial connectivity check that Telepresence performs
          in order to determine if the cluster's subnets are proxied or not can now be configured
          in the config.yml file using timeouts.connectivityCheck. The default timeout was
          changed from 5 seconds to 500 milliseconds to speed up the actual connect.
        docs: reference/config#timeouts
      - type: change
        title: gather-traces feedback
        body: >-
          The command telepresence gather-traces now prints out a message on success.
        docs: troubleshooting#distributed-tracing
      - type: change
        title: upload-traces feedback
        body: >-
          The command telepresence upload-traces now prints out a message on success.
        docs: troubleshooting#distributed-tracing
      - type: change
        title: gather-traces tracing
        body: >-
          The command telepresence gather-traces now traces itself and reports errors with trace gathering.
        docs: troubleshooting#distributed-tracing
      - type: change
        title: CLI log level
        body: >-
          The cli.log log is now logged at the same level as the connector.log
        docs: reference/config#log-levels
      - type: bugfix
        title: Telepresence --help fixed
        body: >-
          telepresence --help now works once more even if there's no user daemon running.
        docs: https://github.com/telepresenceio/telepresence/issues/2735
      - type: bugfix
        title: Stream cancellation when no process intercepts
        body: >-
          Streams created between the traffic-agent and the workstation are now properly closed
          when no interceptor process has been started on the workstation. This fixes a potential problem where
          a large number of attempts to connect to a non-existing interceptor would cause stream congestion
          and an unresponsive intercept.
      - type: bugfix
        title: List command excludes the traffic-manager
        body: >-
          The telepresence list command no longer includes the traffic-manager deployment.
  - version: 2.7.1
    date: "2022-08-10"
    notes:
      - type: change
        title: Reinstate telepresence uninstall
        body: >-
          Reinstate telepresence uninstall with --everything deprecated
      - type: change
        title: Reduce telepresence helm uninstall
        body: >-
          telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags.
      - type: bugfix
        title: Auto-connect for telepresence intercept
        body: >-
          telepresence intercept will attempt to connect to the traffic manager before creating an intercept.
  - version: 2.7.0
    date: "2022-08-07"
    notes:
      - type: feature
        title: Saved Intercepts
        body: >-
          Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME
        docs: reference/intercepts#sharing-intercepts-with-teammates
      - type: feature
        title: Distributed Tracing
        body: >-
          The Telepresence components now collect OpenTelemetry traces.
          Up to 10MB of trace data are available at any given time for collection from
          components. telepresence gather-traces is a new command that will collect
          all that data and place it into a gzip file, and telepresence upload-traces is
          a new command that will push the gzipped data into an OTLP collector.
        docs: troubleshooting#distributed-tracing
      - type: feature
        title: Helm install
        body: >-
          A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager.
        docs: install/manager
      - type: feature
        title: Ignore Volume Mounts
        body: >-
          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string.
      - type: feature
        title: telepresence pod-daemon
        body: >-
          The Docker image now contains a new program in addition to
          the existing traffic-manager and traffic-agent: the pod-daemon. The
          pod-daemon is a trimmed-down version of the user-daemon that is
          designed to run as a sidecar in a Pod, enabling CI systems to create
          preview deploys.
      - type: feature
        title: Prometheus support for traffic manager
        body: >-
          Added Prometheus support to the traffic manager.
      - type: change
        title: No install on telepresence connect
        body: >-
          The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error.
        docs: install/manager
      - type: change
        title: Helm Uninstall
        body: >-
          The command telepresence uninstall has been moved to telepresence helm uninstall.
        docs: install/manager
      - type: bugfix
        title: readOnlyRootFileSystem mounts work
        body: >-
          Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`
        docs: https://github.com/telepresenceio/telepresence/pull/2666
  - version: 2.6.8
    date: "2022-06-23"
    notes:
      - type: feature
        title: Specify Your DNS
        body: >-
          The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified.
      - type: feature
        title: Specify a Fallback DNS
        body: >-
          Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use.
      - type: feature
        title: Intercept UDP Ports
        body: >-
          It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost.
      - type: change
        title: Additional Helm Values
        body: >-
          The Helm chart will now add the nodeSelector, affinity and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs.
      - type: bugfix
        title: Agent Injection Bugfix
        body: >-
          Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`.
  - version: 2.6.7
    date: "2022-06-22"
    notes:
      - type: bugfix
        title: Persistent Sessions
        body: >-
          The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect.
      - type: bugfix
        title: DNS Requests
        body: >-
          Telepresence will no longer forward DNS requests for "wpad" to the cluster.
      - type: bugfix
        title: Graceful Shutdown
        body: >-
          The traffic-agent will properly shut down if one of its goroutines errors.
  - version: 2.6.6
    date: "2022-06-9"
    notes:
      - type: bugfix
        title: Env Var `TELEPRESENCE_API_PORT`
        body: >-
          The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly.
      - type: bugfix
        title: Double Printing `--output json`
        body: >-
          The --output json global flag no longer outputs multiple objects
  - version: 2.6.5
    date: "2022-06-03"
    notes:
      - type: feature
        title: Helm Option -- `reinvocationPolicy`
        body: >-
          The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart.
        docs: install/helm
      - type: feature
        title: Helm Option -- Proxy Certificate
        body: >-
          The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart.
        docs: install/helm
      - type: feature
        title: Helm Option -- Agent Injection
        body: >-
          A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart.
        docs: install/helm
      - type: change
        title: Windows Tunnel Version Upgrade
        body: >-
          Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1
      - type: change
        title: Helm Version Upgrade
        body: >-
          Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9
      - type: change
        title: Kubernetes API Version Upgrade
        body: >-
          Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1
      - type: feature
        title: Flag `--watch` Added to `list` Command
        body: >-
          Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace.
      - type: change
        title: Deprecated `images.webhookAgentImage`
        body: >-
          The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead.
      - type: bugfix
        title: Default `reinvocationPolicy` Set to Never
        body: >-
          The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container.
      - type: bugfix
        title: UDP
        body: >-
          UDP based communication with services in the cluster now works as expected.
      - type: bugfix
        title: Telepresence `--help`
        body: >-
          The command help will only show Kubernetes flags on the commands that support them
      - type: change
        title: Error Count
        body: >-
          Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
  - version: 2.6.4
    date: "2022-05-23"
    notes:
      - type: bugfix
        title: Upgrade RBAC Permissions
        body: >-
          The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
  - version: 2.6.3
    date: "2022-05-20"
    notes:
      - type: bugfix
        title: Relative Mount Paths
        body: >-
          The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
      - type: bugfix
        title: Traffic Agent Config
        body: >-
          The traffic-agent's configuration is updated automatically when services are added, updated or deleted.
      - type: bugfix
        title: Container Injection for Numeric Ports
        body: >-
          Telepresence will now always inject an initContainer when the service's targetPort is numeric
      - type: bugfix
        title: Matching Services
        body: >-
          Workloads that have several matching services pointing to the same target port are now handled correctly.
      - type: bugfix
        title: Unexpected Panic
        body: >-
          A potential race condition causing a panic when closing a DNS connection is now handled correctly.
      - type: bugfix
        title: Mount Volume Cleanup
        body: >-
          A container start would sometimes fail because an old directory remained in a mounted temp volume.
  - version: 2.6.2
    date: "2022-05-17"
    notes:
      - type: bugfix
        title: Argo Injection
        body: >-
          Workloads controlled by higher-level workloads like Argo Rollout are injected correctly.
      - type: bugfix
        title: Agent Port Mapping
        body: >-
          Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
      - type: bugfix
        title: GRPC Max Message Size
        body: >-
          The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
  - version: 2.6.1
    date: "2022-05-16"
    notes:
      - type: bugfix
        title: KUBECONFIG environment variable
        body: >-
          Telepresence will now handle multiple path entries in the KUBECONFIG environment correctly.
      - type: bugfix
        title: Don't Panic
        body: >-
          Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0
  - version: 2.6.0
    date: "2022-05-13"
    notes:
      - type: feature
        title: Intercept multiple containers in a pod, and multiple ports per container
        body: >-
          Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
        docs: new-in-2.6#intercept-multiple-containers-and-ports
      - type: feature
        title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
        body: >-
          The client will no longer modify deployments, replicasets, or statefulsets in order to
          inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
          the client now needs fewer permissions in the cluster.
        docs: install/upgrade#important-note-about-upgrading-to-2.6.0
      - type: change
        title: Automatic upgrade of Traffic Agents
        body: >-
          When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
          The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
        docs: new-in-2.6#no-more-workload-modifications
      - type: change
        title: No default image in the Helm chart
        body: >-
          The Helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
          traffic-manager will ask Ambassador Cloud for the preferred image.
        docs: new-in-2.6#smarter-agent
      - type: change
        title: Upgrade to Helm version 3.8.1
        body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
      - type: bugfix
        title: Remote mounts will now function correctly with custom securityContext
        body: >-
          The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
      - type: bugfix
        title: Improved presentation of flags in CLI help
        body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
      - type: bugfix
        title: Better termination of process parented by intercept
        body: >-
          Occasionally an intercept will spawn a command using -- on the command line, often in another console.
          When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
          Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
  - version: 2.5.8
    date: "2022-04-27"
    notes:
      - type: bugfix
        title: Folder creation on `telepresence login`
        body: >-
          Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
  - version: 2.5.7
    date: "2022-04-25"
    notes:
      - type: change
        title: RBAC requirements
        body: >-
          A namespaced traffic-manager will no longer require cluster wide RBAC. Only Roles and RoleBindings are now used.
      - type: bugfix
        title: Windows DNS
        body: >-
          The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
      - type: bugfix
        title: Session TTL and Reconnect
        body: >-
          A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
  - version: 2.5.6
    date: "2022-04-18"
    notes:
      - type: change
        title: Fewer Watchers
        body: >-
          The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect.
      - type: bugfix
        title: More Efficient `gather-logs`
        body: >-
          The gather-logs command will no longer send any logs through gRPC.
  - version: 2.5.5
    date: "2022-04-08"
    notes:
      - type: change
        title: Traffic Manager Permissions
        body: >-
          The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions
      - type: bugfix
        title: Linux DNS Cache
        body: >-
          The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
      - type: bugfix
        title: Automatic Connect Sync
        body: >-
          The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
      - type: bugfix
        title: Disconnect Reconnect Stability
        body: >-
          The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
      - type: bugfix
        title: Limit Watched Namespaces
        body: >-
          The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
      - type: bugfix
        title: Limit Namespaces used in `gather-logs`
        body: >-
          The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
  - version: 2.5.4
    date: "2022-03-29"
    notes:
      - type: bugfix
        title: Linux DNS Concurrency
        body: >-
          The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out
      - type: bugfix
        title: Non-Functional Flag
        body: >-
          The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag
      - type: bugfix
        title: Automatically Remove Failed Intercepts
        body: >-
          Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix + title: Agent UID + body: >- + The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext. + - type: bugfix + title: Gather-Logs Output Filepath + body: >- + Removed a bad concatenation that corrupted the output path of telepresence gather-logs. + - type: change + title: Remove Unnecessary Error Advice + body: >- + Advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command. + - type: bugfix + title: Garbage Collection + body: >- + Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart. + - type: bugfix + title: Limit Gathered Logs + body: >- + The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize + - type: change + title: In-Cluster Checks + body: >- + The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster + - type: change + title: Expanded Status Command + body: >- + The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON + - type: change + title: List Command Shows All Intercepts + body: >- + The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces + - version: 2.5.3 + date: "2022-02-25" + notes: + - type: bugfix + title: TCP Connectivity + body: >- + Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address + - type: feature + title: Linux Binaries + body: >- + Client-side binaries for the arm64 architecture are now available for Linux + - version: 2.5.2 + date: "2022-02-23" + notes: + - type: bugfix + title: DNS server bugfix + body: >- + Fixed a bug where Telepresence would use the last server in resolv.conf + - version: 2.5.1 + date: "2022-02-19" + notes: + - type: bugfix + title: Fix GKE auth issue + body: >- + Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp" + - version: 2.5.0 + date: "2022-02-18" + notes: + - type: feature + title: Intercept specific endpoints + body: >- + The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in + addition to the --http-match flag to filter personal intercepts by the request URL path + docs: concepts/intercepts#intercepting-a-specific-endpoint + - type: feature + title: Intercept metadata + body: >- + The flag --http-meta can be used to declare metadata key-value pairs that will be returned by the Telepresence REST + API endpoint /intercept-info + docs: reference/restapi#intercept-info + - type: change + title: Client RBAC watch + body: >- + The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC + ClusterRole + docs: reference/rbac + - type: change + title: Dropped backward compatibility with versions <=2.4.4 + body: >- + Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel + functionality was removed. + - type: change + title: No global networking flags + body: >- + The global networking flags are no longer used, and using them will produce a deprecation warning unless they are supported by the + command. The subcommands that support networking flags are connect, current-cluster-id, + and genyaml.
+ - type: bugfix + title: Output of status command + body: >- + The also-proxy and never-proxy subnets are now displayed correctly when using the + telepresence status command. + - type: bugfix + title: SETENV sudo privilege no longer needed + body: >- + Telepresence no longer requires SETENV privileges when starting the root daemon. + - type: bugfix + title: Network device names containing dash + body: >- + Telepresence will now parse device names containing dashes correctly when determining routes that it should never block. + - type: bugfix + title: Linux uses cluster.local as domain instead of search + body: >- + The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using + systemd-resolved. Instead, it is added as a domain so that names ending with it are routed + to the DNS server. + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version + with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using + the intercept.appProtocolStrategy in the config.yml file. + docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts in agents injected by the + mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when + communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting + the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for its unit and integration testing. It is + now easier for contributors to run tests on PRs since maintainers can add an + "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + Users will not be asked to log in or add ingress information when creating an intercept until a check has been + made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes + is now properly logged as errors.
+ - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto-installer will no longer emit backslash separators for the /tel-app-mounts paths in the + traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil + telepresenceAPI value. + docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. + See the VPN docs for more information on how to use it. + docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to + help determine whether messages with a given set of headers should be consumed from a message queue where the + intercept headers are added to the messages. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value + has been removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections. + body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled + connections now behave more like TCP connections, especially when it comes to timeouts. + We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. + This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags. + docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, + analogous to also-proxy, that defines a set of subnets that + will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as --swap-deployment can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes.
+ - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the + cluster runs in a Docker container on the client). + docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login + will prompt the user to log in again to get a new token. Previously, + the user had to telepresence quit and telepresence logout + to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. + Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier + for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the + subnet generated from pods/services in a Kubernetes cluster, it would proxy traffic + to the API server, which would result in hanging or a failed connection. We now ensure + that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the + pod yaml manifest for all Kubernetes components you are getting logs for + ( traffic-manager and/or pods containing a + traffic-agent container). This flag is set to false + by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will + anonymize your pod names + namespaces in the output file. We replace the + sensitive names with simple names (e.g. pod-1, namespace-2) to maintain + relationships between the objects without exposing the real names of your + objects. This flag is set to false by default.
+ docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this + terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. + docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a + headless service on whatever port it exposes and get a response from the + intercept. This leverages the same approach as intercepting numeric ports when + using the mutating webhook injector, and mainly requires the initContainer + to have NET_ADMIN capabilities. + docs: reference/intercepts/#intercepting-headless-services + + - type: change + title: Use one tunnel per connection instead of multiplexing into one tunnel + body: >- + We have changed Telepresence so that it uses one tunnel per connection instead + of multiplexing all connections into one tunnel. This will provide substantial + performance improvements. Clients will still be backwards compatible with older + managers that only support multiplexing. + + - type: bugfix + title: Added checks for Telepresence Kubernetes compatibility + body: >- + Telepresence currently works with Kubernetes server versions 1.17.0 + and higher. We have added logs in the connector and traffic-manager + to let users know when they are using Telepresence with a cluster it doesn't support. + docs: reference/cluster-config + + - type: bugfix + title: Traffic Agent security context is now only added when necessary + body: >- + When creating an intercept, Telepresence will now only set the traffic agent's GID + when strictly necessary (i.e. when using headless services or numeric ports). This mitigates + an issue on OpenShift clusters where the traffic agent can fail to be created due to + OpenShift's security policies banning arbitrary GIDs. + + - version: 2.4.4 + date: "2021-09-27" + notes: + - type: feature + title: Numeric ports in agent injector + body: >- + The agent injector now supports injecting Traffic Agents into pods that have unnamed ports. + docs: reference/cluster-config/#note-on-numeric-ports + + - type: feature + title: New subcommand to gather logs and export into zip file + body: >- + Telepresence has logs for various components (the + traffic-manager, traffic-agents, the root and + user daemons), which are integral for understanding and debugging + Telepresence behavior. We have added the telepresence + gather-logs command to make it simple to compile logs for + all Telepresence components and export them in a zip file that can + be shared with others and/or included in a GitHub issue. For more + information on usage, run telepresence gather-logs --help. + docs: reference/client + image: telepresence-2.4.4-gather-logs.png + + - type: feature + title: Pod CIDR strategy is configurable in Helm chart + body: >- + Telepresence now enables you to directly configure how to get + pod CIDRs when deploying Telepresence with the Helm chart. + The default behavior remains the same. We've also introduced + the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm + + - type: bugfix + title: Compute pod CIDRs more efficiently + body: >- + When computing subnets using the pod CIDRs, the traffic-manager + now uses fewer CPU cycles. + docs: reference/routing/#subnets + + - type: bugfix + title: Prevent busy loop in traffic-manager + body: >- + In some circumstances, the traffic-manager's CPU + would max out and get pinned at its limit. This required a + shutdown or pod restart to fix. We've added some fixes + to prevent the traffic-manager from getting into this state. + + - type: bugfix + title: Added a fixed buffer size to TUN-device + body: >- + The TUN-device now has a max buffer size of 64K. This prevents the + buffer from growing limitlessly until it receives a PSH, which could + be a blocking operation when receiving lots of TCP-packets. + docs: reference/tun-device + + - type: bugfix + title: Fix hanging user daemon + body: >- + When Telepresence encountered an issue connecting to the cluster or + the root daemon, it could hang indefinitely. It now will error correctly + when it encounters that situation. + + - type: bugfix + title: Improved proprietary agent connectivity + body: >- + To determine whether the environment cluster is air-gapped, the + proprietary agent attempts to connect to the cloud during startup. + To deal with a possible initial failure, the agent backs off + and retries the connection with an increasing backoff duration. + + - type: bugfix + title: Telepresence correctly reports intercept port conflict + body: >- + When creating a second intercept targeting the same local port, + it now gives the user an informative error message. Additionally, + it tells them which intercept is currently using that port to make + it easier to remedy. + + - version: 2.4.3 + date: "2021-09-15" + notes: + - type: feature + title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment + body: >- + When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment + variable in the environment. + docs: reference/environment/#telepresence-environment-variables + + - type: bugfix + title: Improved daemon stability + body: >- + Fixed a timing bug that sometimes caused a "daemon did not start" failure. + + - type: bugfix + title: Complete logs for Windows + body: >- + Crash stack traces and other errors were incorrectly not written to log files. This has + been fixed so logs for Windows should be at parity with the ones on macOS and Linux. + + - type: bugfix + title: Log rotation fix for Linux kernel 4.11+ + body: >- + On Linux kernel 4.11 and above, the log file rotation now properly reads the + birth-time of the log file. Older kernels continue to use the old behavior + of using the change-time in place of the birth-time. + + - type: bugfix + title: Improved error messaging + body: >- + When Telepresence encounters an error, it tells the user where they should look for + logs related to the error. We have refined this so that it only tells users to look + for errors in the daemon logs for issues that are logged there. + + - type: bugfix + title: Stop resolving localhost + body: >- + When using the overriding DNS resolver, it will no longer apply search paths when + resolving localhost, since that should be resolved on the user's machine + instead of the cluster. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Variable cluster domain + body: >- + Previously, the cluster domain was hardcoded to cluster.local.
While this + is true for many Kubernetes clusters, it is not for all of them. Now this value is + retrieved from the traffic-manager. + + - type: bugfix + title: Improved cleanup of traffic-agents + body: >- + Telepresence now uninstalls traffic-agents installed via mutating webhook + when using telepresence uninstall --everything. + + - type: bugfix + title: More large file transfer fixes + body: >- + Downloading large files during an intercept will no longer cause timeouts and hanging + traffic-agents. + + - type: bugfix + title: Setting --mount to false when intercepting works as expected + body: >- + When using --mount=false while performing an intercept, the file system + was still mounted. This has been remedied so the intercept behavior respects the + flag. + docs: reference/volume + + - type: bugfix + title: Traffic-manager establishes outbound connections in parallel + body: >- + Previously, the traffic-manager established outbound connections + sequentially. This meant that slow (and failing) Dial calls would + block all outbound traffic from the workstation (for up to 30 seconds). We now + establish these connections in parallel so that won't occur. + docs: reference/routing/#outbound + + - type: bugfix + title: Status command reports correct DNS settings + body: >- + Telepresence status now correctly reports DNS settings for all operating + systems, instead of Local IP:nil, Remote IP:nil when they don't exist. + + - version: 2.4.2 + date: "2021-09-01" + notes: + - type: feature + title: New subcommand to temporarily change log-level + body: >- + We have added a new telepresence loglevel subcommand that enables users + to temporarily change the log-level for the local daemons, the traffic-manager, and + the traffic-agents. While the logLevels settings from the config will + still be used by default, this can be helpful if you are currently experiencing an issue and + want to have higher fidelity logs, without doing a telepresence quit and + telepresence connect. You can use telepresence loglevel --help to get + more information on options for the command. + docs: reference/config + + - type: change + title: All components have info as the default log-level + body: >- + We've now set the default for all components of Telepresence (traffic-agent, + traffic-manager, local daemons) to use info as the default log-level. + + - type: bugfix + title: Updating RBAC in helm chart to fix cluster-id regression + body: >- + In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID + of the default namespace. The helm chart was not updated to give the traffic-manager + those permissions, which has since been fixed. This impacted users who use licensed features of + the Telepresence extension in an air-gapped environment. + docs: reference/cluster-config/#air-gapped-cluster + + - type: bugfix + title: Timeouts for Helm actions are now respected + body: >- + The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang + indefinitely when failing to install the traffic-manager. + docs: reference/config#timeouts + + - version: 2.4.1 + date: "2021-08-30" + notes: + - type: feature + title: External cloud variables are now configurable + body: >- + We now support configuring the host and port for the cloud in your config.yml. These + are used when logging in to utilize features provided by an extension, and are also passed + along as environment variables when installing the traffic-manager.
Additionally, we + now run our testsuite with these variables set to localhost to continue to ensure Telepresence + is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT + environment variables are no longer used. + image: telepresence-2.4.1-systema-vars.png + docs: reference/config/#cloud + + - type: feature + title: Helm chart can now regenerate certificate used for mutating webhook on-demand. + body: >- + You can now set agentInjector.certificate.regenerate when deploying Telepresence + with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. + docs: install/helm + + - type: change + title: Traffic Manager installed via helm + body: >- + The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. + This change is transparent to the user. + A new configuration flag, timeouts.helm sets the timeouts for all helm operations performed by the Telepresence binary. + docs: reference/config#timeouts + + - type: change + title: traffic-manager gets cluster ID itself instead of via environment variable + body: >- + The traffic-manager used to get the cluster ID as an environment variable when running + telepresence connect or via adding the value in the helm chart. This was + clunky, so now the traffic-manager gets the value itself as long as it has permissions + to "get" and "list" namespaces (this has been updated in the helm chart). + docs: install/helm + + - type: bugfix + title: Telepresence now mounts all directories from /var/run/secrets + body: >- + In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. + We now mount *all* directories in /var/run/secrets, which, for example, includes + directories like eks.amazonaws.com used for IRSA tokens. + docs: reference/volume + + - type: bugfix + title: Max gRPC receive size correctly propagates to all gRPC servers + body: >- + This fixes a bug where the max gRPC receive size was only propagated to some of the + gRPC servers, causing failures when the message size was over the default. + docs: reference/config/#grpc + + - type: bugfix + title: Updated our Homebrew packaging to run manually + body: >- + We made some updates to our script that packages Telepresence for Homebrew so that it + can be run manually. This will enable maintainers of Telepresence to run the script manually + should we ever need to roll back a release and have latest point to an older version. + docs: install/ + + - type: bugfix + title: Telepresence uses namespace from kubeconfig context on each call + body: >- + In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context + for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior + when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change. + Telepresence will now do that and use the namespace designated by the context on each call. + + - type: bugfix + title: Idle outbound TCP connections timeout increased to 7200 seconds + body: >- + Some users were noticing that their intercepts would start failing after 60 seconds. + This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have + now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+ + - type: bugfix + title: Telepresence will automatically remove a socket upon ungraceful termination + body: >- + When a Telepresence process terminated ungracefully, it would inform users that "this usually means + that the process has terminated ungracefully" and imply that they should remove the socket. We've + now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination. + + - type: bugfix + title: Fixed user daemon deadlock + body: >- + Remedied a situation where the user daemon could hang when a user was logged in. + + - type: bugfix + title: Fixed agentImage config setting + body: >- + The config setting images.agentImages is no longer required to contain the repository, and it + will use the value at images.repository. + docs: reference/config/#images + + - version: 2.4.0 + date: "2021-08-04" + notes: + - type: feature + title: Windows Client Developer Preview + body: >- + There is now a native Windows client for Telepresence that is being released as a Developer Preview. + All the same features supported by the macOS and Linux clients are available on Windows. + image: telepresence-2.4.0-windows.png + docs: install + + - type: feature + title: CLI raises helpful messages from Ambassador Cloud + body: >- + Telepresence can now receive messages from Ambassador Cloud and raise + them to the user when they perform certain commands. This enables us + to send you messages that may enhance your Telepresence experience when + using certain commands. The frequency of messages can be configured in your + config.yml. + image: telepresence-2.4.0-cloud-messages.png + docs: reference/config#cloud + + - type: bugfix + title: Improved stability of systemd-resolved-based DNS + body: >- + When initializing the systemd-resolved-based DNS, the routing domain + is set to improve stability in non-standard configurations. This also enables the + overriding resolver to do a proper takeover once the DNS service ends. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Fixed an edge case when intercepting a container with multiple ports + body: >- + When specifying a port of a container to intercept, if there was a container in the + pod without ports, it was automatically selected. This has been fixed so we'll only + choose the container with "no ports" if there's no container that explicitly matches + the port used in your intercept. + docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports + + - type: bugfix + title: $(NAME) references in agent's environments are now interpolated correctly. + body: >- + If you had an environment variable $(NAME) in your workload that referenced another, intercepts + would not correctly interpolate $(NAME). This has been fixed and works automatically. + + - type: bugfix + title: Telepresence no longer prints INFO message when there is no config.yml + body: >- + Fixed a regression that printed an INFO message to the terminal when there wasn't a + config.yml present. The config is optional, so this message has been + removed. + docs: reference/config + + - type: bugfix + title: Telepresence no longer panics when using --http-match + body: >- + Fixed a bug where Telepresence would panic if the value passed to --http-match + didn't contain an equal sign.
+ The correct syntax is shown in the --help + string and looks like --http-match=HTTP2_HEADER=REGEX. + docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud + + - type: bugfix + title: Improved subnet updates + body: >- + The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if + the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the + client and the traffic-manager. This has been fixed so we only send updates when the subnets + themselves actually change. + docs: reference/routing/#subnets + + - version: 2.3.7 + date: "2021-07-23" + notes: + - type: feature + title: Also-proxy in telepresence status + body: >- + An also-proxy entry in the Kubernetes cluster config will + show up in the output of the telepresence status command. + docs: reference/config + + - type: feature + title: Non-interactive telepresence login + body: >- + telepresence login now has an + --apikey=KEY flag that allows for + non-interactive logins. This is useful for headless + environments where launching a web-browser is impossible, + such as cloud shells, Docker containers, or CI. + image: telepresence-2.3.7-newkey.png + docs: reference/client/login/ + + - type: bugfix + title: Mutating webhook injector correctly renames named ports for probes. + body: >- + The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes + docs: reference/cluster-config + + - type: bugfix + title: telepresence current-cluster-id crash fixed + body: >- + Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id + to crash. + docs: reference/cluster-config + + - type: bugfix + title: Better UX around intercepts with no local process running + body: >- + Requests would hang indefinitely when initiating an intercept before you + had a local process running. This has been fixed and will result in an + Empty reply from server until you start a local process. + docs: reference/intercepts + + - type: bugfix + title: API keys no longer show as "no description" + body: >- + New API keys generated internally for communication with + Ambassador Cloud no longer show up as "no description" in + the Ambassador Cloud web UI. Existing API keys generated by + older versions of Telepresence will still show up this way. + image: telepresence-2.3.7-keydesc.png + + - type: bugfix + title: Fix corruption of user-info.json + body: >- + Fixed a race condition where rapidly logging in and logging out + could cause memory corruption or corruption of the + user-info.json cache file used when + authenticating with Ambassador Cloud. + + - type: bugfix + title: Improved DNS resolver for systemd-resolved + body: + Telepresence's systemd-resolved-based DNS resolver is now more + stable and, in case it fails to initialize, the overriding resolver + will no longer cause general DNS lookup failures when Telepresence falls back to + using it. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Faster telepresence list command + body: + The performance of telepresence list has been improved + significantly by reducing the number of calls the command makes to the cluster. + docs: reference/client + + - version: 2.3.6 + date: "2021-07-20" + notes: + - type: bugfix + title: Fix preview URLs + body: >- + Fixed a regression introduced in 2.3.5 that caused preview + URLs to not work.
+ + - type: bugfix + title: Fix subnet discovery + body: >- + Fixed a regression introduced in 2.3.5 where the Traffic + Manager's RoleBinding did not correctly reference + the traffic-manager Role, causing + subnet discovery to fail. + docs: reference/rbac/ + + - type: bugfix + title: Fix root-user configuration loading + body: >- + Fixed a regression introduced in 2.3.5 where the root daemon + did not correctly read the configuration file, ignoring the + user's configured log levels and timeouts. + docs: reference/config/ + + - type: bugfix + title: Fix a user daemon crash + body: >- + Fixed an issue that could cause the user daemon to crash + during shutdown, as it unconditionally + attempted to close a channel during shutdown even though the channel might + already be closed. + + - version: 2.3.5 + date: "2021-07-15" + notes: + - type: feature + title: traffic-manager in multiple namespaces + body: >- + We now support installing multiple traffic managers in the same cluster. + This will allow operators to install deployments of Telepresence that are + limited to certain namespaces. + image: ./telepresence-2.3.5-traffic-manager-namespaces.png + docs: install/helm + - type: feature + title: No more dependence on kubectl + body: >- + Telepresence no longer depends on having an external + kubectl binary, which might not be present for + OpenShift users (who have oc instead of + kubectl). + - type: feature + title: Agent image now configurable + body: >- + We now support configuring which agent image + registry to use in the + config. This enables users whose laptop is an air-gapped environment to + create personal intercepts without requiring a login. It also makes it easier + for those who are developing on Telepresence to specify which agent image should + be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer + used. + image: ./telepresence-2.3.5-agent-config.png + docs: reference/config/#images + - type: feature + title: Max gRPC receive size now configurable + body: >- + The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured. + image: ./telepresence-2.3.5-grpc-max-receive-size.png + docs: reference/config/#grpc + - type: feature + title: CLI can be used in air-gapped environments + body: >- + While Telepresence will auto-detect if your cluster is in an air-gapped environment, + we've added an option users can add to their config.yml to ensure the CLI acts like it + is in an air-gapped environment. Air-gapped environments require a manually installed + license. + docs: reference/cluster-config/#air-gapped-cluster + image: ./telepresence-2.3.5-skipLogin.png + - version: 2.3.4 + date: "2021-07-09" + notes: + - type: bugfix + title: Improved IP log statements + body: >- + Some log statements were printing incorrect characters when they should have been IP addresses. + This has been resolved to include more accurate and useful logging. + docs: reference/config/#log-levels + image: ./telepresence-2.3.4-ip-error.png + - type: bugfix + title: Improved messaging when multiple services match a workload + body: >- + If multiple services matched a workload when performing an intercept, Telepresence would crash. + It now gives the correct error message, instructing the user on how to specify which + service the intercept should use.
+ image: ./telepresence-2.3.4-improved-error.png + docs: reference/intercepts + - type: bugfix + title: Traffic-manager creates services in its own namespace to determine subnet + body: >- + Telepresence will now determine the service subnet by creating a dummy-service in its own + namespace, instead of the default namespace, which was causing RBAC permissions issues in + some clusters. + docs: reference/routing/#subnets + - type: bugfix + title: Telepresence connect respects pre-existing clusterrole + body: >- + When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the + cluster, Telepresence will no longer try to update the clusterrole. + docs: reference/rbac + - type: bugfix + title: Helm Chart fixed for clientRbac.namespaced + body: >- + The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true. + docs: install/helm + - version: 2.3.3 + date: "2021-07-07" + notes: + - type: feature + title: Traffic Manager Helm Chart + body: >- + Telepresence now supports installing the Traffic Manager via Helm. + This will make it easy for operators to install and configure the + server-side components of Telepresence separately from the CLI (which + in turn allows for better separation of permissions). + image: ./telepresence-2.3.3-helm.png + docs: install/helm/ + - type: feature + title: Traffic-manager in custom namespace + body: >- + As the traffic-manager can now be installed in any + namespace via Helm, Telepresence can now be configured to look for the + Traffic Manager in a namespace other than ambassador. + This can be configured on a per-cluster basis. + image: ./telepresence-2.3.3-namespace-config.png + docs: reference/config + - type: feature + title: Intercept --to-pod + body: >- + telepresence intercept now supports a + --to-pod flag that can be used to port-forward sidecars' + ports from an intercepted pod. + image: ./telepresence-2.3.3-to-pod.png + docs: reference/intercepts + - type: change + title: Change in migration from edgectl + body: >- + Telepresence no longer automatically shuts down the old + api_version=1 edgectl daemon. If migrating + from such an old version of edgectl, you must now manually + shut down the edgectl daemon before running Telepresence. + This was already the case when migrating from the newer + api_version=2 edgectl. + - type: bugfix + title: Fixed error during shutdown + body: >- + The root daemon no longer terminates when the user daemon disconnects + from its gRPC streams, and instead waits to be terminated by the CLI. + Previously, this could cause problems with things not being cleaned up correctly. + - type: bugfix + title: Intercepts will survive deletion of intercepted pod + body: >- + An intercept will survive deletion of the intercepted pod provided + that another pod is created (or already exists) that can take over. + - version: 2.3.2 + date: "2021-06-18" + notes: + # Headliners + - type: feature + title: Service Port Annotation + body: >- + The mutator webhook for injecting traffic-agents now + recognizes a + telepresence.getambassador.io/inject-service-port + annotation to specify which port to intercept, bringing the + functionality of the --port flag to users who + use the mutator webhook in order to control Telepresence via + GitOps.
+ image: ./telepresence-2.3.2-svcport-annotation.png + docs: reference/cluster-config#service-port-annotation + - type: feature + title: Outbound Connections + body: >- + Outbound connections are now routed through the intercepted + Pods, which means that the connections originate from that + Pod from the cluster's perspective. This allows service + meshes to correctly identify the traffic. + docs: reference/routing/#outbound + - type: change + title: Inbound Connections + body: >- + Inbound connections from an intercepted agent are now + tunneled to the manager over the existing gRPC connection, + instead of establishing a new connection to the manager for + each inbound connection. This avoids interference from + certain service mesh configurations. + docs: reference/routing/#inbound + + # RBAC changes + - type: change + title: Traffic Manager needs new RBAC permissions + body: >- + The Traffic Manager requires RBAC + permissions to list Nodes and Pods, and to create a dummy + Service in the manager's namespace. + docs: reference/routing/#subnets + - type: change + title: Reduced developer RBAC requirements + body: >- + The on-laptop client no longer requires RBAC permissions to list the Nodes + in the cluster or to create Services, as that functionality + has been moved to the Traffic Manager. + + # Bugfixes + - type: bugfix + title: Able to detect subnets + body: >- + Telepresence will now detect the Pod CIDR ranges even if + they are not listed in the Nodes. + image: ./telepresence-2.3.2-subnets.png + docs: reference/routing/#subnets + - type: bugfix + title: Dynamic IP ranges + body: >- + The list of cluster subnets that the virtual network + interface will route is now configured dynamically and will + follow changes in the cluster. + - type: bugfix + title: No duplicate subnets + body: >- + Subnets fully covered by other subnets are now pruned + internally and thus never superfluously added to the + laptop's routing table. + docs: reference/routing/#subnets + - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes + title: Change in default timeout + body: >- + The trafficManagerAPI timeout default has + changed from 5 seconds to 15 seconds, in order to facilitate + the extended time it takes for the traffic-manager to do its + initial discovery of cluster info as a result of the above + bugfixes. + - type: bugfix + title: Removal of DNS config files on macOS + body: >- + On macOS, files generated under + /etc/resolver/ as the result of using + include-suffixes in the cluster config are now + properly removed on quit. + docs: reference/routing/#macos-resolver + + - type: bugfix + title: Large file transfers + body: >- + Telepresence no longer erroneously terminates connections + early when sending a large HTTP response from an intercepted + service. + - type: bugfix + title: Race condition in shutdown + body: >- + When shutting down the user-daemon or root-daemon on the + laptop, telepresence quit and related commands + no longer return early before everything is fully shut down. + Now it can be counted on that, by the time the command has + returned, all of the side effects on the laptop have + been cleaned up. + - version: 2.3.1 + date: "2021-06-14" + notes: + - title: DNS Resolver Configuration + body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local + remote resolver to use and which suffixes should be ignored + included.
These can be configured on a per-cluster basis." + image: ./telepresence-2.3.1-dns.png + docs: reference/config + type: feature + - title: AlsoProxy Configuration + body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet." + image: ./telepresence-2.3.1-alsoProxy.png + docs: reference/config + type: feature + - title: Mutating Webhook for Injecting Traffic Agents + body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past." + image: ./telepresence-2.3.1-inject.png + docs: reference/rbac + type: feature + - title: Traffic Manager Connect Timeout + body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook." + image: ./telepresence-2.3.1-trafficmanagerconnect.png + docs: reference/config + type: change + - title: Fix for large file transfers + body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely" + image: ./telepresence-2.3.1-large-file-transfer.png + docs: reference/tun-device + type: bugfix + - title: Brew Formula Changed + body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence." + image: ./telepresence-2.3.1-brew.png + docs: install/ + type: change + - version: 2.3.0 + date: "2021-06-01" + notes: + - title: Brew install Telepresence + body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2." + image: ./telepresence-2.3.0-homebrew.png + docs: install/ + type: feature + - title: TCP and UDP routing via Virtual Network Interface + body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP." + image: ./tunnel.jpg + docs: reference/tun-device + type: feature + - title: SSH is no longer used + body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp-protocol directly, which means that the traffic agent also runs without sshd.
A desired side effect of this is that the manager and agent containers no longer need a special user configuration." + image: ./no-ssh.png + docs: reference/tun-device/#no-ssh-required + type: change + - title: Running in a Docker container + body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously." + image: ./run-tp-in-docker.png + docs: reference/inside-container + type: feature + - title: Configurable Log Levels + body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log." + image: ./telepresence-2.3.0-loglevels.png + docs: reference/config/#log-levels + type: feature + - version: 2.2.2 + date: "2021-05-17" + notes: + - title: Legacy Telepresence subcommands + body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary. + image: ./telepresence-2.2.png + docs: install/migrate-from-legacy/ + type: feature diff --git a/docs/telepresence/2.8/troubleshooting/index.md b/docs/telepresence/2.8/troubleshooting/index.md new file mode 100644 index 000000000..918e107b0 --- /dev/null +++ b/docs/telepresence/2.8/troubleshooting/index.md @@ -0,0 +1,226 @@ +--- +title: "Telepresence Troubleshooting" +description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud." +--- +# Troubleshooting + +## Creating an intercept did not generate a preview URL + +Preview URLs can only be created if Telepresence is [logged in to +Ambassador Cloud](../reference/client/login/). When not logged in, it +will not even try to create a preview URL (additionally, by default it +will intercept all traffic rather than just a subset of the traffic). +Remove the intercept with `telepresence leave [deployment name]`, run +`telepresence login` to log in to Ambassador Cloud, then recreate the +intercept. See the [intercepts how-to doc](../howtos/intercepts) for +more details. + +## Error on accessing preview URL: `First record does not look like a TLS handshake` + +The service you are intercepting is likely not using TLS; however, when configuring the intercept, you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings. + +## Error on accessing preview URL: Detected a 301 Redirect Loop + +If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted. + +## Connecting to a cluster via VPN doesn't work. + +There are a few different issues that could arise when working with a VPN.
Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these. + +## Connecting to a cluster hosted in a VM on the workstation doesn't work + +The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence. +Please check the [cluster in hosted VM](../howtos/cluster-in-vm) page for more details. + +## Your GitHub organization isn't listed + +Ambassador Cloud needs access granted to your GitHub organization as a +third-party OAuth app. If an organization isn't listed during login, +then the correct access has not been granted. + +The quickest way to resolve this is to go to the **GitHub menu** → +**Settings** → **Applications** → **Authorized OAuth Apps** → +**Ambassador Labs**. An organization owner will have a **Grant** +button; anyone not an owner will have **Request**, which sends an email +to the owner. If an access request has been denied in the past, the +user will not see the **Request** button; they will have to reach out +to the owner. + +Once access is granted, log out of Ambassador Cloud and log back in; +you should see the GitHub organization listed. + +The organization owner can go to the **GitHub menu** → **Your +organizations** → **[org name]** → **Settings** → **Third-party +access** to see if Ambassador Labs has access already or authorize a +request for access (only owners will see **Settings** on the +organization page). Clicking the pencil icon will show the +permissions that were granted. + +GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). + +### Granting or requesting access on initial login + +When using GitHub as your identity provider, the first time you log in +to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to +access your organizations and certain user data. + +Authorize Ambassador Labs form + +Any listed organization with a green check has already granted access +to Ambassador Labs (you still need to authorize to allow Ambassador +Labs to read your user data and organization membership). + +Any organization with a red "X" requires access to be granted to +Ambassador Labs. Owners of the organization will see a **Grant** +button. Anyone who is not an owner will see a **Request** button. +This will send an email to the organization owner requesting approval +to access the organization. If an access request has been denied in +the past, the user will not see the **Request** button; they will have +to reach out to the owner. + +Once approval is granted, you will have to log out of Ambassador Cloud +then back in to select the organization. + +## Volume mounts are not working on macOS + +It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps: + +1. Remove old sshfs, macfuse, osxfuse using `brew uninstall` +2. `brew install --cask macfuse` +3. `brew install gromgit/fuse/sshfs-mac` +4.
`brew link --overwrite sshfs-mac` + +Now `sshfs -V` shows you the correct version, e.g.: +``` +$ sshfs -V +SSHFS version 2.10 +FUSE library version: 2.9.9 +fuse: no mount point +``` + +5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences). +6. Approve the needed permission +7. Reboot your computer. + +## Authorization for preview URLs +Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud. + +You can accomplish this by using a browser extension such as the `ModHeader extension` for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj) +or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/). + +It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work. + +## Distributed tracing + +Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment. +As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing. +In order to facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/). +Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object: + +```yaml +tracing: {} +``` + +If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which Telepresence clients will be able to gather trace data. +To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after: + +```console +$ telepresence gather-traces +``` + +This command will gather traces from both the cloud and local components of Telepresence and output them into a file called `traces.gz` in your current working directory: + +```console +$ file traces.gz + traces.gz: gzip compressed data, original size modulo 2^32 158255 +``` + +Please do not try to open or uncompress this file, as it contains binary trace data. +Instead, you can use the `upload-traces` command built into Telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion: + +```console +$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT +``` + +Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable): + +![Jaeger Interface](../images/tracing.png) + +**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP formatted spans (instead of e.g.
+**Note:** Since traces are not automatically shipped to the backend by Telepresence, they are stored in memory. Hence, to avoid running Telepresence components out of memory, only the last 10MB of trace data are available for export.
+
+## No sidecar injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice that the pod still shows 1/1 containers ready: no traffic-agent sidecar was injected.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [Telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on how to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., `containerPort: http` becomes `containerPort: tm-http`).
+2. Let the container port of our injected traffic-agent use the original name (i.e., `containerPort: http`).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
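+
+If you're unsure whether the init-container (or the sidecar) was injected into a given pod, one quick way to check is to query the pod spec directly. This is only a sketch; the pod name is a placeholder, and container names may vary between Telepresence versions:
+
+```console
+$ kubectl get pod echo-easy-d8dc4cc7c-27567 -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'
+```
+
+Empty output for the first expression means no init-container was injected into that pod.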
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with the message `too many files open` in the logs, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
diff --git a/docs/telepresence/2.8/versions.yml b/docs/telepresence/2.8/versions.yml
new file mode 100644
index 000000000..3ad6c1d5f
--- /dev/null
+++ b/docs/telepresence/2.8/versions.yml
@@ -0,0 +1,5 @@
+version: "2.8.5"
+dlVersion: "latest"
+docsVersion: "2.8"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/2.9 b/docs/telepresence/2.9
deleted file mode 120000
index 6c6b588df..000000000
--- a/docs/telepresence/2.9
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2.9
\ No newline at end of file
diff --git a/docs/telepresence/2.9/ci/github-actions.md b/docs/telepresence/2.9/ci/github-actions.md
new file mode 100644
index 000000000..810a2d239
--- /dev/null
+++ b/docs/telepresence/2.9/ci/github-actions.md
@@ -0,0 +1,176 @@
+---
+title: GitHub Actions for Telepresence
+description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline. "
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Telepresence with GitHub Actions
+
+Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project.
+
+You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself.
+
+## GitHub Actions for Telepresence
+
+Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following:
+
+ - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully.
+ - **install**: Installs Telepresence in your CI server, either the latest version or the one you specify.
+ - **login**: Logs in to Telepresence so you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key set as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one.
+ - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests.
+
+Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of the job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information.
+
+# Using Telepresence in your GitHub Actions CI pipeline
+
+## Prerequisites
+
+To enable GitHub Actions with Telepresence, you need:
+
+* A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow.
+* Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster.
+* If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example:
+
+  ```yaml
+  apiVersion: v1
+  clusters:
+  - cluster:
+      server: https://127.0.0.1
+      extensions:
+      - name: telepresence.io
+        extension:
+          manager:
+            namespace: traffic-manager-namespace
+    name: example-cluster
+  ```
+
+* If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output.
+* You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that has your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets.
+* You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`.
+
+**Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments.
+ + +
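+
+If you haven't created these two repository secrets yet, one way to do it from the command line is with GitHub's `gh` CLI. This is only a sketch; it assumes `gh` is authenticated against your repository, that your API key is in the `TELEPRESENCE_API_KEY` shell variable, and that `kubeconfig.yaml` is in the working directory:
+
+```console
+$ gh secret set TELEPRESENCE_API_KEY --body "$TELEPRESENCE_API_KEY"
+$ gh secret set KUBECONFIG_FILE < kubeconfig.yaml
+```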
+
+## Initial configuration setup
+
+To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup:
+
+
+This action only supports Ubuntu runners at the moment.
+
+1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist.
+1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`:
+
+   ```yaml
+   name: Configuring telepresence
+   on: workflow_dispatch
+   jobs:
+     configuring:
+       name: Configure telepresence
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+       #---- here run your custom command to connect to your cluster
+       #- name: Connect to cluster
+       #  shell: bash
+       #  run: ./connect-to-cluster
+       #----
+       - name: Configuring Telepresence
+         uses: datawire/telepresence-actions/configure@v1.0-rc
+         with:
+           version: latest
+   ```
+
+1. Push the `configure-telepresence.yaml` file to your repository.
+1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab.
+
+When the workflow runs, the action caches the Telepresence configuration directory and a Telepresence configuration file, if you provide one. This configuration file should be placed in `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration.
+
+
+When you create a branch, do not remove the `.telepresence/config.yml` file. This is required for Telepresence to run the GitHub action properly when there is a new push to the branch in your repository.
+
+
+## Using Telepresence in your GitHub Actions workflows
+
+1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.
+
+   ```yaml
+   name: Run Integration Tests
+   on:
+     push:
+       branches-ignore:
+       - 'main'
+   jobs:
+     my-job:
+       name: Run Integration Test using Remote Cluster
+       runs-on: ubuntu-latest
+       env:
+         TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }}
+         KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }}
+         KUBECONFIG: /opt/kubeconfig
+       steps:
+       - name: Checkout
+         uses: actions/checkout@v3
+         with:
+           ref: ${{ github.event.pull_request.head.sha }}
+       #---- here run your custom command to run your service
+       #- name: Run your service to test
+       #  shell: bash
+       #  run: ./run_local_service
+       #----
+       - name: Create kubeconfig file
+         run: |
+           cat <<EOF > /opt/kubeconfig
+           ${{ env.KUBECONFIG_FILE }}
+           EOF
+       - name: Install Telepresence
+         uses: datawire/telepresence-actions/install@v1.0-rc
+         with:
+           version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version
+       - name: Telepresence connect
+         uses: datawire/telepresence-actions/connect@v1.0-rc
+       # Log in to Telepresence with your API key
+       - name: Login
+         uses: datawire/telepresence-actions/login@v1.0-rc
+         with:
+           telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }}
+       - name: Intercept the service
+         uses: datawire/telepresence-actions/intercept@v1.0-rc
+         with:
+           service_name: service-name
+           service_port: 8081:8080
+           namespace: namespacename-of-your-service
+           http_header: "x-telepresence-intercept-id=service-intercepted"
+           print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them
+       #---- here run your custom command
+       #- name: Run integrations test
+       #  shell: bash
+       #  run: ./run_integration_test
+       #----
+   ```
+
+The preceding example is a workflow that:
+
+* Checks out the repository code.
+* Has a placeholder step to run the service during CI.
+* Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence.
+* Installs Telepresence.
+* Runs Telepresence Connect.
+* Logs in to Telepresence.
+* Intercepts traffic to the service running in the remote cluster.
+* Has a placeholder step to run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster.
+
+This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service, to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results in your GitHub repository, under the **Actions** tab.
diff --git a/docs/telepresence/2.9/community.md b/docs/telepresence/2.9/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/2.9/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/2.9/concepts/context-prop.md b/docs/telepresence/2.9/concepts/context-prop.md
new file mode 100644
index 000000000..b3eb41e32
--- /dev/null
+++ b/docs/telepresence/2.9/concepts/context-prop.md
@@ -0,0 +1,37 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Metadata propagation means that services and other middleware do not strip away the headers. Propagation facilitates the movement of the injected contexts between other downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation).
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so applying them to
+intercepts is a little unorthodox, but the mechanisms involved are
+already widely deployed because of the prevalence of tracing. The
+headers facilitate the smart routing of requests either to live
+services in the cluster or to services running locally on a developer’s
+machine. The intercepted traffic can be further limited by using
+path-based routing.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation).
The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/2.9/concepts/devloop.md b/docs/telepresence/2.9/concepts/devloop.md
new file mode 100644
index 000000000..86aac87e2
--- /dev/null
+++ b/docs/telepresence/2.9/concepts/devloop.md
@@ -0,0 +1,54 @@
+---
+title: "The developer and the inner dev loop | Ambassador "
+---
+
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, i.e. making the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s roughly a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies, as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/2.9/concepts/devworkflow.md b/docs/telepresence/2.9/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/2.9/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency.
The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/2.9/concepts/faster.md b/docs/telepresence/2.9/concepts/faster.md
new file mode 100644
index 000000000..03dc9bd8b
--- /dev/null
+++ b/docs/telepresence/2.9/concepts/faster.md
@@ -0,0 +1,28 @@
+---
+title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
+---
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
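+
+As a concrete sketch of that workflow (the service name and port below are placeholders), a typical session looks like this:
+
+```console
+$ telepresence connect
+$ telepresence intercept my-service --port 8080
+```
+
+With the intercept in place, requests to the remote service are forwarded to the process listening on port 8080 on your laptop.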
+ +A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration. + +Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/2.9/concepts/intercepts.md b/docs/telepresence/2.9/concepts/intercepts.md new file mode 100644 index 000000000..0a2909be2 --- /dev/null +++ b/docs/telepresence/2.9/concepts/intercepts.md @@ -0,0 +1,208 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+    <div className="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar className="TabBar" elevation={0} position="static">
+          <TabList onChange={(event, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab label="No intercept" value="regular"/>
+            <Tab label="Global intercept" value="global"/>
+            <Tab label="Personal intercept" value="personal"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+
+
+
+# No intercept
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+
+
+# Global intercept
+
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+Personal intercepts are subject to the Ambassador Cloud active service and user limit quotas.
+To read more about these quota limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).
+
+
+
+
+In the illustration above, **Orange**
+requests are being made by Developer 2 on their laptop and the
+**green** requests are made by a teammate,
+Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-header=auto` to have it
+    choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    Chrome
+    {' '}
+    Firefox
+
+
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team!
+
+### Intercepting a specific endpoint
+
+It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
+intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
+flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
+and, contrary to the `--http-header` flag, it cannot be repeated.
+
+The following flags are available:
+
+| Flag | Meaning |
+|-------------------------------|------------------------------------------------------------------|
+| `--http-path-equal <path>` | Only intercept the endpoint for this exact path |
+| `--http-path-prefix <prefix>` | Only intercept endpoints with a matching path prefix |
+| `--http-path-regex <regex>` | Only intercept endpoints that match the given regular expression |
+
+#### Examples:
+
+1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
+   ```
+
+2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
+   ```
+   or, since `--http-header=auto` is implicit when using `--http` options, just:
+   ```shell
+   telepresence intercept SERVICENAME --http-path-equal=/api/version
+   ```
+
+3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":
+
+   ```shell
+   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
+   ```
+
+
+
diff --git a/docs/telepresence/2.9/doc-links.yml b/docs/telepresence/2.9/doc-links.yml
new file mode 100644
index 000000000..427486bc5
--- /dev/null
+++ b/docs/telepresence/2.9/doc-links.yml
@@ -0,0 +1,102 @@
+- title: Quick start
+  link: quick-start
+- title: Install Telepresence
+  items:
+    - title: Install
+      link: install/
+    - title: Upgrade
+      link: install/upgrade/
+    - title: Install Traffic Manager
+      link: install/manager/
+    - title: Install Traffic Manager with Helm
+      link: install/helm/
+    - title: Cloud Provider Prerequisites
+      link: install/cloud/
+    - title: Migrate from legacy Telepresence
+      link: install/migrate-from-legacy/
+- title: Core concepts
+  items:
+    - title: The changing development workflow
+      link: concepts/devworkflow
+    - title: The developer experience and the inner dev loop
+      link: concepts/devloop
+    - title: 'Making the remote local: Faster feedback, collaboration and debugging'
+      link: concepts/faster
+    - title: Context propagation
+      link: concepts/context-prop
+    - title: Types of intercepts
+      link: concepts/intercepts
+- title: How do I...
+  items:
+    - title: Intercept a service in your own environment
+      link: howtos/intercepts
+    - title: Share dev environments with preview URLs
+      link: howtos/preview-urls
+    - title: Proxy outbound traffic to my cluster
+      link: howtos/outbound
+    - title: Host a cluster in a local VM
+      link: howtos/cluster-in-vm
+    - title: Send requests to an intercepted service
+      link: howtos/request
+- title: Telepresence for Docker
+  items:
+    - title: What is Telepresence for Docker
+      link: extension/intro
+    - title: Install into Docker-Desktop
+      link: extension/install
+    - title: Intercept into a Docker Container
+      link: extension/intercept
+- title: Telepresence for CI
+  items:
+    - title: GitHub Actions
+      link: ci/github-actions
+- title: Technical reference
+  items:
+    - title: Architecture
+      link: reference/architecture
+    - title: Client reference
+      link: reference/client
+      items:
+        - title: login
+          link: reference/client/login
+    - title: Laptop-side configuration
+      link: reference/config
+    - title: Cluster-side configuration
+      link: reference/cluster-config
+    - title: Using Docker for intercepts
+      link: reference/docker-run
+    - title: Running Telepresence in a Docker container
+      link: reference/inside-container
+    - title: Environment variables
+      link: reference/environment
+    - title: Intercepts
+      link: reference/intercepts/
+      items:
+        - title: Manually injecting the Traffic Agent
+          link: reference/intercepts/manual-agent
+    - title: Volume mounts
+      link: reference/volume
+    - title: RESTful API service
+      link: reference/restapi
+    - title: DNS resolution
+      link: reference/dns
+    - title: RBAC
+      link: reference/rbac
+    - title: Telepresence and VPNs
+      link: reference/vpn
+    - title: Networking through Virtual Network Interface
+      link: reference/tun-device
+    - title: Connection Routing
+      link: reference/routing
+    - title: Using Telepresence with Linkerd
+      link: reference/linkerd
+- title: FAQs
+  link: faqs
+- title: Troubleshooting
+  link: troubleshooting
+- title: Community
+  link: community
+- title: Release Notes
+  link: release-notes
+- title: Licenses
+  link: licenses
diff --git a/docs/telepresence/2.9/extension/install.md b/docs/telepresence/2.9/extension/install.md
new file mode 100644
index 000000000..471752775
--- /dev/null
+++ b/docs/telepresence/2.9/extension/install.md
@@ -0,0 +1,39 @@
+---
+title: "Telepresence for Docker installation and connection guide"
+description: "Learn how to install and update Ambassador Labs' Telepresence for Docker."
+indexable: true
+---
+
+# Install and connect the Telepresence Docker extension
+
+[Docker](https://docker.com), the popular containerized runtime environment, now offers the [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension for Docker Desktop. With this extension, you can quickly install Telepresence and begin using its features with your Docker containers in a matter of minutes.
+
+## Install Telepresence for Docker
+
+Telepresence for Docker is available through Docker Desktop. To install Telepresence for Docker:
+
+1. Open Docker Desktop.
+2. In the Docker Dashboard, click **Add Extensions** in the left navigation bar.
+3. In the Extensions Marketplace, search for the Ambassador Telepresence extension.
+4. Click **Install**.
+
+## Connect to Ambassador Cloud through the Telepresence extension
+
+ After you install the Telepresence extension in Docker Desktop, you need to generate an API key to connect the Telepresence extension to Ambassador Cloud.
+
+ 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.
+
+ 2. Click the **Get API Key** button to open Ambassador Cloud in a browser window.
+
+ 3. Sign in with your Google, GitHub, or GitLab account.
+    Ambassador Cloud opens to your profile and displays the API key.
+
+ 4. Copy the API key and paste it into the API key field in the Docker Dashboard.
+
+## Connect to your cluster in Docker Desktop
+
+ 1. Select the desired cluster from the dropdown menu and click **Next**.
+    This cluster is now set as kubectl's current context.
+
+ 2. Click **Connect to [your cluster]**.
+    Your cluster is connected and you can now create [intercepts](../intercept/).
\ No newline at end of file
diff --git a/docs/telepresence/2.9/extension/intercept.md b/docs/telepresence/2.9/extension/intercept.md
new file mode 100644
index 000000000..3868407a8
--- /dev/null
+++ b/docs/telepresence/2.9/extension/intercept.md
@@ -0,0 +1,48 @@
+---
+title: "Create an intercept with Telepresence for Docker"
+description: "Create an intercept with Telepresence for Docker. With Telepresence, you can create intercepts to debug your services."
+indexable: true
+---
+
+# Create an intercept
+
+With the Telepresence for Docker extension, you can create [personal intercepts](../../concepts/intercepts/?intercept=personal). These intercepts route the cluster traffic through a proxy URL to your local Docker container. Follow the instructions below to create an intercept with Docker Desktop.
+
+## Prerequisites
+
+Before you begin, you need:
+- [Docker Desktop](https://www.docker.com/products/docker-desktop).
+- The [Telepresence](../../../../../kubernetes-learning-center/telepresence-docker-extension/) extension [installed](../install).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), the Kubernetes command-line tool.
+
+This guide assumes you have a Kubernetes deployment with a running service, and that you can run a copy of that service in a Docker container on your laptop.
+
+## Copy the service you want to intercept
+
+Once you have [installed and connected](../install/) the Telepresence extension, you need to copy the service. To do this, use the `docker run` command with the following flags (the image name is a placeholder):
+
+ ```console
+ $ docker run --rm -it --network host <your-image>
+ ```
+
+The Telepresence extension requires the target service to be on the host network. This allows Telepresence to share a network with your container. The mounted network device redirects cluster-related traffic back into the cluster.
+
+## Intercept a service
+
+In Docker Desktop, the Telepresence extension shows all the services in the namespace.
+
+ 1. Choose a service to intercept and click the **Intercept** button.
+
+ 2. Select the service port for the intercept from the dropdown.
+
+ 3. Enter the target port of the service you previously copied in the Docker container.
+
+ 4. Click **Submit** to create the intercept.
+
+The intercept now shows up in the Docker Telepresence extension.
+
+## Test your code
+
+Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and run your container on Docker's host network. All the traffic previously routed to and from your Kubernetes service is now routed to and from your local container.
+
+Click the globe icon next to your intercept to get the preview URL.
From here, you can view the intercept details in Ambassador Cloud, open the preview URL in your browser to see the changes you've made in real time, or share the preview URL with teammates so they can review your work.
\ No newline at end of file
diff --git a/docs/telepresence/2.9/extension/intro.md b/docs/telepresence/2.9/extension/intro.md
new file mode 100644
index 000000000..6a653ae06
--- /dev/null
+++ b/docs/telepresence/2.9/extension/intro.md
@@ -0,0 +1,29 @@
+---
+title: "Telepresence for Docker introduction"
+description: "Learn about the Telepresence extension for Docker."
+indexable: true
+---
+
+# Telepresence for Docker
+
+Telepresence is now available as a [Docker Extension](https://www.docker.com/products/extensions/) for Docker Desktop.
+
+## What is the Telepresence extension for Docker?
+
+The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to other containers on the Docker host network.
+
+## What does the Telepresence Docker extension do?
+
+Telepresence for Docker is designed to simplify your coding experience and let you test your code faster. Traditionally, you need to build a container in Docker with your code changes, push it, wait for it to upload, deploy the changes, verify them, view them, and repeat that process as you continually test your changes. This makes for a slow and cumbersome process when you need to test changes continually.
+
+With the Telepresence extension for Docker Desktop, you can use intercepts to immediately preview changes as you make them, without the need to redeploy after every change. Because the Telepresence extension also enables you to isolate your machine and operate it entirely within the Docker runtime, you can make changes without root permission on your machine.
+
+## How does Telepresence for Docker work?
+
+The Telepresence extension is configured to use Docker's host network (VM network for Windows and Mac, host network on Linux).
+
+Telepresence runs entirely within containers. The Telepresence daemons run in a container, which can be given commands using the extension UI. When Telepresence intercepts a service, it redirects cloud traffic to other containers on the Docker host network.
+
+## What do I need to begin?
+
+All you need is [Docker Desktop](https://www.docker.com/products/docker-desktop) with the [Ambassador Telepresence extension installed](../install) and the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
diff --git a/docs/telepresence/2.9/faqs.md b/docs/telepresence/2.9/faqs.md
new file mode 100644
index 000000000..3c37f1cc5
--- /dev/null
+++ b/docs/telepresence/2.9/faqs.md
@@ -0,0 +1,124 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging.
The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access with additional developers or stakeholders to the application via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you `telepresence intercept <service> -n <namespace>`, you will be able to resolve just the `<service>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
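+
+As a concrete sketch (the service and namespace names here are placeholders):
+
+```console
+$ telepresence connect
+$ curl my-service.my-namespace:8080/mypath
+```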
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+On first use, Telepresence will try to discover this information, make its best guess, and prompt you to confirm or update it.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+ In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will be garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+ Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or a cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+The local daemon needs sudo to create iptables mappings. Telepresence uses this to create outbound access from the laptop to the cluster.
+
+On Fedora, Telepresence also creates a virtual network device (a TUN network) for DNS routing. That also requires root access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon running.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application and cluster components are written using Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/2.9/howtos/cluster-in-vm.md b/docs/telepresence/2.9/howtos/cluster-in-vm.md
new file mode 100644
index 000000000..4762344c9
--- /dev/null
+++ b/docs/telepresence/2.9/howtos/cluster-in-vm.md
@@ -0,0 +1,192 @@
+---
+title: "Considerations for locally hosted clusters | Ambassador"
+description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+---
+
+# Network considerations for locally hosted clusters
+
+## The problem
+Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes.
+
+### Example:
+A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets: the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again.
+
+Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections.
+
+## Solution
+
+### Create a bridge network
+A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host.
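+
+With VirtualBox, for example, one way to do this is to attach the guest's network adapter to a bridged interface before booting it. This is only a sketch; the VM name and host interface below are placeholders:
+
+```console
+$ VBoxManage modifyvm "k3s-node" --nic1 bridged --bridgeadapter1 wlp5s0
+```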
### Vagrant + Virtualbox + k3s example

Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running.

#### Vagrantfile
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# bridge is the name of the host's default network device
$bridge = 'wlp5s0'

# default_route should be the IP of the host's default route.
$default_route = '192.168.1.1'

# nameserver must be the IP of an external DNS, such as 8.8.8.8
$nameserver = '8.8.8.8'

# server_name should also be added to the host's /etc/hosts file and point to the server_ip
# for easy access when pushing docker images
server_name = 'multi'

# static IPs for the server and agents. Those IPs must be on the default router's subnet
server_ip = '192.168.1.110'
agents = {
  'agent1' => '192.168.1.111',
  'agent2' => '192.168.1.112',
}

# Extra parameters in INSTALL_K3S_EXEC variable because of
# K3s picking up the wrong interface when starting server and agent
# https://github.com/alexellis/k3sup/issues/306
server_script = <<-SHELL
  sudo -i
  apk add curl
  export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
  mkdir -p /etc/rancher/k3s
  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  "multi:5000":
    endpoint:
      - "http://#{server_ip}:5000"
EOF
  curl -sfL https://get.k3s.io | sh -
  echo "Sleeping for 5 seconds to wait for k3s to start"
  sleep 5
  cp /var/lib/rancher/k3s/server/token /vagrant_shared
  cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
  cp /etc/rancher/k3s/registries.yaml /vagrant_shared
  SHELL

agent_script = <<-SHELL
  sudo -i
  apk add curl
  export K3S_TOKEN_FILE=/vagrant_shared/token
  export K3S_URL=https://#{server_ip}:6443
  export INSTALL_K3S_EXEC="--flannel-iface=eth1"
  mkdir -p /etc/rancher/k3s
  cat <<-'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  "multi:5000":
    endpoint:
      - "http://#{server_ip}:5000"
EOF
  curl -sfL https://get.k3s.io | sh -
  SHELL

def config_vm(name, ip, script, vm)
  # The network_script has two objectives:
  # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host)
  # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that
  #    the NAT network provides.
  network_script = <<-SHELL
    sudo -i
    ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route}
    cp /etc/resolv.conf /etc/resolv.conf.orig
    sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf
    SHELL

  vm.hostname = name
  vm.network 'public_network', bridge: $bridge, ip: ip
  vm.synced_folder './shared', '/vagrant_shared'
  vm.provider 'virtualbox' do |vb|
    vb.memory = '4096'
    vb.cpus = '2'
  end
  vm.provision 'shell', inline: script
  vm.provision 'shell', inline: network_script, run: 'always'
end

Vagrant.configure('2') do |config|
  config.vm.box = 'generic/alpine314'

  config.vm.define 'server', primary: true do |server|
    config_vm(server_name, server_ip, server_script, server.vm)
  end

  agents.each do |agent_name, agent_ip|
    config.vm.define agent_name do |agent|
      config_vm(agent_name, agent_ip, agent_script, agent.vm)
    end
  end
end
```

The Kubernetes manifest to add the registry:

#### registry.yaml
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
spec:
  replicas: 1
  selector:
    app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        app: kube-registry
        version: v0
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        hostPath:
          path: /var/lib/registry-storage
---
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    app: kube-registry
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    app: kube-registry
  ports:
  - name: registry
    port: 5000
    targetPort: 5000
    protocol: TCP
  type: LoadBalancer
```
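Once the cluster is up, you can apply the manifest and push images from the host (a sketch; the image name `example-app:dev` is illustrative, and `multi` must resolve to the server's IP as described above):

```shell
# Register the in-cluster registry, then tag and push a locally built image
# to it so the cluster's nodes can pull it via the "multi:5000" mirror.
kubectl apply -f registry.yaml
docker tag example-app:dev multi:5000/example-app:dev
docker push multi:5000/example-app:dev
```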
diff --git a/docs/telepresence/2.9/howtos/intercepts.md b/docs/telepresence/2.9/howtos/intercepts.md new file mode 100644 index 000000000..87bd9f92b --- /dev/null +++ b/docs/telepresence/2.9/howtos/intercepts.md @@ -0,0 +1,108 @@
---
description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
---

import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from '../quick-start/qs-cards'

# Intercept a service in your own environment

Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.

For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).

## Prerequisites

Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).

This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.

## Intercept your service with a global intercept

With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.

1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:

   ```console
   $ curl -ik https://kubernetes.default
   HTTP/1.1 401 Unauthorized
   Cache-Control: no-cache, private
   Content-Type: application/json
   ...
   ```

   The 401 response is expected when you first connect.

   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.

   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).

2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:

   ```console
   $ telepresence list
   ...
   example-service: ready to intercept (traffic-agent not yet installed)
   ...
   ```

3. Get the name of the port you want to intercept on your service: `kubectl get service <service-name> --output yaml`.

   For example:

   ```console
   $ kubectl get service example-service --output yaml
   ...
     ports:
     - name: http
       port: 80
       protocol: TCP
       targetPort: http
   ...
   ```

4. Intercept all traffic going to the service in your cluster: `telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>`.
   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.

   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster are now routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.

   ```console
   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
   Using Deployment example-service
   intercepted
       Intercept name: example-service
       State         : ACTIVE
       Workload kind : Deployment
       Destination   : 127.0.0.1:8080
       Intercepting  : all TCP connections
   ```

5. Start your local environment using the environment variables retrieved in the previous step.

   The following are some examples of how to pass the environment variables to your local process:
   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
6. Query the environment in which you intercepted a service and verify that your local instance is being invoked. All the traffic previously routed to your Kubernetes Service is now routed to your local environment.

You can now:
- Make changes on the fly and see them reflected when interacting with your Kubernetes environment.
- Query services only exposed in your cluster's network.
- Set breakpoints in your IDE to investigate bugs.

**Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.

diff --git a/docs/telepresence/2.9/howtos/outbound.md b/docs/telepresence/2.9/howtos/outbound.md new file mode 100644 index 000000000..48877df8c --- /dev/null +++ b/docs/telepresence/2.9/howtos/outbound.md @@ -0,0 +1,89 @@
---
description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
---

import Alert from '@material-ui/lab/Alert';

# Proxy outbound traffic to my cluster

While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.

This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.

## Proxying outbound traffic

Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.

When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.

1. Run `telepresence connect` and enter your password to run the daemon.

   ```
   $ telepresence connect
   Launching Telepresence Daemon v2.3.7 (api v3)
   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
   [sudo] password:
   Connecting to traffic manager...
   Connected to context default (https://<cluster-public-IP>)
   ```

2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.

   ```
   $ telepresence status
   Root Daemon: Running
     Version     : v2.3.7 (api 3)
     Primary DNS : ""
     Fallback DNS: ""
   User Daemon: Running
     Version           : v2.3.7 (api 3)
     Ambassador Cloud  : Logged out
     Status            : Connected
     Kubernetes server : https://<cluster-public-IP>
     Kubernetes context: default
     Telepresence proxy: ON (networking to the cluster is enabled)
     Intercepts        : 0 total
   ```

3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.

   ```
   $ curl web-app.emojivoto:80
   <!DOCTYPE html>
   <html>
     <head>
       <title>Emoji Vote</title>
   ...
   ```

If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
   ```
   $ telepresence quit
   Telepresence Daemon quitting...done
   ```

When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (`<service-name>.<namespace>`) before you start an intercept. After you start an intercept, only `<service-name>` is required. Read more about these differences in the DNS resolution reference guide.

## Controlling outbound connectivity

By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <namespaces>` flag to control which namespaces are accessible.

When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept.
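For example, to restrict Telepresence to the two namespaces your team uses (a sketch; the namespace names are illustrative, and the flag is given when connecting):

```shell
# Only map the dev and staging namespaces on this workstation.
telepresence connect --mapped-namespaces dev,staging
```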
### Using local-only intercepts

When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing services that aren't deployed to the cluster, it can be necessary to provide outbound connectivity to the namespace where the service will be deployed, so that the service under development can reach other services in that namespace without using qualified names. A local-only intercept provides exactly this connectivity. Note that a local-only intercept does not cause outbound connections to originate from the intercepted namespace: establishing a correct origin would require routing the connection through the `traffic-agent` of an intercepted pod, and for local-only intercepts the outbound connections originate from the `traffic-manager` instead.

To control outbound connectivity to specific namespaces, add the `--local-only` flag:

   ```
   $ telepresence intercept <intercept-name> --namespace <namespace> --local-only
   ```

The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. You can deactivate the intercept with `telepresence leave <intercept-name>`. This removes unqualified name access.

### Proxy outbound connectivity for laptops

To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/cluster-config/#alsoproxy) for more details.

diff --git a/docs/telepresence/2.9/howtos/preview-urls.md b/docs/telepresence/2.9/howtos/preview-urls.md new file mode 100644 index 000000000..15a1c5181 --- /dev/null +++ b/docs/telepresence/2.9/howtos/preview-urls.md @@ -0,0 +1,127 @@
---
title: "Share dev environments with preview URLs | Ambassador"
description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
---

import Alert from '@material-ui/lab/Alert';

# Share development environments with preview URLs

Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence routes only the requests coming from that preview URL to your local environment; requests to the ingress are routed to your cluster as usual.

Preview URLs are protected behind authentication through Ambassador Cloud, and access to a URL is limited to users in your organization. You can also make a URL publicly accessible for sharing with outside collaborators.

## Creating a preview URL

1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed. Telepresence only supports Deployment, ReplicaSet, and StatefulSet workloads with a label that matches a Service.

2. Enter `telepresence login` to launch Ambassador Cloud in your browser.

   If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).

3. Start the intercept with `telepresence intercept <service-name> --port <port> --env-file <path-to-env-file>` and adjust the flags as follows:
   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.

4. Answer the question prompts.
   * **What's your ingress' IP address?**: the IP address or DNS name at which your ingress controller can be reached (this is usually a `service.namespace` DNS name).
   * **What's your ingress' TCP port number?**: the port your ingress controller is listening to. This is often 443 for TLS ports, and 80 for non-TLS ports.
   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.

   The example below shows a preview URL for `example-service` which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:

   ```console
   $ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env

   To create a preview URL, telepresence needs to know how cluster
   ingress works for this service. Please Confirm the ingress to use.

   1/4: What's your ingress' IP address?
        You may use an IP address or a DNS name (this is usually a
        "service.namespace" DNS name).

        [default: -]: ambassador.ambassador

   2/4: What's your ingress' TCP port number?

        [default: -]: 443

   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

        [default: n]: y

   4/4: If required by your ingress, specify a different hostname
        (TLS-SNI, HTTP "Host" header) to be used in requests.

        [default: ambassador.ambassador]: dev-environment.edgestack.me

   Using deployment example-service
   intercepted
       Intercept name         : example-service
       State                  : ACTIVE
       Destination            : 127.0.0.1:8080
       Service Port Identifier: http
       Intercepting           : HTTP requests that match all of:
         header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service")
       Preview URL            : https://<random-subdomain>.preview.edgestack.me
       Layer 5 Hostname       : dev-environment.edgestack.me
   ```

5. Start your local environment using the environment variables retrieved in the previous step.

   Here are some examples of how to pass the environment variables to your local process:
   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument.
     For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).

6. Go to the preview URL generated from the intercept. Traffic is now intercepted from your preview URL without impacting other traffic from your ingress.

   **Didn't work?** It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the `x-telepresence-intercept-id` HTTP header. Read more on context propagation.

7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.

   Normal traffic coming into the cluster through the ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal.

8. Share with a teammate.

   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and organization as you are using; this authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.

   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.

## Sharing a preview URL with people outside your team

To collaborate with someone outside of your identity provider's organization: log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL.

## Change access restrictions

To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Select the service you want to share and open the service details page.
3. Click the **Intercepts** tab and expand the preview URL details.
4. Click **Make Publicly Accessible**.

Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running in your local environment.

To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.

## Remove a preview URL from an intercept

To delete a preview URL and remove all access to the intercepted service:

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Click on the service you want to share and open the service details page.
3. Click the **Intercepts** tab and expand the preview URL details.
4. Click **Remove Preview**.
Alternatively, you can remove a preview URL with the following command:
`telepresence preview remove <intercept-name>`

diff --git a/docs/telepresence/2.9/howtos/request.md b/docs/telepresence/2.9/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/2.9/howtos/request.md @@ -0,0 +1,12 @@
import Alert from '@material-ui/lab/Alert';

# Send requests to an intercepted service

Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.

1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
2. Navigate to the desired service's Intercepts page.
3. Click the **Query** button to open the pop-up menu.
4. Toggle between **CURL**, **Headers** and **Browse**.

The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
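The generated query is typically a `curl` invocation that sets the intercept header on the request (a sketch; the header value and hostname are illustrative, so copy the real ones from Ambassador Cloud or from your own intercept output):

```shell
# Requests carrying the intercept header are routed to your local instance;
# requests without it continue to the service running in the cluster.
curl -H "x-telepresence-intercept-id: <intercept-id>:example-service" \
  https://dev-environment.edgestack.me/
```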
diff --git a/docs/telepresence/2.9/images/container-inner-dev-loop.png b/docs/telepresence/2.9/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/2.9/images/container-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.9/images/docker-header-containers.png b/docs/telepresence/2.9/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/2.9/images/docker-header-containers.png differ
diff --git a/docs/telepresence/2.9/images/github-login.png b/docs/telepresence/2.9/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/2.9/images/github-login.png differ
diff --git a/docs/telepresence/2.9/images/logo.png b/docs/telepresence/2.9/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/2.9/images/logo.png differ
diff --git a/docs/telepresence/2.9/images/split-tunnel.png b/docs/telepresence/2.9/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/2.9/images/split-tunnel.png differ
diff --git a/docs/telepresence/2.9/images/tracing.png b/docs/telepresence/2.9/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/2.9/images/tracing.png differ
diff --git a/docs/telepresence/2.9/images/trad-inner-dev-loop.png b/docs/telepresence/2.9/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/2.9/images/trad-inner-dev-loop.png differ
diff --git a/docs/telepresence/2.9/images/tunnelblick.png b/docs/telepresence/2.9/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/2.9/images/tunnelblick.png differ
diff --git a/docs/telepresence/2.9/images/vpn-dns.png b/docs/telepresence/2.9/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/2.9/images/vpn-dns.png differ
diff --git a/docs/telepresence/2.9/install/cloud.md b/docs/telepresence/2.9/install/cloud.md new file mode 100644 index 000000000..9bcf9e63e --- /dev/null +++ b/docs/telepresence/2.9/install/cloud.md @@ -0,0 +1,43 @@
# Provider Prerequisites for Traffic Manager

## GKE

### Firewall Rules for private clusters

A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's webhook injector from being invoked by the Kubernetes API server. For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`:

```bash
$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock
  masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28

$ gcloud compute firewall-rules list \
    --filter 'name~^gke-tele-webhook-gke' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

NAME                                  NETWORK           DIRECTION  SRC_RANGES     ALLOW                         TARGET_TAGS
gke-tele-webhook-gke-33fa1791-all     tele-webhook-net  INGRESS    10.40.0.0/14   esp,ah,sctp,tcp,udp,icmp      gke-tele-webhook-gke-33fa1791-node
gke-tele-webhook-gke-33fa1791-master  tele-webhook-net  INGRESS    172.16.0.0/28  tcp:10250,tcp:443             gke-tele-webhook-gke-33fa1791-node
gke-tele-webhook-gke-33fa1791-vms     tele-webhook-net  INGRESS    10.128.0.0/9   icmp,tcp:1-65535,udp:1-65535  gke-tele-webhook-gke-33fa1791-node
# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node

$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.0/28 \
    --rules tcp:8443 \
    --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook].
Creating firewall...done.
NAME                          NETWORK           DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
gke-tele-webhook-gke-webhook  tele-webhook-net  INGRESS    1000      tcp:8443        False
```

diff --git a/docs/telepresence/2.9/install/helm.md b/docs/telepresence/2.9/install/helm.md new file mode 100644 index 000000000..2709ee8f3 --- /dev/null +++ b/docs/telepresence/2.9/install/helm.md @@ -0,0 +1,181 @@
# Install the Traffic Manager with Helm

[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps.

For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).

## Before you begin

Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html).

The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`.
Start by adding this repo to your Helm client with the following command:

```shell
helm repo add datawire https://app.getambassador.io
helm repo update
```

## Install with Helm

When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager.

1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster:

   ```shell
   kubectl create namespace ambassador
   ```

2. Install the Telepresence Traffic Manager with the following command:

   ```shell
   helm install traffic-manager --namespace ambassador datawire/telepresence
   ```

### Install into custom namespace

The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`. For example, if you wanted to deploy the traffic manager to the `staging` namespace:

```bash
helm install traffic-manager --namespace staging datawire/telepresence
```

Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

See [the kubeconfig documentation](../../reference/config#manager) for more information.

### Upgrading the Traffic Manager

Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.

Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values:

```shell
helm repo up
helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
```

If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1`

## RBAC

### Installing a namespace-scoped traffic manager

You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.

For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. To do this, create a `values.yaml` like the following:

```yaml
managerRbac:
  create: true
  namespaced: true
  namespaces:
  - dev
  - staging
```

This can then be installed via:

```bash
helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
```

**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects.

#### Namespace collision detection

The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. It does this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
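You can check which install has claimed a given namespace by inspecting that ConfigMap (a sketch; `dev` and `staging` are the example namespaces used below):

```shell
# Each namespace managed by a namespace-scoped Traffic Manager carries a
# claim ConfigMap owned by the Helm release that manages it.
kubectl get configmap traffic-manager-claim --namespace dev
kubectl get configmap traffic-manager-claim --namespace staging
```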
So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:

```bash
helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
```

You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:

```bash
helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
```

This would fail with an error:

```
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
```

To fix this error, resolve the overlap by removing `staging` either from the first install or from the second.

#### Namespace-scoped user permissions

Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself. You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to perform telepresence commands on certain namespaces.

Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):

```yaml
clientRbac:
  create: true

  # These are the users or groups to which the user rbac will be bound.
  # This MUST be set.
  subjects: {}
  # - kind: User
  #   name: jane
  #   apiGroup: rbac.authorization.k8s.io

  namespaced: true

  namespaces:
  - dev
  - staging
```

#### Namespace-scoped webhook

If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    app.kubernetes.io/name: staging
```

You can also use `kubectl label` to add the label to an existing namespace, e.g.:

```shell
kubectl label namespace staging app.kubernetes.io/name=staging
```

This is required because the mutating webhook uses the name label to find namespaces to operate on.

**NOTE** This labelling happens automatically in Kubernetes >= 1.21.

### Installing RBAC only

The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users. To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to only create the RBAC-related objects. Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
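For example, an RBAC-only install might look like this (a sketch; the release name `traffic-manager-rbac` is illustrative):

```shell
# Render only the RBAC objects, for both the traffic-manager and its users.
helm install traffic-manager-rbac datawire/telepresence \
  --namespace ambassador \
  --set rbac.only=true \
  --set clientRbac.create=true \
  --set managerRbac.create=true
```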
diff --git a/docs/telepresence/2.9/install/index.md b/docs/telepresence/2.9/install/index.md new file mode 100644 index 000000000..624cb33d6 --- /dev/null +++ b/docs/telepresence/2.9/install/index.md @@ -0,0 +1,153 @@
import Platform from '@src/components/Platform';

# Install

Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.

```shell
# Intel Macs

# Install via brew:
brew install datawire/blackbird/telepresence

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

# Apple silicon Macs

# Install via brew:
brew install datawire/blackbird/telepresence-arm64

# OR install manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```powershell
# To install Telepresence, run the following commands
# from PowerShell as Administrator.

# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip

# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
Remove-Item 'telepresence.zip'
cd telepresenceInstaller/telepresence

# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"

# 4. Remove the unzipped directory:
cd ../..
Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force

# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
```
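After installing, you can confirm that the binary is on your `PATH` and see which version you have (a minimal sanity check):

```shell
telepresence version
```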
## What's Next?

Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.

## Installing nightly versions of Telepresence

We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.

The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.

`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new version of Telepresence is released.

`$gitShortHash` will be the short hash of the git commit of the build.

Use these URLs to download the most recent nightly build.

```shell
# Intel Macs
https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence

# Apple silicon Macs
https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
```

```
https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
```

```
https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
```

## Installing older versions of Telepresence

Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.

```shell
# Intel Macs
https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence

# Apple silicon Macs
https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
```

```
https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
```

```
https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
```

diff --git a/docs/telepresence/2.9/install/manager.md b/docs/telepresence/2.9/install/manager.md new file mode 100644 index 000000000..9a747d895 --- /dev/null +++ b/docs/telepresence/2.9/install/manager.md @@ -0,0 +1,53 @@
# Install/Uninstall the Traffic Manager

Telepresence uses a Traffic Manager to send and receive traffic between the cluster and the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.

## Prerequisites

Before you begin, you need to have [Telepresence installed](../../install/). In addition, you may need certain prerequisites depending on your cloud provider and platform. See the [cloud provider installation notes](../../install/cloud) for more.

## Install the Traffic Manager

The Telepresence CLI can install the Traffic Manager for you. The basic install deploys the same version as the client being used.

1. Install the Telepresence Traffic Manager with the following command:

   ```shell
   telepresence helm install
   ```

### Customizing the Traffic Manager

For details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence).

1. Create a `values.yaml` file with your config values.

2. Run the install command with the `--values` flag set to the path of your values file:

   ```shell
   telepresence helm install --values values.yaml
   ```
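For example, a minimal customization might look like this (a sketch; `logLevel` is an illustrative chart option, so consult the chart README for the full set of values):

```shell
# Write a small values file, then install the Traffic Manager with it.
cat > values.yaml <<EOF
logLevel: debug
EOF
telepresence helm install --values values.yaml
```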
## Upgrading/Downgrading the Traffic Manager

1. Download the CLI of the version of Telepresence you wish to use.

2. Run the install command with the `--upgrade` flag:

   ```shell
   telepresence helm install --upgrade
   ```

## Uninstall

The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).

1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:

   ```shell
   telepresence helm uninstall
   ```

diff --git a/docs/telepresence/2.9/install/migrate-from-legacy.md b/docs/telepresence/2.9/install/migrate-from-legacy.md new file mode 100644 index 000000000..94307dfa1 --- /dev/null +++ b/docs/telepresence/2.9/install/migrate-from-legacy.md @@ -0,0 +1,110 @@
# Migrate from legacy Telepresence

[Telepresence](/products/telepresence/) (formerly referenced as Telepresence 2, which is the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.

In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".

In practice, this mechanism, while simple in concept, had some challenges: losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.

Telepresence 2 introduces a [new architecture](../../reference/architecture/) built around "intercepts" that addresses these problems. With the new Telepresence, a sidecar proxy ("traffic agent") is injected into the pod. The proxy then intercepts traffic intended for the pod and routes it to the workstation/laptop. The advantage of this approach is that the service is running at all times, and no swapping is needed. The proxy approach also makes personal intercepts possible: rather than re-routing all traffic to the laptop/workstation, only the traffic designated as belonging to a given user is re-routed, so multiple developers can intercept the same service at the same time without disrupting normal operation or each other.

Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.

## Using legacy Telepresence commands

First please ensure you've [installed Telepresence](../).

Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the Telepresence binary.

For example, say you have a deployment (`myserver`) that you want to swap deployment (equivalent to an intercept in Telepresence) with a Python server. You could run the following command:

```
$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
< help text >

Legacy telepresence command used
Command roughly translates to the following in Telepresence:
telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
running...
Connecting to traffic manager...
Connected to context <your-context>
Using Deployment myserver
intercepted
    Intercept name : myserver
    State          : ACTIVE
    Workload kind  : Deployment
    Destination    : 127.0.0.1:9090
    Intercepting   : all TCP connections
Serving HTTP on :: port 9090 (http://[::]:9090/) ...
```

Telepresence will let you know what the legacy Telepresence command has mapped to and automatically runs it.
So you can get started with Telepresence today, using the commands you are used to, and it will help you learn the Telepresence syntax.

### Legacy command mapping

Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and are supported).

| Legacy Telepresence Command                    | Telepresence Command                      |
|------------------------------------------------|-------------------------------------------|
| --swap-deployment $workload                    | intercept $workload                       |
| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
| --run-shell                                    | connect -- bash                           |
| --run $cmd                                     | connect -- $cmd                           |
| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
| --context,--namespace                          | --context, --namespace (haven't changed)  |
| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |

### Legacy Telepresence command limitations

Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or aren't yet supported. For some known popular commands, such as `--method`, Telepresence will include output letting you know that the flag has gone away. For flags that Telepresence can't translate yet, it will let you know that the flag is "unsupported".

If Telepresence is missing any flags or functionality that is integral to your usage, please let us know by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!

## Telepresence changes

Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including with `--swap-deployment`), and leaves them in place. If you use `--swap-deployment`, the intercept will be stopped once the process dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you want to remove it from the cluster, the following Telepresence command will help:

```
$ telepresence uninstall --help
Uninstall telepresence agents

Usage:
  telepresence uninstall [flags] { --agent <agents...> | --all-agents }

Flags:
  -d, --agent              uninstall intercept agent on specific deployments
  -a, --all-agents         uninstall intercept agent on all deployments
  -h, --help               help for uninstall
  -n, --namespace string   If present, the namespace scope for this CLI request
```

Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.

The Traffic Manager can be uninstalled using `telepresence helm uninstall`. \ No newline at end of file diff --git a/docs/telepresence/2.9/install/upgrade.md b/docs/telepresence/2.9/install/upgrade.md new file mode 100644 index 000000000..8272b4844 --- /dev/null +++ b/docs/telepresence/2.9/install/upgrade.md @@ -0,0 +1,81 @@
---
description: "How to upgrade your installation of Telepresence and install previous versions."
---

# Upgrade Process

The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur` if your current version is less than 2.8.0).

```shell
# Intel Macs

# Upgrade via brew:
brew upgrade datawire/blackbird/telepresence

# OR upgrade manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

# Apple silicon Macs

# Upgrade via brew:
brew upgrade datawire/blackbird/telepresence-arm64

# OR upgrade manually:
# 1. Download the latest binary (~60 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```shell
# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence

# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence
```

```powershell
# To upgrade Telepresence, run the following commands
# from PowerShell as Administrator.

# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip

# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
Remove-Item 'telepresence.zip'
cd telepresenceInstaller/telepresence

# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"

# 4. Remove the unzipped directory:
cd ../..
Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force

# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
```

The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade the Traffic Manager in your cluster.

diff --git a/docs/telepresence/2.9/new-in-2.9.md b/docs/telepresence/2.9/new-in-2.9.md new file mode 100644 index 000000000..1de231a80 --- /dev/null +++ b/docs/telepresence/2.9/new-in-2.9.md @@ -0,0 +1,21 @@
# What's new in Telepresence 2.9.0?

## Global Client Configuration

All values that could previously be configured only in a local `config.yml` file (or, in some cases, in a Kubernetes extension in the kubeconfig) on each workstation can now be configured using a `client:` structure in the Helm chart. A client will configure itself according to this global configuration whenever it connects to a cluster. The local configuration still exists and takes precedence.
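For example, a client timeout that previously lived in each workstation's `config.yml` can now be set once for everyone (a sketch; the `timeouts.agentInstall` key is an illustrative client option, and the values file is applied with the Traffic Manager install command):

```shell
# Distribute a client-side default to every workstation via the chart values.
cat > client-values.yaml <<EOF
client:
  timeouts:
    agentInstall: 1m
EOF
telepresence helm install --values client-values.yaml
```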
## View the Client Configuration

A new `telepresence config view` command makes it easy to view the current client configuration, as produced by merging the configuration provided by the traffic-manager with the local configuration. When called with `--client-only`, the command will only show the configuration stored in the client's `config.yml` file.

## YAML Output

The `--output` flag, which is global to all `telepresence` commands, now accepts `yaml` in addition to `json`, so that output from commands like `telepresence config view` can be nicely formatted.

diff --git a/docs/telepresence/2.9/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/2.9/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@
import queryString from 'query-string';
import React, { useEffect, useState } from 'react';

import Embed from '../../../../src/components/Embed';
import Icon from '../../../../src/components/Icon';
import Link from '../../../../src/components/Link';

import './telepresence-quickstart-landing.less';

/** @type React.FC<React.SVGProps<SVGSVGElement>> */
const RightArrow = (props) => (
  <svg {...props}>
    {/* … (arrow path data elided) */}
  </svg>
);

const TelepresenceQuickStartLanding = () => {
  const [getStartedUrl, setGetStartedUrl] = useState(
    'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start',
  );

  const getUrlFromQueryParams = () => {
    const { docs_source, docs_campaign } = queryString.parse(
      window.location.search,
    );

    if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') {
      setGetStartedUrl(
        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops',
      );
    } else if (
      docs_source === 'cloud-quickstart-ad' &&
      docs_campaign === 'environments'
    ) {
      setGetStartedUrl(
        'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments',
      );
    }
  };

  useEffect(() => {
    getUrlFromQueryParams();
  }, []);

  return (
    <div className="telepresence-quickstart-landing">
      <h1>Telepresence</h1>
      <p>
        Set up your ideal development environment for Kubernetes in seconds.
        Accelerate your inner development loop with hot reload using your
        existing IDE, and workflow.
      </p>

      <div>
        <h2>Set Up Telepresence with Ambassador Cloud</h2>
        <p>
          Seamlessly integrate Telepresence into your existing Kubernetes
          environment by following our 3-step setup guide.
        </p>
        <Link to={getStartedUrl}>Get Started</Link>
        <p>
          <strong>Do it Yourself:</strong>{' '}
          install Telepresence and manually connect to your Kubernetes
          workloads.
        </p>
      </div>

      <div>
        <h2>What Can Telepresence Do for You?</h2>
        <p>Telepresence gives Kubernetes application developers:</p>
        <ul>
          <li>Instant feedback loops</li>
          <li>Remote development environments</li>
          <li>Access to your favorite local tools</li>
          <li>Easy collaborative development with teammates</li>
        </ul>
        <Link to="…">
          LEARN MORE{' '}
          <RightArrow />
        </Link>
      </div>
    </div>
  );
};

export default TelepresenceQuickStartLanding;

diff --git a/docs/telepresence/2.9/quick-start/demo-node.md b/docs/telepresence/2.9/quick-start/demo-node.md new file mode 100644 index 000000000..c1725fe30 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/demo-node.md @@ -0,0 +1,155 @@
---
description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging."
---

import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata';
import {
EmojivotoServicesList,
DCPLink,
Login,
LoginCommand,
DockerCommand,
PreviewUrl,
ExternalIp
} from '../../../../../src/components/Docs/Telepresence';
import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';
import QSCards from './qs-cards';
import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence';

# Telepresence Quick Start

<div class="docs-article-toc">
<h3>Contents</h3>

* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
* [3. Set up your local development environment](#3-set-up-your-local-development-environment)
* [4. Testing our fix](#4-testing-our-fix)
* [5. Preview URLs](#5-preview-urls)
* [6. How/Why does this all work](#6-howwhy-does-this-all-work)
* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next)

</div>
+ +In this guide, we'll give you a hands-on tutorial with [Telepresence](/products/telepresence/). To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+
If the Docker engine is not running, the command will fail and you will see `docker: unknown server OS` in your terminal. +
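
The platform tabs above render the exact `docker run` command for your OS. As a rough sketch of what such a command looks like (the `<demo-image>` name below is a placeholder, not the real demo image), the container publishes the two ports called out above:

```
# Illustrative sketch only: <demo-image> is a placeholder for the demo image
# named in the rendered docs. Port 8080 serves the local Emojivoto copy and
# port 8083 serves the embedded tooling/leaderboard, so both are published.
docker run --rm -it -p 8080:8080 -p 8083:8083 <demo-image>
```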

2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.

3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!


  Congratulations! You have successfully set up a local development environment and tested the fix locally.


## 4. Testing our fix

A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.

1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
   `telepresence intercept web --port 8080`

   When prompted for ingress configuration, all default values should be correct as displayed below.


  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!


## 5. Preview URLs

Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs enable this easy collaboration.

1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.


Now you're able to share the fix running in your local environment with your team!


  To get more information regarding Preview URLs and intercepts, visit Ambassador Cloud.
+ +## 6. How/Why does this all work? + +[Telepresence](../qs-go/) works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development. + +Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration. + +Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or services running locally on a developer’s machine. + +Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + +## What's Next? + + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.9/quick-start/demo-react.md b/docs/telepresence/2.9/quick-start/demo-react.md new file mode 100644 index 000000000..2312dbbbc --- /dev/null +++ b/docs/telepresence/2.9/quick-start/demo-react.md @@ -0,0 +1,257 @@ +--- +description: "Telepresence Quick Start - React. In this guide we'll give you everything you need in a preconfigured demo cluster: the Telepresence CLI, a config file for..." +--- + +import Alert from '@material-ui/lab/Alert'; +import QSCards26 from './qs-cards'; +import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo'; +import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start - React + +
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +In this guide we'll give you **everything you need in a preconfigured demo cluster:** the [Telepresence](/products/telepresence/) CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally. + + + While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer. + + + + +## 1. Download the demo cluster archive + +1. + +2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files). + + + This step will also install some dependency packages onto your laptop using npm, you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json. + + + ``` + cd ~/Downloads + unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster + cd ambassador-demo-cluster + ./install.sh + # type y to install the npm dependencies when asked + ``` + +3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes, you should see a single node named `tpdemo-prod-...`: + `kubectl get nodes` + + ``` + $ kubectl get nodes + + NAME STATUS ROLES AGE VERSION + tpdemo-prod-1234 Ready control-plane,master 5d10h v1.20.2+k3s1 + ``` + +4. Confirm that the Telepresence CLI is now installed (we expect to see the daemons are not running yet): +`telepresence status` + + ``` + $ telepresence status + + Root Daemon: Not running + User Daemon: Not running + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command. + + + + You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal! + + +## 2. Test Telepresence + +[Telepresence](../../reference/client/login/) connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster (this requires **root** privileges and will ask for your password): +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Set up the sample application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + +1. Clone the emojivoto app: +`git clone https://github.com/datawire/emojivoto.git` + +1. Deploy the app to your cluster: +`kubectl apply -k emojivoto/kustomize/deployment` + +1. Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. 
List the Services:
`kubectl get svc`

   ```
   $ kubectl get svc

   NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
   emoji-svc    ClusterIP   10.43.162.236   <none>        8080/TCP,8801/TCP   29s
   voting-svc   ClusterIP   10.43.51.201    <none>        8080/TCP,8801/TCP   29s
   web-app      ClusterIP   10.43.242.240   <none>        80/TCP              29s
   web-svc      ClusterIP   10.43.182.119   <none>        8080/TCP            29s
   ```

1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace-qualified DNS name in the form `service.namespace`.


  Congratulations, you can now access services running in your cluster by name from your laptop!


## 4. Test app

1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes.

1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and notice that the leaderboard does not actually update. An error is also shown in the browser dev console:
`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)`


  Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+ +The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed. + +## 5. Run a service on your laptop + +Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of a code change to the service. + +1. **In a new terminal window**, change into the repo directory and build the application: + + `cd /emojivoto` + `make web-app-local` + + ``` + $ make web-app-local + + ... + webpack 5.34.0 compiled successfully in 4326 ms + ✨ Done in 5.38s. + ``` + +2. Change into the service's code directory and start the server: + + `cd emojivoto-web-app` + `yarn webpack serve` + + ``` + $ yarn webpack serve + + ... + ℹ 「wds」: Project is running at http://localhost:8080/ + ... + ℹ 「wdm」: Compiled successfully. + ``` + +4. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 is generating the same error as the application deployed in the cluster. + + + Victory, your local React server is running a-ok! + + +## 6. Make a code change +We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩. + +1. In the terminal running webpack, stop the server with `Ctrl+c`. + +1. In your preferred editor open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149). + +1. Run webpack to fully recompile the code then start the server again: + + `yarn webpack` + `yarn webpack serve` + +1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error instead, improving the user experience. + +## 7. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead. + + + This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only will apply to that terminal session. + + +1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port: +`telepresence intercept web-app --namespace emojivoto --port 8080` + + ``` + $ telepresence intercept web-app --namespace emojivoto --port 8080 + + Using deployment web-app + intercepted + Intercept name: web-app-emojivoto + State : ACTIVE + ... + ``` + +2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user. + + + The web-app Deployment is being intercepted and rerouted to the server on your laptop! + + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/2.9/quick-start/go.md b/docs/telepresence/2.9/quick-start/go.md new file mode 100644 index 000000000..c926d7b05 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/go.md @@ -0,0 +1,190 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + +

## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services.
Test out the application:

1. Go to the Emojivoto webapp and vote for some emojis.

   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.


2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.

## 3. Run the Docker container

The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.

1. Run the Docker container locally by running this command inside your local terminal:


2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:

   ```
   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

   Resolved method descriptor:
   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

   Request metadata to send:
   (empty)

   Response headers received:
   (empty)

   Response trailers received:
   content-type: application/grpc
   Sent 0 requests and received 0 responses
   ERROR:
     Code: Unknown
     Message: ERROR
   ```

3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.

   ```go
   3 import (
   4 "context"
   5 "fmt" // DELETE THIS LINE
   6
   7 pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
   ```

   and replace line `21`:

   ```go
   20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
   21 return nil, fmt.Errorf("ERROR")
   22 }
   ```
   with
   ```go
   20 func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
   21 return pS.vote(":doughnut:")
   22 }
   ```
   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac, or `Menu -> File -> Save`) and verify that the error is fixed:

   ```
   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut

   Resolved method descriptor:
   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );

   Request metadata to send:
   (empty)

   Response headers received:
   content-type: application/grpc

   Response contents:
   {
   }

   Response trailers received:
   (empty)
   Sent 0 requests and received 1 response
   ```

## 4. Telepresence intercept

1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic to this service and route it through your local copy.
Run the following command inside the container:

   ```
   $ telepresence intercept voting --port 8081:8080

   Using Deployment voting
   intercepted
   Intercept name : voting
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:8081
   Service Port Identifier: 8080
   Volume Mount Point : /tmp/telfs-XXXXXXXXX
   Intercepting : all TCP connections
   ```
   Now you can go back to the Emojivoto webapp and you'll see that voting for 🍩 works as expected.

You have created an intercept to tell Telepresence where to send traffic.
The `voting-svc` traffic is now routed to the local Dockerized version of the service. The intercept redirects *all the traffic* addressed to the `voting-svc` service in the cluster to the fixed copy running in your container.


  Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!


## 5. Telepresence intercept with a preview URL

Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately because you have total control over which traffic is handled by your service, all thanks to the preview URL.

1. First leave the current intercept:

   ```
   $ telepresence leave voting
   ```

2. Then log in to Telepresence:



3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below (a sketch of these prompts follows this list).



4. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.

5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
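
For reference on step 3 above, here is a rough sketch of the prompted flow, modeled on the ingress questions shown verbatim in the CLI quick starts in this documentation set; the defaults and the `<...>` values below are illustrative placeholders, not the demo cluster's exact answers:

```
$ telepresence intercept voting --port 8081:8080

To create a preview URL, telepresence needs to know how requests enter
your cluster. Please Select the ingress to use.

1/4: What's your ingress' IP address?
     You may use an IP address or a DNS name (this is usually a
     "service.namespace" DNS name).

     [default: ...]: <your-ingress-service.namespace>

2/4: What's your ingress' TCP port number?

     [default: 80]:

3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?

     [default: n]:

4/4: If required by your ingress, specify a different hostname
     (TLS-SNI, HTTP "Host" header) to be used in requests.

     [default: ...]:
```

Answering these prompts creates the personal intercept and prints the preview URL used in the following steps.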
+ +## What's Next? + +You've intercepted a service in one of our demo clusters, now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)! diff --git a/docs/telepresence/2.9/quick-start/index.md b/docs/telepresence/2.9/quick-start/index.md new file mode 100644 index 000000000..e0d26fa9e --- /dev/null +++ b/docs/telepresence/2.9/quick-start/index.md @@ -0,0 +1,7 @@ +--- +description: Telepresence Quick Start. +--- + +import NewTelepresenceQuickStartLanding from './TelepresenceQuickStartLanding' + + diff --git a/docs/telepresence/2.9/quick-start/qs-cards.js b/docs/telepresence/2.9/quick-start/qs-cards.js new file mode 100644 index 000000000..5b68aa4ae --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-cards.js @@ -0,0 +1,71 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+


      Collaborating


      Use preview URLs to collaborate with your colleagues and others
      outside of your organization.




      Outbound Sessions


      While connected to the cluster, your laptop can interact with
      services as if it were another pod in the cluster.




      FAQs


      Learn more about use cases and the technical implementation of
      Telepresence.


+ ); +} diff --git a/docs/telepresence/2.9/quick-start/qs-go.md b/docs/telepresence/2.9/quick-start/qs-go.md new file mode 100644 index 000000000..2e140f6a7 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Telepresence Quick Start Go. You will need kubectl or oc installed and set up (Linux / macOS / Windows) to use a Kubernetes cluster, preferably an empty." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-alttelepresence-logo--whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used [Telepresence](/products/telepresence/) previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to support auto-reloading of the Go server, which we'll use later. Ensure it is installed by running:
   `go get github.com/pilu/fresh`
   Then start the Go server:
   `$GOPATH/bin/fresh`

   ```
   $ go get github.com/pilu/fresh

   $ $GOPATH/bin/fresh

   ...
   10:23:41 app | Welcome to the DataProcessingGoService!
   ```


   Install Go from here and set your GOPATH if needed.


4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```


   Victory, your local Go server is running a-ok!


## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State : ACTIVE
   Workload kind : Deployment
   Destination : 127.0.0.1:3000
   Intercepting : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.


   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!


## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.


   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.9/quick-start/qs-java.md b/docs/telepresence/2.9/quick-start/qs-java.md new file mode 100644 index 000000000..9056d61cd --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Telepresence Quick Start - Java. This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc command." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.9/quick-start/qs-node.md b/docs/telepresence/2.9/quick-start/qs-node.md new file mode 100644 index 000000000..d4282240f --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Telepresence Quick Start Node.js. This document uses kubectl in all example commands. OpenShift users should have no problem substituting in the oc command..." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
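+
+If you prefer the command line, you can also verify the change with the same curl from step 4; since the Node server reloaded your edit, it should now return the new color:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```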
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.9/quick-start/qs-python-fastapi.md b/docs/telepresence/2.9/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..dacfd9f25 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Telepresence Quick Start - Python (FastAPI) You need kubectl or oc installed & set up (Linux/macOS/Windows) to use Kubernetes cluster, preferably an empty test." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
+Python 2.x: `pip install fastapi uvicorn requests && python app.py` +Python 3.x: `pip3 install fastapi uvicorn requests && python3 app.py` + + ``` + $ pip install fastapi uvicorn requests && python app.py + + Collecting fastapi + ... + Application startup complete. + + ``` + + Install Python from here if needed. + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local service is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
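+
+If you prefer the command line, you can also verify the change with the same curl from step 4; since the Python server reloaded your edit, it should now return the new color:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```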
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/2.9/quick-start/qs-python.md b/docs/telepresence/2.9/quick-start/qs-python.md new file mode 100644 index 000000000..02ad7de97 --- /dev/null +++ b/docs/telepresence/2.9/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Telepresence Quick Start - Python (Flask). This document uses kubectl in all example commands, but OpenShift users should have no problem substituting in the oc." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multiservice application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
+Python 2.x: `pip install flask requests && python app.py`
+Python 3.x: `pip3 install flask requests && python3 app.py`
+
+   ```
+   $ pip install flask requests && python app.py
+
+   Collecting flask
+   ...
+   Welcome to the DataServiceProcessingPythonService!
+   ...
+
+   ```
+
+   Install Python from here if needed.
+
+4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
+`curl localhost:3000/color`
+
+   ```
+   $ curl localhost:3000/color
+
+   "blue"
+   ```
+
+
+   Victory, your local Python server is running a-ok!
+
+
+## 5. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   intercepted
+   Intercept name: dataprocessingservice
+   State : ACTIVE
+   Workload kind : Deployment
+   Destination : 127.0.0.1:3000
+   Intercepting : all TCP connections
+   ```
+
+2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.
+
+
+   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.
+
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.
+
+
+   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
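+
+If you prefer the command line, you can also verify the change with the same curl from step 4; since the Python server reloaded your edit, it should now return the new color:
+
+```
+$ curl localhost:3000/color
+
+"orange"
+```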
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/2.9/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/2.9/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/2.9/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/2.9/redirects.yml b/docs/telepresence/2.9/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/2.9/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/2.9/reference/architecture.md b/docs/telepresence/2.9/reference/architecture.md new file mode 100644 index 000000000..8aa90b267 --- /dev/null +++ b/docs/telepresence/2.9/reference/architecture.md @@ -0,0 +1,102 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." 
+--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has Daemons that run on a developer's workstation and act as the main point of communication with the
+cluster's network, handling intercepted traffic on its way to and from the cluster.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this Daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm Chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment and, if it is missing, attempts to install it using its embedded Helm Chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified at the Preview URL creation.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod <pod name>`.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+ +Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing users who have +accessed them and deleting them. + +## Pod-Daemon + +The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that +it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely +within the cluster with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed. + +The Pod-Daemon will take arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept +should be run on and to provide similar configuration that would be provided when using Telepresence intercepts from the command line. + +After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent Sidecar](#traffic-agent) +on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon +container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster. +The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually. + +The Pod-Daemon was created for use as a component of Deployment Previews in order to automatically create intercepts with development images built +by CI so that changes from pull requests can be quickly visualized in a live cluster before changes are landed by accessing the Preview URL +link which would be posted to an associated GitHub pull request when using Deployment Previews. + +See the [Deployment Previews quick-start](https://www.getambassador.io/docs/cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews +or for a reference on how Pod-Daemon can be manually deployed to the cluster. + + +# Changes from Service Preview + +Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an +annotation. This is no longer required as the Traffic Agent is automatically injected when an intercept is started. + +Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept +as this functionality has been moved to the Telepresence CLI. + +For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not +required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions in the cluster to add and modify deployments in the cluster. diff --git a/docs/telepresence/2.9/reference/client.md b/docs/telepresence/2.9/reference/client.md new file mode 100644 index 000000000..491dbbb8e --- /dev/null +++ b/docs/telepresence/2.9/reference/client.md @@ -0,0 +1,31 @@ +--- +description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Client reference + +The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `. + +## Commands + +A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones. 
+
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command | Description |
+| --- | --- |
+| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout` | Logs out of Ambassador Cloud |
+| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status` | Shows the current connectivity status |
+| `quit` | Tells the Telepresence daemons to quit |
+| `list` | Lists the current active intercepts |
+| `intercept` | Intercepts a service; run it with the name of the service to intercept and the port to proxy to your laptop: `telepresence intercept <service name> --port <TCP port>`. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
+| `leave` | Stops an active intercept: `telepresence leave hello` |
+| `preview` | Creates or removes [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create <intercept name>` |
+| `loglevel` | Temporarily changes the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs` | Gathers logs from the traffic-manager, traffic-agents, and user and root daemons, and exports them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version` | Shows the version of the Telepresence CLI + Traffic-Manager (if connected) |
+| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Gets the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
diff --git a/docs/telepresence/2.9/reference/client/login.md b/docs/telepresence/2.9/reference/client/login.md
new file mode 100644
index 000000000..199823718
--- /dev/null
+++ b/docs/telepresence/2.9/reference/client/login.md
@@ -0,0 +1,79 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud).
Unless the
+[`skipLogin` option](../../config) is set, other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to run
+`telepresence login` explicitly; you should only need to do so
+when you require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs an enhanced
+Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts
+from Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/.
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
+
+
+The Ambassador agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself; if you do, its API key will always be used.
+
+ ```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <name>-agent-cloud-token
+  namespace: <namespace>
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <your api key>
+EOF
+ ```
\ No newline at end of file
diff --git a/docs/telepresence/2.9/reference/client/login/apikey-2.png b/docs/telepresence/2.9/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/2.9/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/2.9/reference/client/login/apikey-3.png b/docs/telepresence/2.9/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/2.9/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/2.9/reference/client/login/apikey-4.png b/docs/telepresence/2.9/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/2.9/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/2.9/reference/cluster-config.md b/docs/telepresence/2.9/reference/cluster-config.md
new file mode 100644
index 000000000..57ab9f9c3
--- /dev/null
+++ b/docs/telepresence/2.9/reference/cluster-config.md
@@ -0,0 +1,369 @@
+import Alert from '@material-ui/lab/Alert';
+import { ClusterConfig, PaidPlansDisclaimer } from '../../../../../src/components/Docs/Telepresence';
+
+# Cluster-side configuration
+
+For the most part, Telepresence doesn't require any special
+configuration in the cluster and can be used right away in any
+cluster (as long as the user has adequate [RBAC permissions](../rbac)
+and the cluster's server version is `1.19.0` or higher).
+
+## Helm Chart configuration
+Some cluster-specific configuration can be provided when installing
+or upgrading the Telepresence cluster installation using Helm. Once
+installed, the Telepresence client will configure itself from values
+that it receives when connecting to the Traffic Manager.
+
+See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
+for a full list of available configuration settings.
+
+### Values
+To add configuration, create a YAML file with the configuration values and then use it when executing `telepresence helm install [--upgrade] --values <values file>`.
+
+## Client Configuration
+
+It is possible for the Traffic Manager to automatically push config to all
+connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).
+
+### Agent Configuration
+
+The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.
+
+#### Application Protocol Selection
+The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
+when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
+Valid values are:
+
+| Value | Resulting action |
+|--------------|------------------------------------------------------------------------------------------------------------------------------|
+| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default.
| +| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port | +| `http` | Telepresence will use http | +| `http2` | Telepresence will use http2 | + +When `portName` is used, Telepresence will determine the protocol by the name of the port: `[-suffix]`. The following protocols +are recognized: + +| Protocol | Meaning | +|----------|---------------------------------------| +| `http` | Plaintext HTTP/1.1 traffic | +| `http2` | Plaintext HTTP/2 traffic | +| `https` | TLS Encrypted HTTP (1.1 or 2) traffic | +| `grpc` | Same as http2 | + +The application protocol strategy can also be configured on a workstation. See [Intercepts](../config/#intercept) for more info. + +#### Envoy Configuration + +The `agent.envoy` structure contains three values: + +| Setting | Meaning | +|--------------|----------------------------------------------------------| +| `logLevel` | Log level used by the Envoy proxy. Defaults to "warning" | +| `serverPort` | Port used by the Envoy server. Default 18000. | +| `adminPort` | Port used for Envoy administration. Default 19000. | + +#### Image Configuration + +The `agent.image` structure contains the following values: + +| Setting | Meaning | +|------------|-----------------------------------------------------------------------------| +| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". | +| `name` | The name of the image. Retrieved from Ambassador Cloud if not set. | +| `tag` | The tag of the image. Retrieved from Ambassador Cloud if not set. | + +#### Log level + +The `agent.LogLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info. + +#### Resources + +The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers. + +## TLS + +In this example, other applications in the cluster expect to speak TLS to your +intercepted application (perhaps you're using a service-mesh that does +mTLS). + +In order to use `--mechanism=http` (or any features that imply +`--mechanism=http`) you need to tell Telepresence about the TLS +certificates in use. + +Tell Telepresence about the certificates in use by adjusting your +[workload's](../intercepts/#supported-workloads) Pod template to set a couple of +annotations on the intercepted Pods: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret" # optional ++ "getambassador.io/inject-originating-tls-secret": "your-originating-secret" # optional + spec: ++ serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets" + containers: +``` + +- The `getambassador.io/inject-terminating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS server + certificate to use for decrypting and responding to incoming + requests. + + When Telepresence modifies the Service and workload port + definitions to point at the Telepresence Agent sidecar's port + instead of your application's actual port, the sidecar will use this + certificate to terminate TLS. + +- The `getambassador.io/inject-originating-tls-secret` annotation + (optional) names the Kubernetes Secret that contains the TLS + client certificate to use for communicating with your application. 
  You will need to set this if your application expects incoming
  requests to speak TLS (for example, your
  code expects to handle mTLS itself instead of letting a service-mesh
  sidecar handle mTLS for it, or the port definition that Telepresence
  modified pointed at the service-mesh sidecar instead of at your
  application).

  If you do set this, you should set it to the
  same client certificate Secret that you configure the Ambassador
  Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace
as the Pod.

The Pod will need to have permission to `get` and `watch` each of
those Secrets.

Telepresence understands `type: kubernetes.io/tls` Secrets and
`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

<PaidPlansDisclaimer />

If your cluster is on an isolated network such that it cannot
communicate with Ambassador Cloud, then some additional configuration
is required to acquire a license key in order to use personal
intercepts.

### Create a license

1. <ClusterConfig />

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your
kubeconfig context is using the cluster you want to create a license for, then
run this command to generate the Cluster ID:

    ```
    $ telepresence current-cluster-id

      Cluster ID: <some UID>
    ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster
There are two separate ways you can add the license to your cluster: manually creating and deploying
the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1. Use this command to generate a Kubernetes Secret config using the license file:

    ```
    $ telepresence license -f <downloaded-license-file>

      apiVersion: v1
      data:
        hostDomain: <long_string>
        license: <longer_string>
      kind: Secret
      metadata:
        creationTimestamp: null
        name: systema-license
        namespace: ambassador
    ```

2. Save the output as a YAML file and apply it to your
cluster with `kubectl`.

3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
the following into a file (for the example we'll assume it's called license-values.yaml):

    ```
    licenseKey:
      # This mounts the secret into the traffic-manager
      create: true
      secret:
        # This tells the helm chart not to create the secret since you've created it yourself
        create: false
    ```

4. Install the Helm chart into the cluster:

    ```
    telepresence helm install -f license-values.yaml
    ```

5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
pulled and in a registry your cluster can pull from.

6. Have users use the `images` [config key](../config/#images) so that Telepresence uses the aforementioned image for their agent.

#### Helm chart manages the secret

1. Get the jwt token from the downloaded license file:

    ```
    $ cat ~/Downloads/ambassador.License_for_yourcluster
    eyJhbGnotarealtoken.butanexample
    ```

2. Create the following values file, substituting your real jwt token in for the one used in the example below.
(for this example we'll assume the following is placed in a file called license-values.yaml)

    ```
    licenseKey:
      # This mounts the secret into the traffic-manager
      create: true
      # This is the value from the license file you download. this value is an example and will not work
      value: eyJhbGnotarealtoken.butanexample
      secret:
        # This tells the helm chart to create the secret
        create: true
    ```

3. Install the Helm chart into the cluster:

    ```
    telepresence helm install -f license-values.yaml
    ```

Users will now be able to use personal intercepts with the
`--preview-url=false` flag. Even with the license key, preview URLs
cannot be used without enabling direct communication with Ambassador
Cloud, as Ambassador Cloud is essential to their operation.

If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.

Have clients use the [skipLogin](../config/#cloud) key to ensure the CLI knows it is operating in an
air-gapped environment.

## Mutating Webhook

Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
and in sync as far as GitOps workflows (such as ArgoCD) are concerned.

The injection will happen on demand the first time an attempt is made to intercept the workload.

If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
annotation to your workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: disabled
     spec:
       containers:
```

### Service Name and Port Annotations

Telepresence will automatically find all services and all ports that will connect to a workload and make them available
for an intercept, but you can explicitly define that only one service and/or port can be intercepted.

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
+        telepresence.getambassador.io/inject-service-name: my-service
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```

### Ignore Certain Volume Mounts

An annotation `telepresence.getambassador.io/inject-ignore-volume-mounts` can be used to make the injector ignore certain volume mounts denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.

```diff
 spec:
   template:
     metadata:
       annotations:
+        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
     spec:
       containers:
```

### Note on Numeric Ports

If the targetPort of your intercepted service is pointing at a port number, in addition to
injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.

<Alert severity="info">
Note that this initContainer requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
</Alert>
If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).

For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: ClusterIP
  selector:
    service: your-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  labels:
    service: your-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: your-service
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
      labels:
        service: your-service
    spec:
      containers:
        - name: your-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
```
diff --git a/docs/telepresence/2.9/reference/config.md b/docs/telepresence/2.9/reference/config.md
new file mode 100644
index 000000000..e69c77daa
--- /dev/null
+++ b/docs/telepresence/2.9/reference/config.md
@@ -0,0 +1,349 @@
# Laptop-side configuration

There are a number of configuration values that can be tweaked to change how Telepresence behaves.
These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
One important exception is the location of the Traffic Manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect.

## Global Configuration

Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.

### Values

The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.

Here is an example configuration to show you the conventions of how Telepresence is configured:
**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**

```yaml
client:
  timeouts:
    agentInstall: 1m
    intercept: 10s
  logLevels:
    userDaemon: debug
  images:
    registry: privateRepo # This overrides the default docker.io/datawire repo
    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
  cloud:
    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
  grpc:
    maxReceiveSize: 10Mi
  telepresenceAPI:
    port: 9980
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    lookupTimeout: 30s
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
    neverProxySubnets:
      - 1.2.3.4/32
```

#### Timeouts

Values for `client.timeouts` are all durations either as a number of seconds
or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
can be fractional (`1.5h`) or combined (`2h45m`).
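For instance, here is a small sketch of the accepted duration formats in a Helm values file (the specific fields chosen are only illustrations; any field from the table below accepts any of these formats):

```yaml
client:
  timeouts:
    intercept: 10        # plain number of seconds
    proxyDial: 10s       # unit suffix
    agentInstall: 1.5m   # fractional duration string
    helm: 2h45m          # combined duration string
```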
These are the valid fields for the `timeouts` key:

| Field                   | Description                                                                        | Type                                                                                                     | Default    |
|-------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|------------|
| `agentInstall`          | Waiting for Traffic Agent to be installed                                           | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 2 minutes  |
| `apply`                 | Waiting for a Kubernetes manifest to be applied                                     | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 1 minute   |
| `clusterConnect`        | Waiting for cluster to be connected                                                 | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 20 seconds |
| `intercept`             | Waiting for an intercept to become active                                           | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 5 seconds  |
| `proxyDial`             | Waiting for an outbound connection to be established                                | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 5 seconds  |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards                    | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 20 seconds |
| `trafficManagerAPI`     | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful  | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 15 seconds |
| `helm`                  | Waiting for Helm operations (e.g. `install`) on the Traffic Manager                 | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str]     | 2 minutes  |

#### Log Levels

Values for the `client.logLevels` fields are one of the following strings,
case-insensitive:

 - `trace`
 - `debug`
 - `info`
 - `warning` or `warn`
 - `error`

For whichever log-level you select, you will get logs labeled with that level and of higher severity.
(For example, if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`.)

These are the valid fields for the `client.logLevels` key:

| Field        | Description                                                          | Type                                        | Default |
|--------------|----------------------------------------------------------------------|---------------------------------------------|---------|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log)  | [loglevel][logrus-level] [string][yaml-str] | debug   |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log)    | [loglevel][logrus-level] [string][yaml-str] | info    |

#### Images
Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
from being overridden by a client's config and use the [mutating-webhook](../cluster-config/#mutating-webhook)
to handle installation of the `traffic-agents`.
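For illustration, a hedged sketch of what a private-registry override might look like in the Helm `client` values (the registry host and tag below are assumptions, not real defaults):

```yaml
client:
  images:
    registry: registry.example.com/telepresence  # assumed private registry
    agentImage: tel2:2.9.0                       # assumed pinned agent image tag
```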
These are the valid fields for the `client.images` key:

| Field               | Description                                                                                                                                                                                                                                                                                                                                                                                   | Type                                                | Default              |
|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|----------------------|
| `registry`          | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept.     | Docker registry name [string][yaml-str]             | `docker.io/datawire` |
| `agentImage`        | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.*                                                                                                                                           | qualified Docker image name [string][yaml-str]      | (unset)              |
| `webhookRegistry`   | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.*                                                                                                                                                                                               | Docker registry name [string][yaml-str]             | `docker.io/datawire` |
| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.*                                                                                                                                                  | non-qualified Docker image name [string][yaml-str]  | (unset)              |

#### Cloud
Values for `client.cloud` are listed below and their type varies, so please see the chart for the expected type for each config value.
These fields control how the client interacts with the Cloud service.

| Field             | Description                                                                                                                                                                                                                                  | Type                                        | Default              |
|-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|----------------------|
| `skipLogin`       | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster.                          | [bool][yaml-bool]                           | false                |
| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config.     | [duration][go-duration] [string][yaml-str]  | 168h                 |
| `systemaHost`     | The host used to communicate with Ambassador Cloud                                                                                                                                                                                               | [string][yaml-str]                          | app.getambassador.io |
| `systemaPort`     | The port used with `systemaHost` to communicate with Ambassador Cloud                                                                                                                                                                            | [string][yaml-str]                          | 443                  |

Telepresence attempts to auto-detect if the cluster is capable of
communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with
Ambassador Cloud, Telepresence may still prompt you to log in. If you want those auto-login points to be disabled
as well, or would like it to not attempt to communicate with
Ambassador Cloud at all (even for the auto-detection), then be sure to
set the `skipLogin` value to `true`.

Reminder: To use personal intercepts, which normally require a login,
you must have a license key in your cluster and specify which
`agentImage` should be installed by also adding the following to your
`config.yml`:

```yaml
images:
  agentImage: <registry>/<image>
```

#### Grpc
The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server
The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### Intercept
The `intercept` settings control how Telepresence will intercept communications to the intercepted service.

The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".

The `appProtocolStrategy` is only relevant when using personal intercepts. This controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:

| Value        | Resulting action                                                                                         |
|--------------|------------------------------------------------------------------------------------------------------------|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2      |
| `portName`   | Telepresence will make an educated guess about the protocol based on the name of the service port           |
| `http`       | Telepresence will use http                                                                                  |
| `http2`      | Telepresence will use http2                                                                                 |

When `portName` is used, Telepresence will determine the protocol by the name of the port: `<name>[-suffix]`.
The following protocols are recognized:

| Protocol | Meaning                               |
|----------|---------------------------------------|
| `http`   | Plaintext HTTP/1.1 traffic            |
| `http2`  | Plaintext HTTP/2 traffic              |
| `https`  | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc`   | Same as http2                         |

#### Daemons

`client.daemons` controls which binary to use for the user daemon. By default it will
use the Telepresence binary. For example, this can be used to tell Telepresence to
use the Telepresence Pro binary.

### DNS

The fields for `client.dns` are: `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.

| Field             | Description                                                                                                                                                        | Type                                        | Default                                            |
|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|----------------------------------------------------|
| `localIP`         | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved.                                         | IP address [string][yaml-str]               | first `nameserver` mentioned in `/etc/resolv.conf` |
| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver). Can be globally configured in the Helm chart.                    | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]`  |
| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart.      | [sequence][yaml-seq] of [strings][yaml-str] | `[]`                                               |
| `lookupTimeout`   | Maximum time to wait for a cluster side host lookup.                                                                                                                    | [duration][go-duration] [string][yaml-str]  | 4 seconds                                          |

Here is an example values.yaml:
```yaml
client:
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    localIP: 8.8.8.8
    lookupTimeout: 30s
```

### Routing

#### AlsoProxySubnets

When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
All connections to addresses that the subnet spans will be dispatched to the cluster.

Here is an example values.yaml for the subnet `1.2.3.4/32`:
```yaml
client:
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
```

#### NeverProxySubnets

When using `neverProxySubnets` you provide a list of subnets. These will never be routed via the TUN device,
even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
telepresence connects is the route they will keep.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    neverProxySubnets:
      - 1.2.3.4/32
```

#### Using AlsoProxy together with NeverProxy

Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
Usually this means that the most specific route will win.

So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:

```yaml
neverProxySubnets: [10.0.0.0/16]
alsoProxySubnets: [10.0.5.0/24]
```

Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:

```yaml
alsoProxySubnets: [10.0.0.0/16]
neverProxySubnets: [10.0.5.0/24]
```

Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.

## Local Overrides

In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.

### Config for all clusters
Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level path file will take precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
The definitions of these values are identical to those values in the `client` config above.

Here is an example configuration to show you the conventions of how Telepresence is configured:
**note: This config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist**

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```

## Workstation Per-Cluster Configuration

Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file.
It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.

### Values

The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
Example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
        dns:
          include-suffixes: [.private]
          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
          local-ip: 8.8.8.8
          lookup-timeout: 30s
        never-proxy: [10.0.0.0/16]
        also-proxy: [10.0.5.0/24]
  name: example-cluster
```

#### Manager

This is the one cluster configuration that cannot be set using the Helm chart because it defines how Telepresence connects to
the Traffic Manager. When not default, that setting needs to be configured in the workstation's kubeconfig for the cluster.

The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
  - cluster:
      server: https://127.0.0.1
      extensions:
        - name: telepresence.io
          extension:
            manager:
              namespace: staging
    name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45
diff --git a/docs/telepresence/2.9/reference/dns.md b/docs/telepresence/2.9/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/2.9/reference/dns.md
@@ -0,0 +1,80 @@
# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service-name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<namespace>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster public IP>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected as Telepresence cannot reach the service yet by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Using the namespace-qualified DNS name though does work.
Now we'll start an intercept against another service in the same namespace.
Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Now curling that service by its short name works, and will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

### Supported Query Types

The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
`MX`, `NS`, `PTR`, `SRV`, and `TXT`.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/2.9/reference/docker-run.md b/docs/telepresence/2.9/reference/docker-run.md
new file mode 100644
index 000000000..8aa7852e5
--- /dev/null
+++ b/docs/telepresence/2.9/reference/docker-run.md
@@ -0,0 +1,31 @@
---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept <service_name> --port <port> --docker-run -- <docker run arguments> <image>`

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.

`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`

## Ports

The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--dns-search tel2-search` Enables single label name lookups in intercepted namespaces
- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<intercept name>-<intercept namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-p <port:container-port>` The local port for the intercept and the container port
- `-v <local mount dir:docker mount dir>` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info
diff --git a/docs/telepresence/2.9/reference/environment.md b/docs/telepresence/2.9/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/2.9/reference/environment.md
@@ -0,0 +1,46 @@
---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the code running on your laptop for the service being intercepted.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" http header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This is to ensure that messages belonging to a certain intercept are always consumed by the intercepting process.
diff --git a/docs/telepresence/2.9/reference/inside-container.md b/docs/telepresence/2.9/reference/inside-container.md
new file mode 100644
index 000000000..637e0cdfd
--- /dev/null
+++ b/docs/telepresence/2.9/reference/inside-container.md
@@ -0,0 +1,37 @@
# Running Telepresence inside a container

It is sometimes desirable to run [Telepresence](/products/telepresence/) inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.

## Building the container

Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies.
Add the following to a `Dockerfile`:

```Dockerfile
# Dockerfile with telepresence and its prerequisites
FROM alpine:3.13

# Install Telepresence prerequisites
RUN apk add --no-cache curl iproute2 sshfs

# Download and install the telepresence binary
RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
```
In order to build the container, do this in the same directory as the `Dockerfile`:
```
$ docker build -t tp-in-docker .
```

## Running the container

Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.

The command to run the container can look like this:
```bash
$ docker run \
    --cap-add=NET_ADMIN \
    --device /dev/net/tun:/dev/net/tun \
    --network=host \
    -v ~/.kube/config:/root/.kube/config \
    -it --rm tp-in-docker
```
diff --git a/docs/telepresence/2.9/reference/intercepts/index.md b/docs/telepresence/2.9/reference/intercepts/index.md
new file mode 100644
index 000000000..08e40a60d
--- /dev/null
+++ b/docs/telepresence/2.9/reference/intercepts/index.md
@@ -0,0 +1,403 @@
import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, Telepresence installs a *traffic-agent*
sidecar into the workload. That traffic-agent supports one or more
intercept *mechanisms* that it uses to decide which traffic to
intercept. Telepresence has a simple default traffic-agent, however
you can configure a different traffic-agent with more sophisticated
mechanisms either by setting the [`images.agentImage` field in
`config.yml`](../config/#images) or by writing an
[`extensions/${extension}.yml`][extensions] file that tells
Telepresence about a traffic-agent that it can use, what mechanisms
that traffic-agent supports, and command-line flags to expose to the
user to configure that mechanism. You may tell Telepresence which
known mechanism to use with the `--mechanism=${mechanism}` flag or by
setting one of the `--${mechanism}-XXX` flags, which implicitly set
the mechanism; for example, setting `--http-header=auto` implicitly
sets `--mechanism=http`.

The default open-source traffic-agent only supports the `tcp`
mechanism, which treats the raw layer 4 TCP streams as opaque and
sends all of that traffic down to the developer's workstation. This
means that it is a "global" intercept, affecting all users of the
cluster.

In addition to the default open-source traffic-agent, Telepresence
already knows about the [Ambassador Cloud
traffic-agent][ambassador-agent], which supports the `http`
mechanism. The `http` mechanism operates at a higher layer, working
with layer 7 HTTP, and may intercept specific HTTP requests, allowing
other HTTP requests through to the regular service. This allows for
"personal" intercepts which only intercept traffic tagged as belonging
to a given developer.
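As a hedged sketch of the difference, the first command below creates a "global" `tcp` intercept, while the second uses the `http` mechanism to create a "personal" intercept that only captures requests carrying a chosen header (the service name, port, and header values here are hypothetical):

```shell
# Global intercept: all TCP traffic to the workload goes to the laptop.
telepresence intercept example-service --port 8080 --mechanism=tcp

# Personal intercept: only HTTP requests with this header are intercepted.
# (--http-header implicitly sets --mechanism=http.)
telepresence intercept example-service --port 8080 --http-header=x-dev=jane
```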
[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2/pkg/client/cli/extensions/builtin.go#L30-L50

## Intercept behavior when logged in to Ambassador Cloud

Logging in to Ambassador Cloud (with [`telepresence
login`](../client/login/)) changes the Telepresence defaults in two
ways.

First, being logged in to Ambassador Cloud causes Telepresence to
default to `--mechanism=http --http-header=auto --http-path-prefix=/`
(`--mechanism=http` is redundant; it is implied by the other `--http-xxx` flags).
If you hadn't been logged in it would have defaulted to
`--mechanism=tcp`. This tells Telepresence to use the Ambassador
Cloud traffic-agent to do smart "personal" intercepts and only
intercept a subset of HTTP requests, rather than just intercepting the
entirety of all TCP connections. This is important for working in a
shared cluster with teammates, and is important for the preview URL
functionality below. See `telepresence intercept --help` for
information on using the `--http-header` and `--http-path-xxx` flags to
customize which requests are intercepted.

Secondly, being logged in causes Telepresence to default to
`--preview-url=true`. If you hadn't been logged in it would have
defaulted to `--preview-url=false`. This tells Telepresence to take
advantage of Ambassador Cloud to create a preview URL for this
intercept, creating a shareable URL that automatically sets the
appropriate headers to have requests coming from the preview URL be
intercepted. In order to create the preview URL, it will prompt you
for four settings about how your cluster's ingress is configured. For
each, Telepresence tries to intelligently detect the correct value for
your cluster; if it detects it correctly, you may simply press "enter" and
accept the default, otherwise you must tell Telepresence the correct
value.

When creating an intercept with the `http` mechanism, the
traffic-agent sends a `GET /telepresence-http2-check` request to your
service and to the process running on your local machine at the port
specified in your intercept, in order to determine if they support
HTTP/2. This is required for the intercepts to behave correctly. If
you do not have a service running locally when the intercept is
created, the traffic-agent will use the result it got from checking
the in-cluster service.

## Supported workloads

Kubernetes has various
[workloads](https://kubernetes.io/docs/concepts/workloads/).
Currently, Telepresence supports intercepting (installing a
traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.

<Alert severity="info">

While many of our examples use Deployments, they would also work on
ReplicaSets and StatefulSets.

</Alert>

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the
`--namespace` option. When this option is used, and `--workload` is
not used, then the given name is interpreted as the name of the
workload and the name of the intercept will be constructed from that
name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept
`hello-myns`. In order to remove the intercept, you will need to run
`telepresence leave hello-myns` instead of just `telepresence leave
hello`.
The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is
being intercepted, see [this doc](../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command
will intercept all traffic bound to the service and proxy it to your
laptop. This includes traffic coming through your ingress controller,
so use this option carefully so as not to disrupt production
environments.

```shell
telepresence intercept <deployment name> --port=<TCP port>
```

If you *are* logged in to Ambassador Cloud, setting the
`--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <deployment name> --port=<TCP port> --preview-url=false
```

This will output an HTTP header that you can set on your request for
that traffic to be intercepted:

```console
$ telepresence intercept <deployment name> --port=<TCP port> --preview-url=false
Using Deployment <deployment name>
intercepted
    Intercept name: <full name of intercept>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<local TCP port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<uuid>:<full name of intercept>")
```

Run `telepresence status` to see the list of active intercepts.

```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster public IP>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop username>@<laptop name>
```

Finally, run `telepresence leave <name of intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag             | Description                                                       | Required |
|------------------|-------------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress                                    | yes      |
| `--ingress-port` | The port for the ingress                                          | yes      |
| `--ingress-tls`  | Whether TLS should be used                                        | no       |
| `--ingress-l5`   | Whether a different IP address should be used in request headers  | no       |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you
need to tell Telepresence which service port you are trying to
intercept. To specify, you can either use the name of the service
port or the port number itself. To see which options might be
available to you and your service, use kubectl to describe your
service or look in the object's YAML. For more information on multiple
ports, see the [Kubernetes documentation][kube-multi-port-services].
[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <base name of intercept> --port=<local port>:<service port identifier>
Using Deployment <name of deployment>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local port>
    Service Port Identifier: <service port identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the
service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create
a new intercept the same way you did above and it will change which
service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a
workload, so telepresence is able to auto-detect which service it
should intercept based on the workload you are trying to intercept.
But if you use something like
[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
two services (that use the same labels) to manage traffic between a
canary and a stable service.

Fortunately, if you know which service you want to use when
intercepting a workload, you can use the `--service` flag. So in the
aforementioned example, if you wanted to use the `echo-stable` service
when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generatedHash> --port <local port> --service echo-stable
Using ReplicaSet echo-rollout-<generatedHash>
intercepted
    Intercept name    : echo-rollout-<generatedHash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this
by creating more than one intercept that identifies the same workload using the `--workload` flag.

Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
targeting the same `multi-echo` deployment.

```console
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-http
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8080
    Service Port Identifier: http
    Volume Mount Point     : /tmp/telfs-893700837
    Intercepting           : all TCP requests
    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-grpc
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8443
    Service Port Identifier: grpc
    Volume Mount Point     : /tmp/telfs-1277723591
    Intercepting           : all TCP requests
    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application
container; they usually provide auxiliary functionality to an
application, and can usually be reached at
`localhost:${SIDECAR_PORT}`.
For example, a common use case for a
sidecar is to proxy requests to a database; your application would
connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
connect to the database, perhaps augmenting the connection with TLS or
authentication.

When intercepting a container that uses sidecars, you might want those
sidecars' ports to be available to your local application at
`localhost:${SIDECAR_PORT}`, exactly as they would be if running
in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <base name of intercept> --port=<local port>:<service port identifier> --to-pod=<sidecar port>
Using Deployment <name of deployment>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local port>
    Service Port Identifier: <service port identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the
flag (`--to-pod=<port1> --to-pod=<port2>`).

## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

<Alert severity="info">
This utilizes an initContainer that requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
</Alert>

<Alert severity="info">
This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
</Alert>

<Alert severity="info">
Intercepting headless services without a selector is not supported.
</Alert>

## Sharing intercepts with teammates

Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You
can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts/history),
picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name; when doing that,
the intercept command will be easily accessible for all your teammates. Note that this requires the free enhanced
client to be installed and to be logged in (`telepresence login`).
To instantiate an intercept based on a saved intercept, simply run
`telepresence intercept --use-saved-intercept <saved intercept name>`. When logged in, the command will first check for a
saved intercept in Ambassador Cloud and will use it if found, otherwise an error will be returned.

Saved Intercepts can be [managed through Ambassador Cloud](../../../../cloud/latest/telepresence-saved-intercepts/).
diff --git a/docs/telepresence/2.9/reference/intercepts/manual-agent.md b/docs/telepresence/2.9/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/2.9/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@
import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook)
will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such
as very strict company security policies preventing it.

<Alert severity="warning">
Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable.
Try the Mutating Webhook before proceeding.
</Alert>

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file have
the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service-config.yaml
agentImage: docker.io/datawire/tel2:2.7.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.7.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.7.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```

<Alert severity="info">
Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
</Alert>

### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the
modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You need to add the YAML you generated to the `Deployment` to include the container, the init-container, and the volumes. These are placed as elements
of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes` respectively.
You also need to modify `spec.template.metadata.annotations` and add the annotation
`telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following: + +```diff + apiVersion: apps/v1 + kind: Deployment + metadata: + name: "my-service" + labels: + service: my-service + spec: + replicas: 1 + selector: + matchLabels: + service: my-service + template: + metadata: + labels: + service: my-service ++ annotations: ++ telepresence.getambassador.io/manually-injected: "true" + spec: + containers: + - name: echo-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 + resources: {} ++ - args: ++ - agent ++ env: ++ - name: _TEL_AGENT_POD_IP ++ valueFrom: ++ fieldRef: ++ apiVersion: v1 ++ fieldPath: status.podIP ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: traffic-agent ++ ports: ++ - containerPort: 9900 ++ protocol: TCP ++ readinessProbe: ++ exec: ++ command: ++ - /bin/stat ++ - /tmp/agent/ready ++ resources: { } ++ volumeMounts: ++ - mountPath: /tel_pod_info ++ name: traffic-annotations ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ - mountPath: /tel_app_exports ++ name: export-volume ++ initContainers: ++ - args: ++ - agent-init ++ image: docker.io/datawire/tel2:2.7.0-beta.12 ++ name: tel-agent-init ++ resources: { } ++ securityContext: ++ capabilities: ++ add: ++ - NET_ADMIN ++ volumeMounts: ++ - mountPath: /etc/traffic-agent ++ name: traffic-config ++ volumes: ++ - downwardAPI: ++ items: ++ - fieldRef: ++ apiVersion: v1 ++ fieldPath: metadata.annotations ++ path: annotations ++ name: traffic-annotations ++ - configMap: ++ items: ++ - key: my-service ++ path: config.yaml ++ name: telepresence-agents ++ name: traffic-config ++ - emptyDir: { } ++ name: export-volume +``` diff --git a/docs/telepresence/2.9/reference/linkerd.md b/docs/telepresence/2.9/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/2.9/reference/linkerd.md @@ -0,0 +1,75 @@ +--- +Description: "How to get Linkerd meshed services working with Telepresence" +--- + +# Using Telepresence with Linkerd + +## Introduction +Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment: + +```yaml +spec: + template: + metadata: + annotations: + config.linkerd.io/skip-outbound-ports: "8081" +``` + +The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system. + +## Prerequisites +1. [Telepresence binary](../../install) +2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/) +3. Kubectl +4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2) + +## Deploy +Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template. + +```yaml +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: quote +spec: + replicas: 1 + selector: + matchLabels: + app: quote + strategy: + type: RollingUpdate + template: + metadata: + annotations: + linkerd.io/inject: "enabled" + config.linkerd.io/skip-outbound-ports: "8081,8022,6001" + labels: + app: quote + spec: + containers: + - name: backend + image: docker.io/datawire/quote:0.4.1 + ports: + - name: http + containerPort: 8000 + env: + - name: PORT + value: "8000" + resources: + limits: + cpu: "0.1" + memory: 100Mi +``` + +## Connect to Telepresence +Run `telepresence connect` to connect to the cluster. 
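If the connection succeeds, the output looks roughly like the following sketch (the context name and API endpoint below are illustrative; yours will differ):

```console
$ telepresence connect
Launching Telepresence Root Daemon
Launching Telepresence User Daemon
Connected to context my-context (https://<cluster-api-endpoint>)
```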
Then `telepresence list` should show the `quote` deployment as `ready to intercept`:

```
$ telepresence list

  quote: ready to intercept (traffic-agent not yet installed)
```

## Run the intercept
Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever you attempt to access the `quote` service. diff --git a/docs/telepresence/2.9/reference/rbac.md b/docs/telepresence/2.9/reference/rbac.md new file mode 100644 index 000000000..d78133441 --- /dev/null +++ b/docs/telepresence/2.9/reference/rbac.md @@ -0,0 +1,236 @@ +import Alert from '@material-ui/lab/Alert';

# Telepresence RBAC
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.

There are two general categories for cluster permissions with respect to Telepresence. There are RBAC settings for a User and for an Administrator, described below. The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.

In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.

## Requirements

- Kubernetes version 1.16+
- Cluster admin privileges to apply RBAC

## Editing your kubeconfig

This guide assumes that you are using a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g. John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster # Must match the cluster value in the contexts config
  cluster:
    ## The cluster field is highly cloud dependent.
contexts:
- name: my-context
  context:
    cluster: my-cluster # Must match the name field in the clusters config
    user: tp-user
users:
- name: tp-user # Must match the name of the Service Account created by the cluster admin
  user:
    token: <service-account-token> # See note below
```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the name format `<service-account-name>-token-<unique-id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to KUBECONFIG by running `export KUBECONFIG=$(pwd)/config.yaml`.
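Putting those steps together, the flow looks roughly like this on clusters that still auto-create Service Account token Secrets (the secret name is illustrative; it is generated by Kubernetes):

```console
$ kubectl get secret -n ambassador | grep tp-user   # find the generated token secret
$ kubectl get secret tp-user-token-xxxxx -n ambassador -o jsonpath='{.data.token}' | base64 -d
$ export KUBECONFIG=$(pwd)/config.yaml
```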
You should then be able to switch to this context by running `kubectl config use-context my-context`.

## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, and `MutatingWebhookConfigurations`, as well as for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

When you run `telepresence connect`, Telepresence uses your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
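If the Traffic Manager is not yet present, a cluster administrator can install it explicitly with the Helm-based installer before handing out these more limited permissions, for example:

```console
$ telepresence helm install
```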

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission
In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions
in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.
```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: tp-user # Should match the name and namespace of the ServiceAccount being granted access
    kind: ServiceAccount
    namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace only telepresence user access

This section covers RBAC for multi-tenant scenarios in which multiple dev teams share a single cluster and users are constrained to one or more specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a Cluster Administrator.

For each accessible namespace:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: tp-namespace # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace # Should be the same as metadata.namespace of above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
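As an optional sanity check (these commands are illustrative and not part of the setup itself), a cluster administrator can verify the resulting permissions by impersonating the Service Account; the Role above grants read-only access, so write verbs should be denied:

```console
$ kubectl auth can-i list deployments.apps -n tp-namespace \
    --as=system:serviceaccount:tp-namespace:tp-user
yes
$ kubectl auth can-i create deployments.apps -n tp-namespace \
    --as=system:serviceaccount:tp-namespace:tp-user
no
```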
diff --git a/docs/telepresence/2.9/reference/restapi.md b/docs/telepresence/2.9/reference/restapi.md new file mode 100644 index 000000000..4be1924a3 --- /dev/null +++ b/docs/telepresence/2.9/reference/restapi.md @@ -0,0 +1,93 @@ +# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in a `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:
1. An intercept must be active
2. The "/healthz" endpoint must respond with OK
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.

#### test endpoint using curl
Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.
```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` is not needed when the call is made from the pod.

### intercept-info
`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is omitted when `intercepted` is `false`.

#### test endpoint using curl
Assuming that the API-server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
``` diff --git a/docs/telepresence/2.9/reference/routing.md b/docs/telepresence/2.9/reference/routing.md new file mode 100644 index 000000000..cc88490a0 --- /dev/null +++ b/docs/telepresence/2.9/reference/routing.md @@ -0,0 +1,69 @@ +# Connection Routing

## Outbound

### DNS resolution
When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the
[cluster DNS configuration](../cluster-config/#dns).

#### Cluster side DNS lookups
The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of what container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Each of those files corresponds to a domain and contains the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns-and-routing), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS-records returned from the Telepresence resolver aren't cached (or are cached only for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may differ. On some systems it is also necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node` while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../cluster-config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP-address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for ensuring a correct connection origin. One is that it is important that the request originates from the correct namespace. Example:

```bash
curl some-host
```
results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.

#### DNS recursion
When a local cluster's DNS-resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager`, or a `traffic-agent`.

Telepresence handles this by sending one initial DNS-query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and respond with an NXNAME record. Telepresence performs this detection to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS-resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.

The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this would be accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

In 2.3.2, this changed so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.

##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
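If relying on the dummy-service trick or on node `podCIDR`s is undesirable in your cluster, the discovery method can be pinned through the Helm chart as described in the Subnets section above. A minimal sketch of such a values file follows; the exact set of accepted strategy values may vary between chart versions:

```yaml
# values.yaml -- pin how the traffic-manager discovers pod subnets.
# "coverPodIPs" derives the subnets from the IPs of the running pods
# instead of reading podCIDR from the Node resources.
podCIDRStrategy: coverPodIPs
```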

diff --git a/docs/telepresence/2.9/reference/tun-device.md b/docs/telepresence/2.9/reference/tun-device.md new file mode 100644 index 000000000..4410f6f3c --- /dev/null +++ b/docs/telepresence/2.9/reference/tun-device.md @@ -0,0 +1,27 @@ +# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device
The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP-packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP
The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).

### No SSH required

The VIF approach is somewhat similar to using `sshuttle` but without
any requirements for extra software, configuration or connections.
Using the VIF means that only one single connection needs to be
forwarded through the Kubernetes apiserver (à la `kubectl
port-forward`), using only one single port. There is no need for
`ssh` in the client nor for `sshd` in the traffic-manager. This also
means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption
When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.

### No Firewall rules
With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up. diff --git a/docs/telepresence/2.9/reference/volume.md b/docs/telepresence/2.9/reference/volume.md new file mode 100644 index 000000000..82df9cafa --- /dev/null +++ b/docs/telepresence/2.9/reference/volume.md @@ -0,0 +1,36 @@ +# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+ +``` +telepresence intercept --port --mount=/tmp/ -- /bin/bash +``` + +In this case, Telepresence creates the intercept, mounts the Pod's volumes to locally to `/tmp`, and starts a Bash subshell. + +Telepresence can set a random mount point for you by using `--mount=true` instead, you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable. + +``` +$ telepresence intercept --port --mount=true -- /bin/bash +Using Deployment +intercepted + Intercept name : + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1: + Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 + Intercepting : all TCP connections + +bash-3.2$ echo $TELEPRESENCE_ROOT +/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784 +``` + +--mount=true is the default if a mount option is not specified, use --mount=false to disable mounting volumes. + +With either method, the code you run locally either from the subshell or from the intercept command will need to be prepended with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes. + +For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`. + +If using --mount=true without a command, you can use either environment variable flag to retrieve the variable. diff --git a/docs/telepresence/2.9/reference/vpn.md b/docs/telepresence/2.9/reference/vpn.md new file mode 100644 index 000000000..91213babc --- /dev/null +++ b/docs/telepresence/2.9/reference/vpn.md @@ -0,0 +1,155 @@ + +
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and [Telepresence](/products/telepresence/). + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + +![Tunnelblick](../images/tunnelblick.png) + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + +![Modify client VPN Endpoint](../images/split-tunnel.png) + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? 
It may be useful to add the contents of /etc/resolv.conf
```

#### Interpreting test results

##### Case 1: VPN masked by cluster

In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN
routes:

```
❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
 * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
 * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
```

This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while
telepresence is connected.

The ideal resolution in this case is to move the pods to a different subnet. This is possible,
for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods.
In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you
to reach hosts inside the VPC's `10.0.0.0/19`.

However, it is not always possible to move the pods to a different subnet.
In these cases, you should use the [never-proxy](../cluster-config#neverproxy) configuration to prevent certain
hosts from being masked.
This might be particularly important for DNS resolution. In an AWS ClientVPN setup, it is
customary to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):

![Modify Client VPN Endpoint](../images/vpn-dns.png)

If this is the case for your VPN, you should place the DNS server in the never-proxy list for your
cluster. In the values file that you pass to `telepresence helm install [--upgrade] --values <values-file>`, add a `client.routing`
entry like so:

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.2/32
```

##### Case 2: Cluster masked by VPN

In an instance where the cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR
that the VPN routes:

```
❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
 * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
 * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
```

Typically this means that pods within `10.0.0.0/8` are not accessible while the VPN is
connected.

As with the first case, the ideal resolution is to move the pods away, but this may not always
be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR
(that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity.
One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites)
section for more on split-tunneling).

Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need
to use never-proxy rules to whitelist hosts in the VPN:

```
❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1.
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list +``` +
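In that situation, the never-proxy entries go in the same `client.routing` section shown earlier. A sketch of such a values file follows; the CIDRs are illustrative and should be replaced with the VPN-routed hosts you actually need to reach:

```yaml
client:
  routing:
    neverProxySubnets:
      - 10.0.0.2/32   # the VPN's DNS server
      - 10.0.5.0/24   # other VPN-routed hosts that must stay reachable
```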
diff --git a/docs/telepresence/2.9/release-notes/no-ssh.png b/docs/telepresence/2.9/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/no-ssh.png differ diff --git a/docs/telepresence/2.9/release-notes/run-tp-in-docker.png b/docs/telepresence/2.9/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.2.png b/docs/telepresence/2.9/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.2-subnets.png differ diff 
--git a/docs/telepresence/2.9/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.7-keydesc.png differ diff --git 
a/docs/telepresence/2.9/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/2.9/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.8-health-check.png 
b/docs/telepresence/2.9/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/2.9/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/2.9/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/2.9/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/2.9/release-notes/tunnel.jpg b/docs/telepresence/2.9/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/2.9/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/2.9/releaseNotes.yml b/docs/telepresence/2.9/releaseNotes.yml new file mode 100644 index 000000000..117f6eda1 --- /dev/null +++ b/docs/telepresence/2.9/releaseNotes.yml @@ -0,0 +1,1947 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
+ +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.9.5 + date: "2022-12-08" + notes: + - type: security + title: Update to golang v1.19.4 + body: >- + Apply security updates by updating to golang v1.19.4 + docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU + - type: bugfix + title: GCE authentication + body: >- + Fixed a regression introduced in 2.9.3 that prevented the use of GCE authentication unless a config element was also present in the GCE configuration in the kubeconfig. + - version: 2.9.4 + date: "2022-12-02" + notes: + - type: feature + title: Subnet detection strategy + body: >- + The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs. + - type: bugfix + title: Fix `--set` flag for `telepresence helm install` + body: >- + The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag. + - type: bugfix + title: Fix `agent.image` setting propagation + body: >- + Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file. + - type: bugfix + title: Delay file sharing until needed + body: >- + Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected. + - type: bugfix + title: Cleanup on `telepresence quit` + body: >- + The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called. + - type: bugfix + title: Watch `config.yml` without panic + body: >- + The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active. + - type: bugfix + title: Thread safety + body: >- + Fixed a race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession. + - version: 2.9.3 + date: "2022-11-23" + notes: + - type: feature + title: Helm options for `livenessProbe` and `readinessProbe` + body: >- + The Helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond. + - type: change + title: Improved network communication + body: >- + The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon. + - type: bugfix + title: Root daemon debug logging + body: >- + Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon. + - type: bugfix + title: Multi-valued flag propagation + body: >- + Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly. + - type: bugfix + title: Root daemon stability + body: >- + The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. + - type: bugfix + title: Base DNS resolver + body: >- + Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied. + - version: 2.9.2 + date: "2022-11-16" + notes: + - type: bugfix + title: Fix panic + body: >- + Fix panic when connecting to an older traffic-manager. + - type: bugfix + title: Fix header flag + body: >- + Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly.
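The `intercept.useFtp=true` flag syntax quoted in the 2.9.4 note above can also be expressed as a Helm values fragment. This is a sketch; the top-level placement of the key is an assumption based on the flag syntax in the note:

```yaml
# Equivalent of --set intercept.useFtp=true as a values file fragment.
intercept:
  useFtp: true   # use the embedded FTP client for remote mounts
```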
+ - version: 2.9.1 + date: "2022-11-16" + notes: + - type: bugfix + title: Connect failures due to missing auth provider. + body: >- + The regression in 2.9.0 that caused a no Auth Provider found for name "gcp" error when connecting was fixed. + - version: 2.9.0 + date: "2022-11-15" + notes: + - type: feature + title: New command to view client configuration. + body: >- + A new telepresence config view command was added to make it easy to view the current client configuration. + docs: new-in-2.9#view-the-client-configuration + - type: feature + title: Configure Clients using the Helm chart. + body: >- + The traffic-manager can now configure all clients that connect through the client: map in the values.yaml file. + docs: reference/cluster-config#client-configuration + - type: feature + title: The Traffic Manager version is more visible. + body: >- + The command telepresence version will now include the version of the traffic manager when the client is connected to a cluster. + - type: feature + title: Command output in YAML format. + body: >- + The global --output flag now accepts both yaml and json. + docs: new-in-2.9#yaml-output + - type: change + title: Deprecated status command flag + body: >- + The telepresence status --json flag is deprecated. Use telepresence status --output=json instead. + - type: bugfix + title: Unqualified service name resolution in docker. + body: >- + Unqualified service names now resolve correctly from the docker container when using telepresence intercept --docker-run. + docs: https://github.com/telepresenceio/telepresence/issues/2870 + - type: bugfix + title: Output no longer mixes plaintext and json. + body: >- + Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon", or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted output when using --output=json. + docs: https://github.com/telepresenceio/telepresence/issues/2854 + - type: bugfix + title: No more panic when invalid port names are detected. + body: >- + A `telepresence intercept` of services with an invalid port name no longer causes a panic. + docs: https://github.com/telepresenceio/telepresence/issues/2880 + - type: bugfix + title: Proper errors for bad output formats. + body: >- + An attempt to use an invalid value for the global --output flag now renders a proper error message. + - type: bugfix + title: Remove lingering DNS config on macOS. + body: >- + Files lingering under /etc/resolver as a result of an ungraceful shutdown of the root daemon on macOS are now removed when a new root daemon starts. + - version: 2.8.5 + date: "2022-11-02" + notes: + - type: security + title: CVE-2022-41716 + body: >- + Updated Golang to 1.19.3 to address CVE-2022-41716. + - version: 2.8.4 + date: "2022-11-02" + notes: + - type: bugfix + title: Release Process + body: >- + This release resulted in changes to our release process. + - version: 2.8.3 + date: "2022-10-27" + notes: + - type: feature + title: Ability to disable global intercepts. + body: >- + Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal. + docs: https://github.com/telepresenceio/telepresence/issues/2140 + - type: feature + title: Configurable mutating webhook port + body: >- + The port used for the mutating webhook can be configured using the Helm chart setting + agentInjector.webhook.port.
+ docs: install/helm + - type: change + title: Mutating webhook port defaults to 443 + body: >- + The default port for the mutating webhook is now 443. It used to be 8443. + - type: change + title: Agent image configuration mandatory in air-gapped environments. + body: >- + The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart. + - type: bugfix + title: Can now connect to non-helm installs + body: >- + telepresence connect now works as long as the traffic manager is installed, even if it wasn't installed via <code>helm install</code>
+ docs: https://github.com/telepresenceio/telepresence/issues/2824 + - type: bugfix + title: check-vpn crash fixed + body: >- + telepresence check-vpn no longer crashes when the daemons don't start properly. + - version: 2.8.2 + date: "2022-10-15" + notes: + - type: bugfix + title: Reinstate 2.8.0 + body: >- + There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated. + - version: 2.8.1 + date: "2022-10-14" + notes: + - type: bugfix + title: Rollback 2.8.0 + body: >- + Rollback 2.8.0 while we investigate an issue with Ambassador Cloud. + - version: 2.8.0 + date: "2022-10-14" + notes: + - type: feature + title: Improved DNS resolver + body: >- + The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME, MX, NS, PTR, SRV, and TXT. + docs: reference/dns + - type: feature + title: New `client` structure in Helm chart + body: >- + A new client struct was added to the Helm chart. It contains a connectionTTL that controls how long the traffic manager will retain a client connection without seeing any sign of life from the client. + docs: reference/cluster-config#Client-Configuration + - type: feature + title: Include and exclude suffixes configurable using the Helm chart. + body: >- + A dns element was added to the client struct in the Helm chart. It contains includeSuffixes and excludeSuffixes values that control which names the DNS resolver in the client will delegate to the cluster. + docs: reference/cluster-config#DNS + - type: feature + title: Configurable traffic-manager API port + body: >- + The API port used by the traffic-manager is now configurable using the Helm chart value apiPort. The default port is 8081. + docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence + - type: feature + title: Envoy server and admin port configuration. + body: >- + A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and admin port of the Envoy proxy running in the enhanced traffic-agent can be configured. + docs: reference/cluster-config#Envoy-Configuration + - type: change + title: Helm chart `dnsConfig` moved to `client.routing`. + body: >- + The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets and neverProxySubnets can now be found under routing in the client struct. + docs: reference/cluster-config#Routing + - type: change + title: Helm chart `agentInjector.agentImage` moved to `agent.image`. + body: >- + The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Image-Configuration + - type: change + title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`. + body: >- + The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Application-Protocol-Selection + - type: change + title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed. + body: >- + The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed because they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to dedicate an IP for its local resolver.
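Taken together, the 2.8.x Helm values named in the notes above might look like the following values.yaml sketch. The key names come from the release notes; every concrete value, and the sub-fields of agent.image, are hypothetical illustrations rather than documented defaults:

```yaml
# Sketch of the 2.8.x Helm values discussed in the notes above.
agentInjector:
  webhook:
    port: 443                 # new default as of 2.8.3; used to be 8443
agent:
  image:
    name: tel2                # air-gapped clusters must declare this explicitly
client:
  connectionTTL: 24h          # drop clients that show no sign of life
  dns:
    includeSuffixes: [.cluster.local]
    excludeSuffixes: [.com, .io]
  routing:
    alsoProxySubnets:
      - 10.64.0.0/16
    neverProxySubnets:
      - 192.168.1.0/24
apiPort: 8081                 # traffic-manager API port (default per the notes)
```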
+ - type: change + title: Quit daemons with `telepresence quit -s` + body: >- + The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option, `-s`, which will quit both the root daemon and the user daemon. + - type: bugfix + title: Environment variable interpolation in pods now works. + body: >- + Environment variable interpolation now works for all definitions that are copied from pod containers into the injected traffic-agent container. + - type: bugfix + title: Early detection of namespace conflict + body: >- + An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation is detected early and prohibited instead of resulting in failing DNS lookups later on. + - type: bugfix + title: Annoying log message removed + body: >- + Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason is normal context cancellation. + - type: bugfix + title: Single name DNS resolution in Docker on Linux host + body: >- + Single label names now resolve correctly when using Telepresence in Docker on a Linux host. + - type: bugfix + title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`. + body: >- + The Helm chart value appProtocolStrategy is now correctly named (it used to be appPortStategy). + - version: 2.7.6 + date: "2022-09-16" + notes: + - type: feature + title: Helm chart resource entries for injected agents + body: >- + The resources for the traffic-agent container and the optional init container can be specified in the Helm chart using the resources and initResource fields of the agentInjector.agentImage. + - type: feature + title: Cluster event propagation when injection fails + body: >- + When the traffic-manager fails to inject a traffic-agent, the cause of the failure is detected by reading the cluster events and propagated to the user. + - type: feature + title: FTP-client instead of sshfs for remote mounts + body: >- + Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x and is enabled by setting intercept.useFtp to true in the config.yml. + - type: change + title: Upgrade of winfsp + body: >- + Telepresence on Windows upgraded winfsp from version 1.10 to 1.11. + - type: bugfix + title: Removal of invalid warning messages + body: >- + Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo and /proc/self/auxv. + - version: 2.7.5 + date: "2022-09-14" + notes: + - type: change + title: Rollback of release 2.7.4 + body: >- + This release is a rollback of the changes in 2.7.4, so it is essentially the same as 2.7.3. + - version: 2.7.4 + date: "2022-09-14" + notes: + - type: change + body: >- + This release was broken on some platforms. Use 2.7.6 instead. + - version: 2.7.3 + date: "2022-09-07" + notes: + - type: bugfix + title: PTY for CLI commands + body: >- + CLI commands that are executed by the user daemon now use a pseudo TTY. This enables docker run -it to allocate a TTY and also gives other commands, like bash read, the same behavior as when executed directly in a terminal. + docs: https://github.com/telepresenceio/telepresence/issues/2724 + - type: bugfix + title: Traffic Manager useless warning silenced + body: >- + The traffic-manager will no longer log numerous warnings saying Issuing a systema request without ApiKey or InstallID may result in an error.
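The 2.7.6 resource entries described above could be expressed as a values.yaml sketch like the one below. The field names follow the note's wording, and all quantities are hypothetical examples:

```yaml
# Sketch: resource entries for the injected traffic-agent container and its
# optional init container, per the 2.7.6 note. Quantities are hypothetical.
agentInjector:
  agentImage:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
    initResource:
      limits:
        cpu: 50m
        memory: 64Mi
```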
+ - type: bugfix + title: Traffic Manager useless error silenced + body: >- + The traffic-manager will no longer log an error saying Unable to derive subnets from nodes when the podCIDRStrategy is auto and it chooses to instead derive the subnets from the pod IPs. + - version: 2.7.2 + date: "2022-08-25" + notes: + - type: feature + title: Autocompletion scripts + body: >- + Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell. + - type: feature + title: Connectivity check timeout + body: >- + The timeout for the initial connectivity check that Telepresence performs in order to determine if the cluster's subnets are proxied or not can now be configured in the config.yml file using timeouts.connectivityCheck. The default timeout was changed from 5 seconds to 500 milliseconds to speed up the actual connect. + docs: reference/config#timeouts + - type: change + title: gather-traces feedback + body: >- + The command telepresence gather-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: upload-traces feedback + body: >- + The command telepresence upload-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: gather-traces tracing + body: >- + The command telepresence gather-traces now traces itself and reports errors with trace gathering. + docs: troubleshooting#distributed-tracing + - type: change + title: CLI log level + body: >- + The cli.log is now logged at the same level as the connector.log. + docs: reference/config#log-levels + - type: bugfix + title: Telepresence --help fixed + body: >- + telepresence --help now works once more even if there's no user daemon running. + docs: https://github.com/telepresenceio/telepresence/issues/2735 + - type: bugfix + title: Stream cancellation when no process intercepts + body: >- + Streams created between the traffic-agent and the workstation are now properly closed when no interceptor process has been started on the workstation. This fixes a potential problem where a large number of attempts to connect to a non-existing interceptor would cause stream congestion and an unresponsive intercept. + - type: bugfix + title: List command excludes the traffic-manager + body: >- + The telepresence list command no longer includes the traffic-manager deployment. + - version: 2.7.1 + date: "2022-08-10" + notes: + - type: change + title: Reinstate telepresence uninstall + body: >- + Reinstate telepresence uninstall, with the --everything flag deprecated. + - type: change + title: Reduce telepresence helm uninstall + body: >- + telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags. + - type: bugfix + title: Auto-connect for telepresence intercept + body: >- + telepresence intercept will attempt to connect to the traffic manager before creating an intercept. + - version: 2.7.0 + date: "2022-08-07" + notes: + - type: feature + title: Saved Intercepts + body: >- + Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME + docs: reference/intercepts#sharing-intercepts-with-teammates + - type: feature + title: Distributed Tracing + body: >- + The Telepresence components now collect OpenTelemetry traces.
+ Up to 10MB of trace data are available at any given time for collection from components. telepresence gather-traces is a new command that will collect all that data and place it into a gzip file, and telepresence upload-traces is a new command that will push the gzipped data into an OTLP collector. + docs: troubleshooting#distributed-tracing + - type: feature + title: Helm install + body: >- + A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager. + docs: install/manager + - type: feature + title: Ignore Volume Mounts + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string. + - type: feature + title: telepresence pod-daemon + body: >- + The Docker image now contains a new program in addition to the existing traffic-manager and traffic-agent: the pod-daemon. The pod-daemon is a trimmed-down version of the user-daemon that is designed to run as a sidecar in a Pod, enabling CI systems to create preview deploys. + - type: feature + title: Prometheus support for traffic manager + body: >- + Added Prometheus support to the traffic manager. + - type: change + title: No install on telepresence connect + body: >- + The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error. + docs: install/manager + - type: change + title: Helm Uninstall + body: >- + The command telepresence uninstall has been moved to telepresence helm uninstall. + docs: install/manager + - type: bugfix + title: readOnlyRootFileSystem mounts work + body: >- + Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true` + docs: https://github.com/telepresenceio/telepresence/pull/2666 + - version: 2.6.8 + date: "2022-06-23" + notes: + - type: feature + title: Specify Your DNS + body: >- + The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified. + - type: feature + title: Specify a Fallback DNS + body: >- + Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use. + - type: feature + title: Intercept UDP Ports + body: >- + It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost. + - type: change + title: Additional Helm Values + body: >- + The Helm chart will now add the nodeSelector, affinity and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs. + - type: bugfix + title: Agent Injection Bugfix + body: >- + Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`. + - version: 2.6.7 + date: "2022-06-22" + notes: + - type: bugfix + title: Persistent Sessions + body: >- + The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect. + - type: bugfix + title: DNS Requests + body: >- + Telepresence will no longer forward DNS requests for "wpad" to the cluster. + - type: bugfix + title: Graceful Shutdown + body: >- + The traffic-agent will properly shut down if one of its goroutines errors.
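The 2.7.0 inject-ignore-volume-mounts annotation described above is applied to the workload's pod template. A minimal sketch, where "cache" and "scratch" are hypothetical volume mount names:

```yaml
# Sketch: a pod template telling the agent injector to ignore two
# volume mounts, using the annotation named in the 2.7.0 notes.
spec:
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-ignore-volume-mounts: "cache,scratch"
```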
+ - version: 2.6.6 + date: "2022-06-09" + notes: + - type: bugfix + title: Env Var `TELEPRESENCE_API_PORT` + body: >- + The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly. + - type: bugfix + title: Double Printing `--output json` + body: >- + The --output json global flag no longer outputs multiple objects. + - version: 2.6.5 + date: "2022-06-03" + notes: + - type: feature + title: Helm Option -- `reinvocationPolicy` + body: >- + The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Proxy Certificate + body: >- + The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Agent Injection + body: >- + A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart. + docs: install/helm + - type: change + title: Windows Tunnel Version Upgrade + body: >- + Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1. + - type: change + title: Helm Version Upgrade + body: >- + Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9. + - type: change + title: Kubernetes API Version Upgrade + body: >- + Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1. + - type: feature + title: Flag `--watch` Added to `list` Command + body: >- + Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace. + - type: change + title: Deprecated `images.webhookAgentImage` + body: >- + The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead. + - type: bugfix + title: Default `reinvocationPolicy` Set to Never + body: >- + The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container. + - type: bugfix + title: UDP + body: >- + UDP based communication with services in the cluster now works as expected. + - type: bugfix + title: Telepresence `--help` + body: >- + The command help will only show Kubernetes flags on the commands that support them. + - type: change + title: Error Count + body: >- + Only the errors from the last session will be considered when counting the number of errors in the log after a command failure. + - version: 2.6.4 + date: "2022-05-23" + notes: + - type: bugfix + title: Upgrade RBAC Permissions + body: >- + The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade. + - version: 2.6.3 + date: "2022-05-20" + notes: + - type: bugfix + title: Relative Mount Paths + body: >- + The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon. + - type: bugfix + title: Traffic Agent Config + body: >- + The traffic-agent's configuration now updates automatically when services are added, updated or deleted.
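The 2.6.5 reinvocationPolicy setting above might be expressed in values.yaml as follows. The placement under agentInjector.webhook is an assumption; the notes name only the setting and its Never/IfNeeded values:

```yaml
# Sketch: overriding the injector webhook's reinvocationPolicy.
agentInjector:
  webhook:
    reinvocationPolicy: Never   # the default since 2.6.5
```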
+ - type: bugfix + title: Container Injection for Numeric Ports + body: >- + Telepresence will now always inject an initContainer when the service's targetPort is numeric. + - type: bugfix + title: Matching Services + body: >- + Workloads that have several matching services pointing to the same target port are now handled correctly. + - type: bugfix + title: Unexpected Panic + body: >- + A potential race condition causing a panic when closing a DNS connection is now handled correctly. + - type: bugfix + title: Mount Volume Cleanup + body: >- + A container start would sometimes fail because an old directory remained in a mounted temp volume. + - version: 2.6.2 + date: "2022-05-17" + notes: + - type: bugfix + title: Argo Injection + body: >- + Workloads controlled by other workloads, such as Argo Rollout, are now injected correctly. + - type: bugfix + title: Agent Port Mapping + body: >- + Multiple services targeting the same container port no longer result in duplicated ports in an injected pod. + - type: bugfix + title: GRPC Max Message Size + body: >- + The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads. + - version: 2.6.1 + date: "2022-05-16" + notes: + - type: bugfix + title: KUBECONFIG environment variable + body: >- + Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly. + - type: bugfix + title: Don't Panic + body: >- + Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0. + - version: 2.6.0 + date: "2022-05-13" + notes: + - type: feature + title: Intercept multiple containers in a pod, and multiple ports per container + body: >- + Telepresence can now intercept multiple services and/or service-ports that connect to the same pod. + docs: new-in-2.6#intercept-multiple-containers-and-ports + - type: feature + title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook + body: >- + The client will no longer modify deployments, replicasets, or statefulsets in order to inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result, the client now needs fewer permissions in the cluster. + docs: install/upgrade#important-note-about-upgrading-to-2.6.0 + - type: change + title: Automatic upgrade of Traffic Agents + body: >- + When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically. The mutating webhook will then ensure that their pods will receive an updated Traffic Agent. + docs: new-in-2.6#no-more-workload-modifications + - type: change + title: No default image in the Helm chart + body: >- + The Helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the traffic-manager will ask Ambassador Cloud for the preferred image. + docs: new-in-2.6#smarter-agent + - type: change + title: Upgrade to Helm version 3.8.1 + body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager. + - type: bugfix + title: Remote mounts will now function correctly with custom securityContext + body: >- + The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed. + - type: bugfix + title: Improved presentation of flags in CLI help + body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+ - type: bugfix + title: Better termination of process parented by intercept + body: >- + Occasionally an intercept will spawn a command using -- on the command line, often in another console. When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active, Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed. + - version: 2.5.8 + date: "2022-04-27" + notes: + - type: bugfix + title: Folder creation on `telepresence login` + body: >- + Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands. + - version: 2.5.7 + date: "2022-04-25" + notes: + - type: change + title: RBAC requirements + body: >- + A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used. + - type: bugfix + title: Windows DNS + body: >- + The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times. + - type: bugfix + title: Session TTL and Reconnect + body: >- + A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect. + - version: 2.5.6 + date: "2022-04-18" + notes: + - type: change + title: Fewer Watchers + body: >- + The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect. + - type: bugfix + title: More Efficient `gather-logs` + body: >- + The gather-logs command will no longer send any logs through gRPC. + - version: 2.5.5 + date: "2022-04-08" + notes: + - type: change + title: Traffic Manager Permissions + body: >- + The traffic-manager now requires permissions to read pods across namespaces even if installed with limited permissions. + - type: bugfix + title: Linux DNS Cache + body: >- + The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes. + - type: bugfix + title: Automatic Connect Sync + body: >- + The telepresence list command will produce a correct listing even when not preceded by a telepresence connect. + - type: bugfix + title: Disconnect Reconnect Stability + body: >- + The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect. + - type: bugfix + title: Limit Watched Namespaces + body: >- + The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag. + - type: bugfix + title: Limit Namespaces used in `gather-logs` + body: >- + The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag. + - version: 2.5.4 + date: "2022-03-29" + notes: + - type: bugfix + title: Linux DNS Concurrency + body: >- + The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out. + - type: bugfix + title: Non-Functional Flag + body: >- + The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag. + - type: bugfix + title: Automatically Remove Failed Intercepts + body: >- + Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+ - type: bugfix + title: Agent UID + body: >- + The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext. + - type: bugfix + title: Gather-Logs Output Filepath + body: >- + Removed a bad concatenation that corrupted the output path of telepresence gather-logs. + - type: change + title: Remove Unnecessary Error Advice + body: >- + The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command. + - type: bugfix + title: Garbage Collection + body: >- + Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart. + - type: bugfix + title: Limit Gathered Logs + body: >- + The client's gather logs command and agent watcher will now respect the configured grpc.maxReceiveSize. + - type: change + title: In-Cluster Checks + body: >- + The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster. + - type: change + title: Expanded Status Command + body: >- + The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON. + - type: change + title: List Command Shows All Intercepts + body: >- + The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces. + - version: 2.5.3 + date: "2022-02-25" + notes: + - type: bugfix + title: TCP Connectivity + body: >- + Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address. + - type: feature + title: Linux Binaries + body: >- + Client-side binaries for the arm64 architecture are now available for Linux. + - version: 2.5.2 + date: "2022-02-23" + notes: + - type: bugfix + title: DNS server bugfix + body: >- + Fixed a bug where Telepresence would use the last server in resolv.conf. + - version: 2.5.1 + date: "2022-02-19" + notes: + - type: bugfix + title: Fix GKE auth issue + body: >- + Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp" + - version: 2.5.0 + date: "2022-02-18" + notes: + - type: feature + title: Intercept specific endpoints + body: >- + The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in addition to the --http-match flag to filter personal intercepts by the request URL path. + docs: concepts/intercepts#intercepting-a-specific-endpoint + - type: feature + title: Intercept metadata + body: >- + The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence rest API endpoint /intercept-info. + docs: reference/restapi#intercept-info + - type: change + title: Client RBAC watch + body: >- + The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC ClusterRole. + docs: reference/rbac + - type: change + title: Dropped backward compatibility with versions <=2.4.4 + body: >- + Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel functionality was removed. + - type: change + title: No global networking flags + body: >- + The global networking flags are no longer used and using them will render a deprecation warning unless they are supported by the command. The subcommands that support networking flags are connect, current-cluster-id, and genyaml.
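The grpc.maxReceiveSize setting honored by the 2.5.4 "Limit Gathered Logs" change above lives in the client's config.yml. A minimal sketch, where the 10Mi quantity is a hypothetical example rather than a documented default:

```yaml
# Sketch of a config.yml fragment raising the gRPC receive limit that
# gather-logs and the agent watcher respect per the 2.5.4 notes.
grpc:
  maxReceiveSize: 10Mi
```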
+ - type: bugfix + title: Output of status command + body: >- + The also-proxy and never-proxy subnets are now displayed correctly when using the telepresence status command. + - type: bugfix + title: SETENV sudo privilege no longer needed + body: >- + Telepresence no longer requires SETENV privileges when starting the root daemon. + - type: bugfix + title: Network device names containing dash + body: >- + Telepresence will now parse device names containing dashes correctly when determining routes that it should never block. + - type: bugfix + title: Linux uses cluster.local as domain instead of search + body: >- + The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using systemd-resolved. Instead, it is added as a domain so that names ending with it are routed to the DNS server. + - version: 2.4.11 + date: "2022-02-10" + notes: + - type: change + title: Add additional logging to troubleshoot intermittent issues with intercepts + body: >- + We've noticed some issues with intercepts in v2.4.10, so we are releasing a version with enhanced logging to help debug and fix the issue. + - version: 2.4.10 + date: "2022-01-13" + notes: + - type: feature + title: Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts can now be configured using the intercept.appProtocolStrategy in the config.yml file. + docs: reference/config/#intercept + image: telepresence-2.4.10-intercept-config.png + - type: feature + title: Helm value for the Application Protocol Strategy + body: >- + The strategy used when selecting the application protocol for personal intercepts in agents injected by the mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart. + docs: install/helm + - type: feature + title: New --http-plaintext option + body: >- + The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when communicating with the workstation process. + docs: reference/intercepts/#tls + - type: feature + title: Configure the default intercept port + body: >- + The port used by default in the telepresence intercept command (8080) can now be changed by setting the intercept.defaultPort in the config.yml file. + docs: reference/config/#intercept + - type: change + title: Telepresence CI now uses GitHub Actions + body: >- + Telepresence now uses GitHub Actions for doing unit and integration testing. It is now easier for contributors to run tests on PRs since maintainers can add an "ok to test" label to PRs (including from forks) to run integration tests. + docs: https://github.com/telepresenceio/telepresence/actions + image: telepresence-2.4.10-actions.png + - type: bugfix + title: Check conditions before asking questions + body: >- + Users will not be asked to log in or add ingress information when creating an intercept until a check has been made that the intercept is possible. + docs: reference/intercepts/ + - type: bugfix + title: Fix invalid log statement + body: >- + Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors. + - type: bugfix + title: Log errors from sshfs/sftp + body: >- + Output to stderr from the traffic-agent's sftp and the client's sshfs processes is now properly logged as errors.
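The two 2.4.10 config.yml settings above might be combined as in this sketch; the portName strategy value and the 3000 port are hypothetical examples:

```yaml
# Sketch of a config.yml fragment using the 2.4.10 intercept settings.
intercept:
  appProtocolStrategy: portName
  defaultPort: 3000   # overrides the 8080 default
```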
+ - type: bugfix + title: Don't use Windows path separators in workload pod template + body: >- + The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the traffic-agent container spec when running on Windows. + - version: 2.4.9 + date: "2021-12-09" + notes: + - type: bugfix + title: Helm upgrade nil pointer error + body: >- + A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil telepresenceAPI value. + docs: install/helm#upgrading-the-traffic-manager + - version: 2.4.8 + date: "2021-12-03" + notes: + - type: feature + title: VPN diagnostics tool + body: >- + There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN. See the VPN docs for more information on how to use it. + docs: reference/vpn + image: telepresence-2.4.8-vpn.png + + - type: feature + title: RESTful API service + body: >- + A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to help determine whether messages with a given set of headers should be consumed from a message queue to which the intercept headers are added. + docs: reference/restapi + image: telepresence-2.4.8-health-check.png + + - type: change + title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used + body: >- + You could previously configure this value, but there was no reason to change it, so the value was removed. + + - type: bugfix + title: Tunneled network connections behave more like ordinary TCP connections. + body: >- + When using Telepresence with an external cloud provider for extensions, those tunneled connections now behave more like TCP connections, especially when it comes to timeouts. We've also added increased testing around these types of connections. + - version: 2.4.7 + date: "2021-11-24" + notes: + - type: feature + title: Injector service-name annotation + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted. This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts. + docs: reference/cluster-config#service-name-annotation + - type: feature + title: Skip the Ingress Dialogue + body: >- + You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags. + docs: reference/intercepts#skipping-the-ingress-dialogue + - type: feature + title: Never proxy subnets + body: >- + The kubeconfig extensions now support a never-proxy argument, analogous to also-proxy, that defines a set of subnets that will never be proxied via telepresence. + docs: reference/config#neverproxy + - type: change + title: Daemon versions check + body: >- + Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match. + - type: change + title: No explicit DNS flushes + body: >- + Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches. + docs: reference/routing#dns-caching + - type: bugfix + title: Legacy flags now work with global flags + body: >- + Legacy flags such as --swap-deployment can now be used together with global flags. + - type: bugfix + title: Outbound connection closing + body: >- + Outbound connections are now properly closed when the peer closes.
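The 2.4.7 never-proxy argument above lives in a kubeconfig cluster extension. The sketch below follows the note's description of never-proxy as analogous to also-proxy; the extension layout, cluster name, and CIDR are assumptions for illustration:

```yaml
# Sketch: a kubeconfig cluster entry with a never-proxy subnet list.
clusters:
  - name: example-cluster
    cluster:
      server: https://example-cluster.example.com:6443
      extensions:
        - name: telepresence.io
          extension:
            never-proxy:
              - 10.88.0.0/16   # traffic for this subnet stays local
```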
+ - type: bugfix + title: Prevent DNS recursion + body: >- + The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a docker-container on the client). + docs: reference/routing#dns-recursion + - type: bugfix + title: Prevent network recursion + body: >- + The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the cluster runs in a docker-container on the client). + docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login will prompt the user to log in again to get a new token. Previously, the user had to telepresence quit and telepresence logout to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your Kubernetes cluster had an IP that fell within the subnet generated from pods/services in the cluster, Telepresence would proxy traffic to the API server, which would result in hanging or a failed connection. We now ensure that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the pod yaml manifest for all Kubernetes components you are getting logs for (traffic-manager and/or pods containing a traffic-agent container). This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will anonymize your pod names + namespaces in the output file. We replace the sensitive names with simple names (e.g. pod-1, namespace-2) to maintain relationships between the objects without exposing the real names of your objects. This flag is set to false by default.
+ docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. + docs: howtos/preview-urls + image: telepresence-2.4.5-preview-url-questions.png + + - type: feature + title: Support for intercepting headless services + body: >- + Intercepting headless services is now officially supported. You can request a headless service on whatever port it exposes and get a response from the intercept. This leverages the same approach as intercepting numeric ports when using the mutating webhook injector, and mainly requires the initContainer to have NET_ADMIN capabilities. + docs: reference/intercepts/#intercepting-headless-services + + - type: change + title: Use one tunnel per connection instead of multiplexing into one tunnel + body: >- + We have changed Telepresence so that it uses one tunnel per connection instead of multiplexing all connections into one tunnel. This will provide substantial performance improvements. Clients will still be backwards compatible with older managers that only support multiplexing. + + - type: bugfix + title: Added checks for Telepresence Kubernetes compatibility + body: >- + Telepresence currently works with Kubernetes server versions 1.17.0 and higher. We have added logs in the connector and traffic-manager to let users know when they are using Telepresence with a cluster it doesn't support. + docs: reference/cluster-config + + - type: bugfix + title: Traffic Agent security context is now only added when necessary + body: >- + When creating an intercept, Telepresence will now only set the traffic agent's GID when strictly necessary (i.e. when using headless services or numeric ports). This mitigates an issue on OpenShift clusters where the traffic agent can fail to be created due to OpenShift's security policies banning arbitrary GIDs. + + - version: 2.4.4 + date: "2021-09-27" + notes: + - type: feature + title: Numeric ports in agent injector + body: >- + The agent injector now supports injecting Traffic Agents into pods that have unnamed ports. + docs: reference/cluster-config/#note-on-numeric-ports + + - type: feature + title: New subcommand to gather logs and export into zip file + body: >- + Telepresence has logs for various components (the traffic-manager, traffic-agents, the root and user daemons), which are integral for understanding and debugging Telepresence behavior. We have added the telepresence gather-logs command to make it simple to compile logs for all Telepresence components and export them in a zip file that can be shared with others and/or included in a GitHub issue. For more information on usage, run telepresence gather-logs --help. + docs: reference/client + image: telepresence-2.4.4-gather-logs.png + + - type: feature + title: Pod CIDR strategy is configurable in Helm chart + body: >- + Telepresence now enables you to directly configure how to get pod CIDRs when deploying Telepresence with the Helm chart. The default behavior remains the same. We've also introduced the ability to explicitly set what the pod CIDRs should be.
+ docs: install/helm + + - type: bugfix + title: Compute pod CIDRs more efficiently + body: >- + When computing subnets using the pod CIDRs, the traffic-manager now uses fewer CPU cycles. + docs: reference/routing/#subnets + + - type: bugfix + title: Prevent busy loop in traffic-manager + body: >- + In some circumstances, the traffic-manager's CPU would max out and get pinned at its limit. This required a shutdown or pod restart to fix. We've added some fixes to prevent the traffic-manager from getting into this state. + + - type: bugfix + title: Added a fixed buffer size to TUN-device + body: >- + The TUN-device now has a max buffer size of 64K. This prevents the buffer from growing limitlessly until it receives a PSH, which could be a blocking operation when receiving lots of TCP-packets. + docs: reference/tun-device + + - type: bugfix + title: Fix hanging user daemon + body: >- + When Telepresence encountered an issue connecting to the cluster or the root daemon, it could hang indefinitely. It will now error out correctly when it encounters that situation. + + - type: bugfix + title: Improved proprietary agent connectivity + body: >- + To determine whether the environment cluster is air-gapped, the proprietary agent attempts to connect to the cloud during startup. To deal with a possible initial failure, the agent backs off and retries the connection with an increasing backoff duration. + + - type: bugfix + title: Telepresence correctly reports intercept port conflict + body: >- + When creating a second intercept targeting the same local port, it now gives the user an informative error message. Additionally, it tells them which intercept is currently using that port to make it easier to remedy. + + - version: 2.4.3 + date: "2021-09-15" + notes: + - type: feature + title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment + body: >- + When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment variable in the environment. + docs: reference/environment/#telepresence-environment-variables + + - type: bugfix + title: Improved daemon stability + body: >- + Fixed a timing bug that sometimes caused a "daemon did not start" failure. + + - type: bugfix + title: Complete logs for Windows + body: >- + Crash stack traces and other errors were incorrectly not written to log files. This has been fixed, so logs for Windows should be at parity with the ones on macOS and Linux. + + - type: bugfix + title: Log rotation fix for Linux kernel 4.11+ + body: >- + On Linux kernel 4.11 and above, the log file rotation now properly reads the birth-time of the log file. Older kernels continue to use the old behavior of using the change-time in place of the birth-time. + + - type: bugfix + title: Improved error messaging + body: >- + When Telepresence encounters an error, it tells the user where they should look for logs related to the error. We have refined this so that it only tells users to look for errors in the daemon logs for issues that are logged there. + + - type: bugfix + title: Stop resolving localhost + body: >- + When using the overriding DNS resolver, it will no longer apply search paths when resolving localhost, since that should be resolved on the user's machine instead of the cluster. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Variable cluster domain + body: >- + Previously, the cluster domain was hardcoded to cluster.local.
While this is true for many Kubernetes clusters, it is not true for all of them. Now this value is retrieved from the traffic-manager. + + - type: bugfix + title: Improved cleanup of traffic-agents + body: >- + Telepresence now uninstalls traffic-agents installed via mutating webhook when using telepresence uninstall --everything. + + - type: bugfix + title: More large file transfer fixes + body: >- + Downloading large files during an intercept will no longer cause timeouts and hanging traffic-agents. + + - type: bugfix + title: Setting --mount to false when intercepting works as expected + body: >- + When using --mount=false while performing an intercept, the file system was still mounted. This has been remedied so the intercept behavior respects the flag. + docs: reference/volume + + - type: bugfix + title: Traffic-manager establishes outbound connections in parallel + body: >- + Previously, the traffic-manager established outbound connections sequentially. This meant that slow (and failing) Dial calls would block all outbound traffic from the workstation (for up to 30 seconds). We now establish these connections in parallel so that this won't occur. + docs: reference/routing/#outbound + + - type: bugfix + title: Status command reports correct DNS settings + body: >- + Telepresence status now correctly reports DNS settings for all operating systems, instead of Local IP:nil, Remote IP:nil when they don't exist. + + - version: 2.4.2 + date: "2021-09-01" + notes: + - type: feature + title: New subcommand to temporarily change log-level + body: >- + We have added a new telepresence loglevel subcommand that enables users to temporarily change the log-level for the local daemons, the traffic-manager and the traffic-agents. While the logLevels settings from the config will still be used by default, this can be helpful if you are currently experiencing an issue and want to have higher fidelity logs, without doing a telepresence quit and telepresence connect. You can use telepresence loglevel --help to get more information on options for the command. + docs: reference/config + + - type: change + title: All components have info as the default log-level + body: >- + All components of Telepresence (traffic-agent, traffic-manager, local daemons) now use info as the default log-level. + + - type: bugfix + title: Updating RBAC in helm chart to fix cluster-id regression + body: >- + In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID of the default namespace. The helm chart was not updated to give the traffic-manager those permissions, which has since been fixed. This impacted users who use licensed features of the Telepresence extension in an air-gapped environment. + docs: reference/cluster-config/#air-gapped-cluster + + - type: bugfix + title: Timeouts for Helm actions are now respected + body: >- + The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang indefinitely when failing to install the traffic-manager. + docs: reference/config#timeouts + + - version: 2.4.1 + date: "2021-08-30" + notes: + - type: feature + title: External cloud variables are now configurable + body: >- + We now support configuring the host and port for the cloud in your config.yml. These are used when logging in to utilize features provided by an extension, and are also passed along as environment variables when installing the traffic-manager.
+ Additionally, we now run our testsuite with these variables set to localhost to continue to ensure Telepresence is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT environment variables are no longer used. + image: telepresence-2.4.1-systema-vars.png + docs: reference/config/#cloud + + - type: feature + title: Helm chart can now regenerate certificate used for mutating webhook on-demand. + body: >- + You can now set agentInjector.certificate.regenerate when deploying Telepresence with the Helm chart to automatically regenerate the certificate used by the agent injector webhook. + docs: install/helm + + - type: change + title: Traffic Manager installed via helm + body: >- + The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster. This change is transparent to the user. A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary. + docs: reference/config#timeouts + + - type: change + title: traffic-manager gets cluster ID itself instead of via environment variable + body: >- + The traffic-manager used to get the cluster ID as an environment variable when running telepresence connect or via adding the value in the helm chart. This was clunky, so now the traffic-manager gets the value itself as long as it has permissions to "get" and "list" namespaces (this has been updated in the helm chart). + docs: install/helm + + - type: bugfix + title: Telepresence now mounts all directories from /var/run/secrets + body: >- + In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io. We now mount *all* directories in /var/run/secrets, which, for example, includes directories like eks.amazonaws.com used for IRSA tokens. + docs: reference/volume + + - type: bugfix + title: Max gRPC receive size correctly propagates to all grpc servers + body: >- + This fixes a bug where the max gRPC receive size was only propagated to some of the grpc servers, causing failures when the message size was over the default. + docs: reference/config/#grpc + + - type: bugfix + title: Updated our Homebrew packaging to run manually + body: >- + We made some updates to our script that packages Telepresence for Homebrew so that it can be run manually. This will enable maintainers of Telepresence to run the script manually should we ever need to roll back a release and have latest point to an older version. + docs: install/ + + - type: bugfix + title: Telepresence uses namespace from kubeconfig context on each call + body: >- + In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change. Telepresence will now do that and use the namespace designated by the context on each call. + + - type: bugfix + title: Idle outbound TCP connections timeout increased to 7200 seconds + body: >- + Some users were noticing that their intercepts would start failing after 60 seconds. This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
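The 2.4.1 cloud host and port settings described above live in the client's config.yml. In this sketch the systemaHost/systemaPort key names are assumptions inferred from the SYSTEMA_HOST and SYSTEMA_PORT variables they replace, and the values are hypothetical:

```yaml
# Sketch of a config.yml fragment for the cloud host and port.
cloud:
  systemaHost: localhost   # as used by the testsuite per the note
  systemaPort: 8080        # hypothetical port
```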
+ + - type: bugfix + title: Telepresence will automatically remove a socket upon ungraceful termination + body: >- + When a Telepresence process terminated ungracefully, it would inform users that "this usually means + that the process has terminated ungracefully" and imply that they should remove the socket themselves. Telepresence + now automatically attempts to remove the socket upon ungraceful termination. + + - type: bugfix + title: Fixed user daemon deadlock + body: >- + Remedied a situation where the user daemon could hang when a user was logged in. + + - type: bugfix + title: Fixed agentImage config setting + body: >- + The config setting images.agentImage is no longer required to contain the repository, and it + will use the value at images.repository. + docs: reference/config/#images + + - version: 2.4.0 + date: "2021-08-04" + notes: + - type: feature + title: Windows Client Developer Preview + body: >- + There is now a native Windows client for Telepresence that is being released as a Developer Preview. + All the same features supported by the macOS and Linux clients are available on Windows. + image: telepresence-2.4.0-windows.png + docs: install + + - type: feature + title: CLI raises helpful messages from Ambassador Cloud + body: >- + Telepresence can now receive messages from Ambassador Cloud and raise + them to the user when they perform certain commands. This enables us + to send you messages that may enhance your Telepresence experience when + using certain commands. The frequency of messages can be configured in your + config.yml. + image: telepresence-2.4.0-cloud-messages.png + docs: reference/config#cloud + + - type: bugfix + title: Improved stability of systemd-resolved-based DNS + body: >- + When initializing the systemd-resolved-based DNS, the routing domain + is set to improve stability in non-standard configurations. This also enables the + overriding resolver to do a proper takeover once the DNS service ends. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Fixed an edge case when intercepting a container with multiple ports + body: >- + When specifying a port of a container to intercept, if there was a container in the + pod without ports, it was automatically selected. This has been fixed so we'll only + choose the container with "no ports" if there's no container that explicitly matches + the port used in your intercept. + docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports + + - type: bugfix + title: $(NAME) references in agent's environments are now interpolated correctly + body: >- + If you had an environment variable $(NAME) in your workload that referenced another, intercepts + would not correctly interpolate $(NAME). This has been fixed and works automatically. + + - type: bugfix + title: Telepresence no longer prints INFO message when there is no config.yml + body: >- + Fixed a regression that printed an INFO message to the terminal when there wasn't a + config.yml present. The config is optional, so this message has been + removed. + docs: reference/config + + - type: bugfix + title: Telepresence no longer panics when using --http-match + body: >- + Fixed a bug where Telepresence would panic if the value passed to --http-match + didn't contain an equal sign.
The correct syntax is shown in the --help + string and looks like --http-match=HTTP2_HEADER=REGEX. + docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud + + - type: bugfix + title: Improved subnet updates + body: >- + The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if + the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the + client and the traffic-manager. This has been fixed so we only send updates when the subnets + themselves actually change. + docs: reference/routing/#subnets + + - version: 2.3.7 + date: "2021-07-23" + notes: + - type: feature + title: Also-proxy in telepresence status + body: >- + An also-proxy entry in the Kubernetes cluster config will + show up in the output of the telepresence status command. + docs: reference/config + + - type: feature + title: Non-interactive telepresence login + body: >- + telepresence login now has an + --apikey=KEY flag that allows for + non-interactive logins. This is useful for headless + environments where launching a web browser is impossible, + such as cloud shells, Docker containers, or CI. + image: telepresence-2.3.7-newkey.png + docs: reference/client/login/ + + - type: bugfix + title: Mutating webhook injector correctly hides named ports for probes + body: >- + The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes. + docs: reference/cluster-config + + - type: bugfix + title: telepresence current-cluster-id crash fixed + body: >- + Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id + to crash. + docs: reference/cluster-config + + - type: bugfix + title: Better UX around intercepts with no local process running + body: >- + Requests would hang indefinitely when initiating an intercept before you + had a local process running. This has been fixed and will result in an + Empty reply from server until you start a local process. + docs: reference/intercepts + + - type: bugfix + title: API keys no longer show as "no description" + body: >- + New API keys generated internally for communication with + Ambassador Cloud no longer show up as "no description" in + the Ambassador Cloud web UI. Existing API keys generated by + older versions of Telepresence will still show up this way. + image: telepresence-2.3.7-keydesc.png + + - type: bugfix + title: Fix corruption of user-info.json + body: >- + Fixed a race condition in which logging in and logging out + rapidly could cause memory corruption or corruption of the + user-info.json cache file used when + authenticating with Ambassador Cloud. + + - type: bugfix + title: Improved DNS resolver for systemd-resolved + body: + Telepresence's systemd-resolved-based DNS resolver is now more + stable, and in case it fails to initialize, the overriding resolver + will no longer cause general DNS lookup failures when Telepresence defaults to + using it. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Faster telepresence list command + body: + The performance of telepresence list has been increased + significantly by reducing the number of calls the command makes to the cluster. + docs: reference/client + + - version: 2.3.6 + date: "2021-07-20" + notes: + - type: bugfix + title: Fix preview URLs + body: >- + Fixed a regression introduced in 2.3.5 that caused preview + URLs to not work.
+ + - type: bugfix + title: Fix subnet discovery + body: >- + Fixed a regression introduced in 2.3.5 where the Traffic + Manager's RoleBinding did not correctly reference + the traffic-manager Role, preventing + subnet discovery from working correctly. + docs: reference/rbac/ + + - type: bugfix + title: Fix root-user configuration loading + body: >- + Fixed a regression introduced in 2.3.5 where the root daemon + did not correctly read the configuration file, ignoring the + user's configured log levels and timeouts. + docs: reference/config/ + + - type: bugfix + title: Fix a user daemon crash + body: >- + Fixed an issue that could cause the user daemon to crash + during shutdown, as during shutdown it unconditionally + attempted to close a channel even though the channel might + already be closed. + + - version: 2.3.5 + date: "2021-07-15" + notes: + - type: feature + title: traffic-manager in multiple namespaces + body: >- + We now support installing multiple traffic managers in the same cluster. + This allows operators to install deployments of Telepresence that are + limited to certain namespaces. + image: ./telepresence-2.3.5-traffic-manager-namespaces.png + docs: install/helm + - type: feature + title: No more dependence on kubectl + body: >- + Telepresence no longer depends on having an external + kubectl binary, which might not be present for + OpenShift users (who have oc instead of + kubectl). + - type: feature + title: Agent image now configurable + body: >- + We now support configuring which agent image and registry to use in the + config. This enables users whose laptop is in an air-gapped environment to + create personal intercepts without requiring a login. It also makes it easier + for those who are developing on Telepresence to specify which agent image should + be used. The env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer + used. + image: ./telepresence-2.3.5-agent-config.png + docs: reference/config/#images + - type: feature + title: Max gRPC receive size now configurable + body: >- + The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured. + image: ./telepresence-2.3.5-grpc-max-receive-size.png + docs: reference/config/#grpc + - type: feature + title: CLI can be used in air-gapped environments + body: >- + While Telepresence will auto-detect whether your cluster is in an air-gapped environment, + we've added an option users can add to their config.yml to ensure the CLI acts like it + is in an air-gapped environment. Air-gapped environments require a manually installed + license. + docs: reference/cluster-config/#air-gapped-cluster + image: ./telepresence-2.3.5-skipLogin.png + - version: 2.3.4 + date: "2021-07-09" + notes: + - type: bugfix + title: Improved IP log statements + body: >- + Some log statements were printing incorrect characters where they should have printed IP addresses. + This has been resolved to include more accurate and useful logging. + docs: reference/config/#log-levels + image: ./telepresence-2.3.4-ip-error.png + - type: bugfix + title: Improved messaging when multiple services match a workload + body: >- + If multiple services matched a workload when performing an intercept, Telepresence would crash. + It now gives the correct error message, instructing the user on how to specify which + service the intercept should use.
+ + image: ./telepresence-2.3.4-improved-error.png + docs: reference/intercepts + - type: bugfix + title: Traffic-manager creates services in its own namespace to determine subnet + body: >- + Telepresence will now determine the service subnet by creating a dummy service in its own + namespace, instead of the default namespace, which was causing RBAC permissions issues in + some clusters. + docs: reference/routing/#subnets + - type: bugfix + title: Telepresence connect respects pre-existing clusterrole + body: >- + When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the + cluster, Telepresence will no longer try to update the clusterrole. + docs: reference/rbac + - type: bugfix + title: Helm Chart fixed for clientRbac.namespaced + body: >- + The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true. + docs: install/helm + - version: 2.3.3 + date: "2021-07-07" + notes: + - type: feature + title: Traffic Manager Helm Chart + body: >- + Telepresence now supports installing the Traffic Manager via Helm. + This will make it easy for operators to install and configure the + server-side components of Telepresence separately from the CLI (which + in turn allows for better separation of permissions). + image: ./telepresence-2.3.3-helm.png + docs: install/helm/ + - type: feature + title: Traffic-manager in custom namespace + body: >- + As the traffic-manager can now be installed in any + namespace via Helm, Telepresence can now be configured to look for the + Traffic Manager in a namespace other than ambassador. + This can be configured on a per-cluster basis. + image: ./telepresence-2.3.3-namespace-config.png + docs: reference/config + - type: feature + title: Intercept --to-pod + body: >- + telepresence intercept now supports a + --to-pod flag that can be used to port-forward sidecars' + ports from an intercepted pod. + image: ./telepresence-2.3.3-to-pod.png + docs: reference/intercepts + - type: change + title: Change in migration from edgectl + body: >- + Telepresence no longer automatically shuts down the old + api_version=1 edgectl daemon. If migrating + from such an old version of edgectl, you must now manually + shut down the edgectl daemon before running Telepresence. + This was already the case when migrating from the newer + api_version=2 edgectl. + - type: bugfix + title: Fixed error during shutdown + body: >- + The root daemon no longer terminates when the user daemon disconnects + from its gRPC streams, and instead waits to be terminated by the CLI. + Previously, this could cause problems with things not being cleaned up correctly. + - type: bugfix + title: Intercepts will survive deletion of intercepted pod + body: >- + An intercept will survive deletion of the intercepted pod provided + that another pod is created (or already exists) that can take over. + - version: 2.3.2 + date: "2021-06-18" + notes: + # Headliners + - type: feature + title: Service Port Annotation + body: >- + The mutator webhook for injecting traffic-agents now + recognizes a + telepresence.getambassador.io/inject-service-port + annotation to specify which port to intercept, bringing the + functionality of the --port flag to users who + use the mutator webhook in order to control Telepresence via + GitOps.
+ + image: ./telepresence-2.3.2-svcport-annotation.png + docs: reference/cluster-config#service-port-annotation + - type: feature + title: Outbound Connections + body: >- + Outbound connections are now routed through the intercepted + Pods, which means that the connections originate from that + Pod from the cluster's perspective. This allows service + meshes to correctly identify the traffic. + docs: reference/routing/#outbound + - type: change + title: Inbound Connections + body: >- + Inbound connections from an intercepted agent are now + tunneled to the manager over the existing gRPC connection, + instead of establishing a new connection to the manager for + each inbound connection. This avoids interference from + certain service mesh configurations. + docs: reference/routing/#inbound + + # RBAC changes + - type: change + title: Traffic Manager needs new RBAC permissions + body: >- + The Traffic Manager requires RBAC + permissions to list Nodes, Pods, and to create a dummy + Service in the manager's namespace. + docs: reference/routing/#subnets + - type: change + title: Reduced developer RBAC requirements + body: >- + The on-laptop client no longer requires RBAC permissions to list the Nodes + in the cluster or to create Services, as that functionality + has been moved to the Traffic Manager. + + # Bugfixes + - type: bugfix + title: Able to detect subnets + body: >- + Telepresence will now detect the Pod CIDR ranges even if + they are not listed in the Nodes. + image: ./telepresence-2.3.2-subnets.png + docs: reference/routing/#subnets + - type: bugfix + title: Dynamic IP ranges + body: >- + The list of cluster subnets that the virtual network + interface will route is now configured dynamically and will + follow changes in the cluster. + - type: bugfix + title: No duplicate subnets + body: >- + Subnets fully covered by other subnets are now pruned + internally and thus never superfluously added to the + laptop's routing table. + docs: reference/routing/#subnets + - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes + title: Change in default timeout + body: >- + The trafficManagerAPI timeout default has + changed from 5 seconds to 15 seconds, in order to facilitate + the extended time it takes for the traffic-manager to do its + initial discovery of cluster info as a result of the above + bugfixes. + - type: bugfix + title: Removal of DNS config files on macOS + body: >- + On macOS, files generated under + /etc/resolver/ as the result of using + include-suffixes in the cluster config are now + properly removed on quit. + docs: reference/routing/#macos-resolver + + - type: bugfix + title: Large file transfers + body: >- + Telepresence no longer erroneously terminates connections + early when sending a large HTTP response from an intercepted + service. + - type: bugfix + title: Race condition in shutdown + body: >- + When shutting down the user-daemon or root-daemon on the + laptop, telepresence quit and related commands + no longer return early before everything is fully shut down. + You can now count on all of the side effects on the laptop + having been cleaned up by the time the command returns. + - version: 2.3.1 + date: "2021-06-14" + notes: + - title: DNS Resolver Configuration + body: "Telepresence now supports custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included.
These can be configured on a per-cluster basis." + image: ./telepresence-2.3.1-dns.png + docs: reference/config + type: feature + - title: AlsoProxy Configuration + body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet." + image: ./telepresence-2.3.1-alsoProxy.png + docs: reference/config + type: feature + - title: Mutating Webhook for Injecting Traffic Agents + body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in Git. For workloads without the annotation, Telepresence will add the agent the way it has in the past." + image: ./telepresence-2.3.1-inject.png + docs: reference/rbac + type: feature + - title: Traffic Manager Connect Timeout + body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook." + image: ./telepresence-2.3.1-trafficmanagerconnect.png + docs: reference/config + type: change + - title: Fix for large file transfers + body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely." + image: ./telepresence-2.3.1-large-file-transfer.png + docs: reference/tun-device + type: bugfix + - title: Brew Formula Changed + body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence." + image: ./telepresence-2.3.1-brew.png + docs: install/ + type: change + - version: 2.3.0 + date: "2021-06-01" + notes: + - title: Brew install Telepresence + body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2." + image: ./telepresence-2.3.0-homebrew.png + docs: install/ + type: feature + - title: TCP and UDP routing via Virtual Network Interface + body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP." + image: ./tunnel.jpg + docs: reference/tun-device + type: feature + - title: SSH is no longer used + body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses SSH tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, which means that the traffic agent also runs without sshd.
A desired side effect of this is that the manager and agent containers no longer need a special user configuration." + image: ./no-ssh.png + docs: reference/tun-device/#no-ssh-required + type: change + - title: Running in a Docker container + body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously." + image: ./run-tp-in-docker.png + docs: reference/inside-container + type: feature + - title: Configurable Log Levels + body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log." + image: ./telepresence-2.3.0-loglevels.png + docs: reference/config/#log-levels + type: feature + - version: 2.2.2 + date: "2021-05-17" + notes: + - title: Legacy Telepresence subcommands + body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary. + image: ./telepresence-2.2.png + docs: install/migrate-from-legacy/ + type: feature diff --git a/docs/telepresence/2.9/troubleshooting/index.md b/docs/telepresence/2.9/troubleshooting/index.md new file mode 100644 index 000000000..918e107b0 --- /dev/null +++ b/docs/telepresence/2.9/troubleshooting/index.md @@ -0,0 +1,226 @@ +--- +title: "Telepresence Troubleshooting" +description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud." +--- +# Troubleshooting + + +## Creating an intercept did not generate a preview URL + +Preview URLs can only be created if Telepresence is [logged in to +Ambassador Cloud](../reference/client/login/). When not logged in, it +will not even try to create a preview URL (additionally, by default it +will intercept all traffic rather than just a subset of the traffic). +Remove the intercept with `telepresence leave [deployment name]`, run +`telepresence login` to log in to Ambassador Cloud, then recreate the +intercept. See the [intercepts how-to doc](../howtos/intercepts) for +more details. + +## Error on accessing preview URL: `First record does not look like a TLS handshake` + +The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings. + +## Error on accessing preview URL: Detected a 301 Redirect Loop + +If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted. + +## Connecting to a cluster via VPN doesn't work + +There are a few different issues that could arise when working with a VPN.
Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more about how to fix these. + + ## Connecting to a cluster hosted in a VM on the workstation doesn't work + + The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence. + Please check the [cluster in hosted VM](../howtos/cluster-in-vm) how-to for more details. + + ## Your GitHub organization isn't listed + + Ambassador Cloud needs access granted to your GitHub organization as a + third-party OAuth app. If an organization isn't listed during login, + then the correct access has not been granted. + + The quickest way to resolve this is to go to the **GitHub menu** → + **Settings** → **Applications** → **Authorized OAuth Apps** → + **Ambassador Labs**. An organization owner will have a **Grant** + button; anyone not an owner will have **Request**, which sends an email + to the owner. If an access request has been denied in the past, the + user will not see the **Request** button; they will have to reach out + to the owner. + + Once access is granted, log out of Ambassador Cloud and log back in; + you should see the GitHub organization listed. + + The organization owner can go to the **GitHub menu** → **Your + organizations** → **[org name]** → **Settings** → **Third-party + access** to see if Ambassador Labs has access already or authorize a + request for access (only owners will see **Settings** on the + organization page). Clicking the pencil icon will show the + permissions that were granted. + + GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). + + ### Granting or requesting access on initial login + + When using GitHub as your identity provider, the first time you log in + to Ambassador Cloud, GitHub will ask to authorize Ambassador Labs to + access your organizations and certain user data. + + Authorize Ambassador Labs form + + Any listed organization with a green check has already granted access + to Ambassador Labs (you still need to authorize to allow Ambassador + Labs to read your user data and organization membership). + + Any organization with a red "X" requires access to be granted to + Ambassador Labs. Owners of the organization will see a **Grant** + button. Anyone who is not an owner will see a **Request** button. + This will send an email to the organization owner requesting approval + to access the organization. If an access request has been denied in + the past, the user will not see the **Request** button; they will have + to reach out to the owner. + + Once approval is granted, you will have to log out of Ambassador Cloud + and then back in to select the organization. + + ## Volume mounts are not working on macOS + + It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation, because the required component `osxfuse` (now named `macfuse`) isn't open source and hence is no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps: + + 1. Remove old sshfs, macfuse, and osxfuse using `brew uninstall` + 2. `brew install --cask macfuse` + 3. `brew install gromgit/fuse/sshfs-mac` + 4.
`brew link --overwrite sshfs-mac` + + Now `sshfs -V` shows you the correct version, e.g.: + ``` + $ sshfs -V + SSHFS version 2.10 + FUSE library version: 2.9.9 + fuse: no mount point + ``` + + 5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to "Benjamin Fleischer" to execute a kernel extension (a pop-up appears that takes you to the system preferences). + 6. Approve the needed permission + 7. Reboot your computer. + + ## Authorization for preview URLs + Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud. + + You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj) + or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/). + + It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work. + + ## Distributed tracing + + Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment. + As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing. + In order to facilitate such investigations, Telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/). + Tracing is controlled via a `grpcPort` setting under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object: + + ```yaml + tracing: {} + ``` + + If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data. + To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after: + + ```console + $ telepresence gather-traces + ``` + + This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory: + + ```console + $ file traces.gz + traces.gz: gzip compressed data, original size modulo 2^32 158255 + ``` + + Please do not try to open or uncompress this file, as it contains binary trace data. + Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion: + + ```console + $ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT + ``` + + Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable): + + ![Jaeger Interface](../images/tracing.png) + + **Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of e.g.
Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors). + **Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10 MB of trace data are available for export. + + ## No Sidecar Injected in GKE private clusters + + An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods: + + ```bash + $ kubectl get pod + NAME READY STATUS RESTARTS AGE + echo-easy-7f6d54cff8-rz44k 1/1 Running 0 5m5s + + $ telepresence intercept echo-easy -p 8080 + telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive + $ kubectl get pod + NAME READY STATUS RESTARTS AGE + echo-easy-d8dc4cc7c-27567 1/1 Running 0 2m9s + + # Notice how the pod still reports 1/1 containers ready: no traffic-agent sidecar was added. + ``` + + If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the + Traffic Manager's webhook injector from the API server. + To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods, + or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port + using the Helm chart value `agentInjector.webhook.port`. + Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for information to resolve this. + + ## Injected init-container doesn't function properly + + The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the + traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges + (`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors, + such as Istio or Linkerd. + + Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring + that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid + name will do: + ```yaml + apiVersion: v1 + kind: Pod + metadata: + ... + spec: + ... + containers: + - ... + ports: + - name: http + containerPort: 8080 + --- + apiVersion: v1 + kind: Service + metadata: + ... + spec: + ... + ports: + - port: 80 + targetPort: http + ``` + + Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead, + it will do the following during the injection of the traffic-agent: + + 1. Rename the designated container's port by prefixing it (i.e., containerPort: http becomes containerPort: tm-http). + 2. Let the container port of our injected traffic-agent use the original name (i.e., containerPort: http). + + Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's + `containerPort`. + + ### Important note + If the service is "headless" (using `clusterIP: None`), then using named ports won't help, because the `targetPort` will + not get remapped. A headless service will always require the init-container.
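For illustration, here is a minimal sketch (with hypothetical names) of such a headless service; Telepresence would still inject the init-container for the workload behind it, no matter how its ports are declared:

```yaml
# Headless service: clusterIP is None, so Kubernetes performs no
# targetPort remapping and Telepresence must inject the init-container.
apiVersion: v1
kind: Service
metadata:
  name: echo-headless   # hypothetical service name
spec:
  clusterIP: None
  selector:
    app: echo-headless
  ports:
  - port: 8080
    targetPort: 8080    # even a named targetPort would not help here
```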
+ + ## `too many files open` error when running `telepresence connect` on Linux + + If `telepresence connect` on Linux fails with the message `too many files open` in the logs, check whether `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457). diff --git a/docs/telepresence/2.9/versions.yml b/docs/telepresence/2.9/versions.yml new file mode 100644 index 000000000..b2d9d1742 --- /dev/null +++ b/docs/telepresence/2.9/versions.yml @@ -0,0 +1,5 @@ +version: "2.9.5" +dlVersion: "latest" +docsVersion: "2.9" +branch: release/v2 +productName: "Telepresence" diff --git a/docs/telepresence/latest b/docs/telepresence/latest deleted file mode 120000 index 4171330a2..000000000 --- a/docs/telepresence/latest +++ /dev/null @@ -1 +0,0 @@ -../../../docs/telepresence/v2.15 \ No newline at end of file diff --git a/docs/telepresence/latest/ci/github-actions.md b/docs/telepresence/latest/ci/github-actions.md new file mode 100644 index 000000000..810a2d239 --- /dev/null +++ b/docs/telepresence/latest/ci/github-actions.md @@ -0,0 +1,176 @@ +--- +title: GitHub Actions for Telepresence +description: "Learn more about GitHub Actions for Telepresence and how to integrate them in your processes to run tests for your own environments and improve your CI/CD pipeline." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Telepresence with GitHub Actions + +Telepresence combined with [GitHub Actions](https://docs.github.com/en/actions) allows you to run integration tests in your continuous integration/continuous delivery (CI/CD) pipeline without the need to run any dependent services. When you connect to the target Kubernetes cluster, you can intercept traffic of the remote services and send it to an instance of the local service running in CI. This way, you can quickly test the bugfixes, updates, and features that you develop in your project. + + You can [register here](https://app.getambassador.io/auth/realms/production/protocol/openid-connect/auth?client_id=telepresence-github-actions&response_type=code&code_challenge=qhXI67CwarbmH-pqjDIV1ZE6kqggBKvGfs69cxst43w&code_challenge_method=S256&redirect_uri=https://app.getambassador.io) to get a free Ambassador Cloud account to try the GitHub Actions for Telepresence yourself. + + ## GitHub Actions for Telepresence + + Ambassador Labs has created a set of GitHub Actions for Telepresence that enable you to run integration tests in your CI pipeline against any existing remote cluster. The GitHub Actions for Telepresence are the following: + + - **configure**: Initial configuration setup for Telepresence that is needed to run the actions successfully. + - **install**: Installs Telepresence in your CI server, either the latest version or the one you specify. + - **login**: Logs in to Telepresence so you can create a [personal intercept](/docs/telepresence/latest/concepts/intercepts/#personal-intercept). You'll need a Telepresence API key and must set it as an environment variable in your workflow. See the [acquiring an API key guide](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key) for instructions on how to get one. + - **connect**: Connects to the remote target environment.
+ - **intercept**: Redirects traffic destined for the remote service to the version of the service running in CI so you can run integration tests. + + Each action contains a post-action script to clean up resources. This includes logging out of Telepresence, closing the connection to the remote cluster, and stopping the intercept process. These post scripts are executed automatically, regardless of the job result. This way, you don't have to worry about terminating the session yourself. You can look at the [GitHub Actions for Telepresence repository](https://github.com/datawire/telepresence-actions) for more information. + + # Using Telepresence in your GitHub Actions CI pipeline + + ## Prerequisites + + To enable GitHub Actions with Telepresence, you need: + + * A [Telepresence API key](/docs/telepresence/latest/reference/client/login/#acquiring-an-api-key), set as an environment variable in your workflow. + * Access to your remote Kubernetes cluster, like a `kubeconfig.yaml` file with the information to connect to the cluster. + * If your remote cluster already has Telepresence installed, you need to know whether Telepresence is installed [Cluster wide](/docs/telepresence/latest/reference/rbac/#cluster-wide-telepresence-user-access) or [Namespace only](/docs/telepresence/latest/reference/rbac/#namespace-only-telepresence-user-access). If Telepresence is configured for namespace only, verify that your `kubeconfig.yaml` is configured to find the installation of the Traffic Manager. For example: + + ```yaml + apiVersion: v1 + clusters: + - cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: traffic-manager-namespace + name: example-cluster + ``` + + * If Telepresence is installed, you also need to know the version of Telepresence running in the cluster. You can run the command `kubectl describe service traffic-manager -n namespace`. The version is listed in the `labels` section of the output. + * You need a GitHub Actions secret named `TELEPRESENCE_API_KEY` in your repository that holds your Telepresence API key. See the [GitHub docs](https://docs.github.com/en/github-ae@latest/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for instructions on how to create GitHub Actions secrets. + * You need a GitHub Actions secret named `KUBECONFIG_FILE` in your repository with the content of your `kubeconfig.yaml`. + + **Does your environment look different?** We're actively working on making GitHub Actions for Telepresence more useful for more environments and workflows.
+ + +
+ + ## Initial configuration setup + + To be able to use the GitHub Actions for Telepresence, you need to do an initial setup to [configure Telepresence](../../reference/config/) so the repository is able to run your workflow. To complete the Telepresence setup: + + + This action only supports Ubuntu runners at the moment. + + 1. In your main branch, create a `.github/workflows` directory in your GitHub repository if it does not already exist. + 1. Next, in the `.github/workflows` directory, create a new YAML file named `configure-telepresence.yaml`: + + ```yaml + name: Configuring telepresence + on: workflow_dispatch + jobs: + configuring: + name: Configure telepresence + runs-on: ubuntu-latest + env: + TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }} + steps: + - name: Checkout + uses: actions/checkout@v3 + #---- here run your custom command to connect to your cluster + #- name: Connect to cluster + # shell: bash + # run: ./connect-to-cluster + #---- + - name: Configuring Telepresence + uses: datawire/telepresence-actions/configure@v1.0-rc + with: + version: latest + ``` + + 1. Push the `configure-telepresence.yaml` file to your repository. + 1. Run the `Configuring Telepresence Workflow` [manually](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow) in your repository's Actions tab. + + When the workflow runs, the action caches Telepresence's configuration directory and, if you provide one, your Telepresence configuration file. This configuration file should be placed at `/.github/telepresence-config/config.yml` with your own [Telepresence config](../../reference/config/). If you update this file with a new configuration, you must run the `Configuring Telepresence Workflow` action manually on your main branch so your workflow detects the new configuration. + + + When you create a branch, do not remove the `/.github/telepresence-config/config.yml` file. This is required for the Telepresence GitHub Actions to run properly when there is a new push to the branch in your repository. + + + ## Using Telepresence in your GitHub Actions workflows + + 1. In the `.github/workflows` directory, create a new YAML file named `run-integration-tests.yaml` and modify placeholders with real actions to run your service and perform integration tests.
+ + ```yaml + name: Run Integration Tests + on: + push: + branches-ignore: + - 'main' + jobs: + my-job: + name: Run Integration Test using Remote Cluster + runs-on: ubuntu-latest + env: + TELEPRESENCE_API_KEY: ${{ secrets.TELEPRESENCE_API_KEY }} + KUBECONFIG_FILE: ${{ secrets.KUBECONFIG_FILE }} + KUBECONFIG: /opt/kubeconfig + steps: + - name: Checkout + uses: actions/checkout@v3 + with: + ref: ${{ github.event.pull_request.head.sha }} + #---- here run your custom command to run your service + #- name: Run your service to test + # shell: bash + # run: ./run_local_service + #---- + # Make the kubeconfig from the secret available to Telepresence + - name: Create kubeconfig file + run: | + cat <<EOF > /opt/kubeconfig + ${{ env.KUBECONFIG_FILE }} + EOF + - name: Install Telepresence + uses: datawire/telepresence-actions/install@v1.0-rc + with: + version: 2.5.8 # Change the version number here according to the version of Telepresence in your cluster or omit this parameter to install the latest version + - name: Telepresence connect + uses: datawire/telepresence-actions/connect@v1.0-rc + - name: Login + uses: datawire/telepresence-actions/login@v1.0-rc + with: + telepresence_api_key: ${{ secrets.TELEPRESENCE_API_KEY }} + - name: Intercept the service + uses: datawire/telepresence-actions/intercept@v1.0-rc + with: + service_name: service-name + service_port: 8081:8080 + namespace: namespacename-of-your-service + http_header: "x-telepresence-intercept-id=service-intercepted" + print_logs: true # Flag to instruct the action to print out Telepresence logs and export an artifact with them + #---- here run your custom command + #- name: Run integration tests + # shell: bash + # run: ./run_integration_test + #---- + ``` + + The previous example is a workflow that: + + * Checks out the repository code. + * Has a placeholder step to run the service during CI. + * Creates the `/opt/kubeconfig` file with the contents of the `secrets.KUBECONFIG_FILE` to make it available for Telepresence. + * Installs Telepresence. + * Runs Telepresence Connect. + * Logs in to Telepresence. + * Intercepts traffic to the service running in the remote cluster. + * Has a placeholder for an action that would run integration tests, such as one that makes HTTP requests to your running service and verifies it works while dependent services run in the remote cluster. + + This workflow gives you the ability to run integration tests during the CI run against an ephemeral instance of your service to verify that any change pushed to the working branch works as expected. After you push the changes, the CI server will run the integration tests against the intercept. You can view the results on your GitHub repository, under the "Actions" tab. diff --git a/docs/telepresence/latest/ci/pod-daemon.md b/docs/telepresence/latest/ci/pod-daemon.md new file mode 100644 index 000000000..9342a2d86 --- /dev/null +++ b/docs/telepresence/latest/ci/pod-daemon.md @@ -0,0 +1,202 @@ +--- +title: Pod Daemon +description: "Pod Daemon and how to integrate it in your processes to run tests for your own environments and improve your CI/CD pipeline." +--- + +# Telepresence with Pod Daemon + + +The Pod Daemon facilitates the execution of Telepresence by running it in a Pod as a sidecar to your application. This becomes particularly beneficial when you intend to incorporate Deployment Previews into your pipeline. Essentially, the pod-daemon is a Telepresence instance running in a pod, rather than operating on a developer's laptop.
+ This presents a compelling solution for developers who wish to share a live iteration of their work within the organization. A preview URL can be produced, which links directly to the image created during the Continuous Integration (CI) process. This preview URL can then be appended to the pull request, streamlining the code review process and enabling real-time project sharing within the team. + + ## Overview + + The Pod Daemon functions as an optimized version of Telepresence, undertaking all preliminary configuration tasks (such as login and daemon startup), and additionally executing the intercept. + + The initial setup phase involves deploying a service account with the minimal permissions necessary for running Telepresence, coupled with a secret that holds the API key essential for executing a Telepresence login. + + Following this setup, your main responsibility consists of deploying your working application, which incorporates a pod daemon operating as a sidecar. The parameters for the pod daemon require the relevant details concerning your live application. As it starts, the pod daemon will intercept your live application and divert traffic towards your working application. This traffic redirection is based on your configured headers, which come into play each time the application is accessed.
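As a sketch of that first setup step, the API-key secret used in the Usage section below (namespace `default`, key `AMBASSADOR_CLOUD_APIKEY`) could also be created imperatively instead of from a manifest:

```console
$ kubectl create secret generic deployment-preview-apikey \
    --namespace default \
    --from-literal=AMBASSADOR_CLOUD_APIKEY=<your-api-key>
```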

+ +

+ +## Usage + +To commence the setup, it's necessary to deploy both a service account and a secret. Here's how to go about it: + +1. Establish a connection to your cluster and proceed to deploy this within the namespace of your live application (default in this case). + + ```yaml + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: ambassador-deploy-previews + namespace: default + labels: + app.kubernetes.io/name: ambassador-deploy-previews + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: ambassador-deploy-previews + namespace: default + labels: + app.kubernetes.io/name: ambassador-deploy-previews + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: ambassador-deploy-previews + labels: + app.kubernetes.io/name: ambassador-deploy-previews + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ambassador-deploy-previews + subjects: + - name: ambassador-deploy-previews + namespace: default + kind: ServiceAccount + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + labels: + rbac.getambassador.io/role-group: ambassador-deploy-previews + name: ambassador-deploy-previews + rules: + - apiGroups: [ "" ] + verbs: [ "get", "list", "watch", "create", "delete" ] + resources: + - namespaces + - pods + - pods/log + - pods/portforward + - services + - secrets + - configmaps + - endpoints + - nodes + - deployments + - serviceaccounts + + - apiGroups: [ "apps", "rbac.authorization.k8s.io", "admissionregistration.k8s.io" ] + verbs: [ "get", "list", "create", "update", "watch" ] + resources: + - deployments + - statefulsets + - clusterrolebindings + - rolebindings + - clusterroles + - replicasets + - roles + - serviceaccounts + - mutatingwebhookconfigurations + + - apiGroups: [ "getambassador.io" ] + verbs: [ "get", "list", "watch" ] + resources: [ "*" ] + + - apiGroups: [ "getambassador.io" ] + verbs: [ "update" ] + resources: [ "mappings/status" ] + + - apiGroups: [ "networking.x-k8s.io" ] + verbs: [ "get", "list", "watch" ] + resources: [ "*" ] + + - apiGroups: [ "networking.internal.knative.dev" ] + verbs: [ "get", "list", "watch" ] + resources: [ "ingresses", "clusteringresses" ] + + - apiGroups: [ "networking.internal.knative.dev" ] + verbs: [ "update" ] + resources: [ "ingresses/status", "clusteringresses/status" ] + + - apiGroups: [ "extensions", "networking.k8s.io" ] + verbs: [ "get", "list", "watch" ] + resources: [ "ingresses", "ingressclasses" ] + + - apiGroups: [ "extensions", "networking.k8s.io" ] + verbs: [ "update" ] + resources: [ "ingresses/status" ] + --- + apiVersion: v1 + kind: Secret + metadata: + name: deployment-preview-apikey + namespace: default + type: Opaque + stringData: + AMBASSADOR_CLOUD_APIKEY: "{YOUR_API_KEY}" + + ``` + +2. Following this, you will need to deploy the iteration image together with the pod daemon, serving as a sidecar. In order to utilize the pod-daemon command, the environmental variable `IS_POD_DAEMON` must be set to `True`. This setting is a prerequisite for activating the pod-daemon functionality. + + ```yaml + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: quote-ci + spec: + selector: + matchLabels: + run: quote-ci + replicas: 1 + template: + metadata: + labels: + run: quote-ci + spec: + serviceAccountName: ambassador-deploy-previews + containers: + # Include your application container + # - name: your-original-application + # image: image-built-from-pull-request + # [...] 
# Inject the pod-daemon container + # In the following example, we'll demonstrate how to integrate the pod-daemon container by intercepting the quote app + - name: pod-daemon + image: datawire/telepresence:$version$ + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 + resources: + limits: + cpu: "0.1" + memory: 100Mi + args: + - pod-daemon + - --workload-name=quote + - --workload-namespace=default + - --workload-kind=Deployment + - --port=8080 + - --http-header=test-telepresence=1 # Custom header can be specified + - --ingress-tls=false + - --ingress-port=80 + - --ingress-host=quote.default.svc.cluster.local + - --ingress-l5host=quote.default.svc.cluster.local + env: + - name: AMBASSADOR_CLOUD_APIKEY + valueFrom: + secretKeyRef: + name: deployment-preview-apikey + key: AMBASSADOR_CLOUD_APIKEY + - name: TELEPRESENCE_MANAGER_NAMESPACE + value: ambassador + - name: IS_POD_DAEMON + value: "True" + ``` + + 3. The preview URL can be located within the logs of the pod daemon: + + ```bash + kubectl logs -f quote-ci-6dcc864445-x98wt -c pod-daemon + ``` \ No newline at end of file diff --git a/docs/telepresence/latest/community.md b/docs/telepresence/latest/community.md new file mode 100644 index 000000000..922457c9d --- /dev/null +++ b/docs/telepresence/latest/community.md @@ -0,0 +1,12 @@ +# Community + + ## Contributor's guide + Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md) + on GitHub to learn how you can help make Telepresence better. + + ## Changelog + Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md) + describes new features, bug fixes, and updates to each version of Telepresence. + + ## Meetings + Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers. diff --git a/docs/telepresence/latest/concepts/context-prop.md b/docs/telepresence/latest/concepts/context-prop.md new file mode 100644 index 000000000..46993af06 --- /dev/null +++ b/docs/telepresence/latest/concepts/context-prop.md @@ -0,0 +1,39 @@ +# Context propagation + + **Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination. + + This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers; context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them. + + Metadata propagation depends on every service and other piece of middleware passing the headers along rather than stripping them away. This is what allows the injected contexts to move between downstream services and processes. + + + ## What is distributed tracing? + + Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging. + + In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
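To make "injection" and "propagation" concrete before looking at specific tools, here is a minimal, hypothetical sketch using the W3C `traceparent` header (the header value is the example from the specification):

```console
# A proxy or monitoring agent injects the context into the incoming request:
$ curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" http://frontend/

# Propagation means the frontend forwards the same header, unchanged,
# on every request it makes to its own downstream services:
#   GET /api HTTP/1.1
#   Host: backend
#   traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```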
An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request's entire path from origin to destination to reply, gathering data on the routes the requests follow and on performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which facilitates maintaining the headers through every service without being stripped (the propagation). + + + ## What are intercepts and preview URLs? + + [Intercepts](../../reference/intercepts) and [preview + URLs](../../howtos/preview-urls/) are functions of Telepresence that + enable easy local development from a remote Kubernetes cluster and + offer a preview environment for sharing and real-time collaboration. + + Telepresence uses custom HTTP headers and header propagation to + identify which traffic to intercept, both for plain personal intercepts + and for personal intercepts with preview URLs. These techniques are + more commonly used for distributed tracing, so this is a somewhat + unorthodox use for them, but the mechanisms are + already widely deployed because of the prevalence of tracing. The + headers facilitate the smart routing of requests either to live + services in the cluster or to services running locally on a developer's + machine. The intercepted traffic can be further limited by using path-based + routing. + + Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept. + + If you need to implement context propagation in your environment to support personal intercepts in services deeper in the stack, we [offer a guide to doing so](https://github.com/ambassadorlabs/telepresence-header-propagation) with the lowest complexity and effort possible. diff --git a/docs/telepresence/latest/concepts/devloop.md b/docs/telepresence/latest/concepts/devloop.md new file mode 100644 index 000000000..86aac87e2 --- /dev/null +++ b/docs/telepresence/latest/concepts/devloop.md @@ -0,0 +1,54 @@ +--- +title: "The developer and the inner dev loop | Ambassador" +--- + +# The developer experience and the inner dev loop + + ## How is the developer experience changing? + + The developer experience is the workflow a developer uses to develop, test, deploy, and release software. + + Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered. + + The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results.
The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a [progressive delivery](/docs/argo/latest/concepts/cicd/) strategy relying on automated canaries, all aimed at making the outer loop as fast, efficient, and automated as possible.

Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.

Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop. Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds real development time to every iteration.

## What is the inner dev loop?

The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.

Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.

In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building (i.e. compiling/deploying/reloading), 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.

![traditional inner dev loop](../images/trad-inner-dev-loop.png)

## In search of lost time: How does containerization change the inner dev loop?

The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.

Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:

* packaging code in containers
* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
* pushing the container to the registry
* deploying containers in Kubernetes

Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build step grows to 5 minutes — not atypical with a standard container build, registry upload, and deploy — the full loop stretches to roughly 9 minutes, and the number of possible development iterations per day drops to ~40 (360 minutes divided by a ~9-minute loop). Going from ~70 iterations down to ~40 is more than a 40% decrease in potential new features being released.
This new container build step is a hidden tax, which is quite expensive. + + +![container inner dev loop](../images/container-inner-dev-loop.png) + +## Tackling the slow inner dev loop + +A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall. + +For example: + +* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment. +* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback. + +New technologies and tools can facilitate cloud-native, containerized development. And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again. diff --git a/docs/telepresence/latest/concepts/devworkflow.md b/docs/telepresence/latest/concepts/devworkflow.md new file mode 100644 index 000000000..fa24fc2bd --- /dev/null +++ b/docs/telepresence/latest/concepts/devworkflow.md @@ -0,0 +1,7 @@ +# The changing development workflow + +A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development. + +However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, for example, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to. + +In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead. 
diff --git a/docs/telepresence/latest/concepts/faster.md b/docs/telepresence/latest/concepts/faster.md new file mode 100644 index 000000000..3950dce38 --- /dev/null +++ b/docs/telepresence/latest/concepts/faster.md @@ -0,0 +1,28 @@
---
title: "Making the remote local: Faster feedback, collaboration and debugging | Ambassador"
---
# Making the remote local: Faster feedback, collaboration and debugging

With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.

## How should I set up a Kubernetes development environment?

[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.

While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.

A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop. With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.

## What is Telepresence?

Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.

## How can I get fast, efficient local development?

The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.

A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.

Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster.
This pod proxies data from the Kubernetes environment (e.g., TCP/UDP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.

The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs. diff --git a/docs/telepresence/latest/concepts/goldenpaths.md b/docs/telepresence/latest/concepts/goldenpaths.md new file mode 100644 index 000000000..15f084224 --- /dev/null +++ b/docs/telepresence/latest/concepts/goldenpaths.md @@ -0,0 +1,9 @@
# Golden Paths

A golden path is a best practice or a standardized process for using Telepresence, often used to optimize productivity or quality control. It can be used as a benchmark or a reference point for measuring success and progress towards a particular goal or outcome.

We have provided Golden Paths for the use cases listed below.

1. [Intercept Specifications](../goldenpaths/specs)
2. [Using Telepresence with Docker](../goldenpaths/docker)
3. [Docker Compose](../goldenpaths/compose) \ No newline at end of file diff --git a/docs/telepresence/latest/concepts/goldenpaths/compose.md b/docs/telepresence/latest/concepts/goldenpaths/compose.md new file mode 100644 index 000000000..e3a6db407 --- /dev/null +++ b/docs/telepresence/latest/concepts/goldenpaths/compose.md @@ -0,0 +1,63 @@
# Telepresence with Docker Compose Golden Path

## Why?

When adopting Telepresence, you may be hesitant to throw away all the investment you made replicating your infrastructure with [Docker Compose](https://docs.docker.com/compose/).

Thankfully, it doesn't have to be this way: you can pair the [Telepresence Intercept Specification](../specs) with [Docker mode](../docker) to integrate your existing Docker Compose file.

## How?
Telepresence Intercept Specifications are integrated with Docker Compose! Let's look at an example to see how it works.

Below is an example of an Intercept Spec and a Docker Compose file: the spec intercepts the traffic of an echo service using a custom header, and that traffic is handled by a service created through Docker Compose.

Intercept Spec:
```yaml
workloads:
  - name: echo
    intercepts:
      - handler: echo
        localport: 8080
        port: 80
        headers:
          - name: "{{ .Telepresence.Username }}"
            value: 1
handlers:
  - name: echo
    docker:
      compose:
        services:
          - name: echo
            behavior: interceptHandler
```

The Docker Compose file creates two services: a postgres database and your local echo service. The local echo service uses Docker's [watch](https://docs.docker.com/compose/file-watch/) feature to take advantage of hot reloads.

Docker Compose file:
```yaml
services:
  postgres:
    image: "postgres:14.1"
    ports:
      - "5432"
  echo:
    build: .
    ports:
      - "8080"
    x-develop:
      watch:
        - action: rebuild
          path: main.go
    environment:
      DATABASE_HOST: "localhost:5432"
      DATABASE_PASSWORD: postgres
      DEV_MODE: "true"
```

By combining Intercept Specifications and Docker Compose, you can intercept the traffic going to your cluster while developing on multiple local services and utilizing hot reloads.

## Key learnings

* Using **Docker Compose** with **Telepresence** allows you to have a **hybrid** development setup between local & remote.
* You can **reuse your existing setup** with minimum effort.
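
To try the combination, save the Intercept Specification above and start it the same way as any other spec (the file name `echo-spec.yaml` here is just an illustrative choice):

```cli
telepresence intercept run echo-spec.yaml
```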
diff --git a/docs/telepresence/latest/concepts/goldenpaths/docker.md b/docs/telepresence/latest/concepts/goldenpaths/docker.md new file mode 100644 index 000000000..863aa497a --- /dev/null +++ b/docs/telepresence/latest/concepts/goldenpaths/docker.md @@ -0,0 +1,70 @@
# Telepresence with Docker Golden Path

## Why?

It can be tedious to adopt Telepresence across your organization: in its handiest form, it requires admin access and needs to get along with any exotic networking setup that your company may have.

If Docker is already approved in your organization, consider this Golden Path.

## How?

When using Telepresence in Docker mode, users can eliminate the need for admin access on their machines, address several networking challenges, and forego the need for third-party applications to enable volume mounts.

You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container, removing the need for root access and making Telepresence easier to adopt across an organization.

Let's illustrate with a quick demo, assuming a default Kubernetes context named `default`, and a simple HTTP service:

```cli
$ telepresence connect --docker
Connected to context default (https://default.cluster.bakerstreet.io)

$ docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS          PORTS                        NAMES
7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"   18 seconds ago   Up 16 seconds   127.0.0.1:58802->58802/tcp   tp-default
```

This method limits the scope of the potential networking issues since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-<context>` when listing your containers.

Start an intercept:

```cli
$ telepresence intercept echo-easy --port 8080:80 -n default
Using Deployment echo-easy
   Intercept name         : echo-easy-default
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/x_/4x_4pfvx2j3_94f36x551g140000gp/T/telfs-505935483
   Intercepting           : HTTP requests with headers
         'x-telepresence-intercept-id: e20f0764-7fd8-45c1-b911-b2adeee1af45:echo-easy-default'
   Preview URL            : https://gracious-ishizaka-5365.preview.edgestack.me
   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
```

Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context>`, and open the preview URL to see the traffic routed to your machine.

```cli
$ docker run \
  --network=container:tp-default \
  -e PORT=8080 jmalloc/echo-server
Echo server listening on port 8080.
127.0.0.1:41500 | GET /
127.0.0.1:41512 | GET /favicon.ico
127.0.0.1:41500 | GET /
127.0.0.1:41512 | GET /favicon.ico
```

For users utilizing Docker mode in Telepresence, we strongly recommend using [Intercept Specifications](../specs) to seamlessly configure the Intercept Handler as a Docker container.

Be sure to also open the debugging port on the container so that a local debugger can attach from the IDE. By leveraging Intercept Specifications and Docker mode together, users can optimize their Telepresence experience and streamline their debugging workflow.

## Key learnings

* Using the Docker mode of Telepresence **does not require root access**, and makes it **easier** to adopt across your organization.
* It **limits the potential networking issues** you can encounter.
* It leverages **Docker** for your interceptor.
* You can use it with the [Intercept Specifications](../specs). diff --git a/docs/telepresence/latest/concepts/goldenpaths/specs.md b/docs/telepresence/latest/concepts/goldenpaths/specs.md new file mode 100644 index 000000000..0d8e5dc30 --- /dev/null +++ b/docs/telepresence/latest/concepts/goldenpaths/specs.md @@ -0,0 +1,80 @@
# Intercept Specification Golden Path

## Why?

Telepresence can be difficult to adopt organization-wide. Each developer has their own local setup and adds many variables to running Telepresence, duplicating work amongst developers.

For these reasons, and many others, we recommend using [Intercept Specifications](../../../reference/intercepts/specs).

## How?

When using an Intercept Specification you write a YAML file, similar to a CI workflow or a Docker Compose file. An Intercept Specification enables standardization amongst your developers.

With a spec you will be able to define the Kubernetes context to work in, the workload you want to intercept, the local intercept handler your traffic will be flowing to, and any pre/post requisites that are required to run your applications.

Let's look at an example:

I have a service `quote` running in the `default` namespace that I want to intercept to test changes I've made before opening a Pull Request.

I can use the Intercept Specification below to tell Telepresence to intercept the `quote` service with a [Personal Intercept](../../../reference/intercepts#personal-intercept), in the default namespace of my cluster `test-cluster`. I also want to start the Intercept Handler, as a Docker container, with the provided image.

```yaml
---
connection:
  context: test-cluster
workloads:
  - name: quote
    namespace: default
    intercepts:
      - headers:
          - name: test-{{ .Telepresence.Username }}
            value: "{{ .Telepresence.Username }}"
        localPort: 8080
        mountPoint: "false"
        port: 80
        handler: quote
        service: quote
        previewURL:
          enable: true
handlers:
  - name: quote
    environment:
      - name: PORT
        value: "8080"
    docker:
      image: docker.io/datawire/quote:0.5.0
```

You can then run this Intercept Specification with:

```cli
telepresence intercept run quote-spec.yaml
   Intercept name         : quote-default
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:8080
   Service Port Identifier: http
   Intercepting           : HTTP requests with headers
         'test-user =~ user'
   Preview URL            : https://charming-newton-3109.preview.edgestack.me
   Layer 5 Hostname       : quote.default.svc.cluster.local
Intercept spec "quote-spec" started successfully, use ctrl-c to cancel.
2023/04/12 16:05:00 CONSUL_IP environment variable not found, continuing without Consul registration
2023/04/12 16:05:00 listening on :8080
```

You can see that the intercept was started, and if you check the local Docker containers you can see that the Telepresence daemon is running in a container and that the Intercept Handler was successfully started.

```cli
docker ps

CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS                        NAMES
bdd99d244fbb   datawire/quote:0.5.0           "/bin/qotm"              2 minutes ago   Up 2 minutes                                tp-quote
5966d7099adf   datawire/telepresence:2.12.1   "telepresence connec…"   2 minutes ago   Up 2 minutes   127.0.0.1:58443->58443/tcp   tp-test-cluster
```

## Key Learnings

* Using an Intercept Specification enables you to create a standardized approach for intercepts across your organization in an easily shared way.
* You can easily leverage Docker to remove other potential hiccups associated with networking.
* There are many more great things you can do with an Intercept Specification; check those out [here](../../../reference/intercepts/specs). \ No newline at end of file diff --git a/docs/telepresence/latest/concepts/intercepts.md b/docs/telepresence/latest/concepts/intercepts.md new file mode 100644 index 000000000..bf0bfd5b3 --- /dev/null +++ b/docs/telepresence/latest/concepts/intercepts.md @@ -0,0 +1,208 @@
---
title: "Types of intercepts"
description: "Short demonstration of personal vs global intercepts"
---

import React from 'react';

import Alert from '@material-ui/lab/Alert';
import AppBar from '@material-ui/core/AppBar';
import Paper from '@material-ui/core/Paper';
import Tab from '@material-ui/core/Tab';
import TabContext from '@material-ui/lab/TabContext';
import TabList from '@material-ui/lab/TabList';
import TabPanel from '@material-ui/lab/TabPanel';
import Animation from '@src/components/InterceptAnimation';

export function TabsContainer({ children, ...props }) {
  const [state, setState] = React.useState({curTab: "personal"});
  // Keep the selected tab in sync with the ?intercept= query parameter.
  React.useEffect(() => {
    const query = new URLSearchParams(window.location.search);
    var interceptType = query.get('intercept') || "personal";
    if (state.curTab != interceptType) {
      setState({curTab: interceptType});
    }
  }, [state, setState])
  // Write the selected tab back into the URL without reloading the page.
  var setURL = function(newTab) {
    history.replaceState(null,null,
      `?intercept=${newTab}${window.location.hash}`,
    );
  };
  return (
    <div class="TabGroup">
      <TabContext value={state.curTab}>
        <AppBar class="TabBar" elevation={0} position="static">
          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
            <Tab class="TabHead" value="regular" label="No intercept"/>
            <Tab class="TabHead" value="global" label="Global intercept"/>
            <Tab class="TabHead" value="personal" label="Personal intercept"/>
          </TabList>
        </AppBar>
        {children}
      </TabContext>
    </div>
  );
};

# Types of intercepts

# No intercept

This is the normal operation of your cluster without Telepresence.

# Global intercept

**Global intercepts** replace the Kubernetes "Orders" service with the
Orders service running on your laptop. The users see no change, but
with all the traffic coming to your laptop, you can observe and debug
with all your dev tools.

### Creating and using global intercepts

 1. Creating the intercept: Intercept your service from your CLI:

    ```shell
    telepresence intercept SERVICENAME --http-header=all
    ```

    Make sure your current kubectl context points to the target
    cluster. If your service is running in a different namespace than
    your current active context, use or change the `--namespace` flag.

 2. Using the intercept: Send requests to your service:

    All requests will be sent to the version of your service that is
    running in the local development environment.

# Personal intercept

**Personal intercepts** allow you to be selective and intercept only
some of the traffic to a service while not interfering with the rest
of the traffic. This allows you to share a cluster with others on your
team without interfering with their work.

Personal intercepts are subject to different plans.
To read more about their capabilities & limits, see the [subscription management page](../../../cloud/latest/subscriptions/howtos/manage-my-subscriptions).

In the illustration above, **orange**
requests are being made by Developer 2 on their laptop and the
**green** ones are made by a teammate,
Developer 1, on a different laptop.

Each developer can intercept the Orders service for their requests only,
while sharing the rest of the development environment.

### Creating and using personal intercepts

 1. Creating the intercept: Intercept your service from your CLI:

    ```shell
    telepresence intercept SERVICENAME --http-header=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
    ```

    We're using
    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
    header for the sake of the example, but you can use any
    `key=value` pair you want, or `--http-header=auto` to have it
    choose something automatically.

    Make sure your current kubectl context points to the target
    cluster. If your service is running in a different namespace than
    your current active context, use or change the `--namespace` flag.

 2. Using the intercept: Send requests to your service by passing the
    HTTP header:

    ```http
    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
    ```

    Need a browser extension to modify or remove HTTP request headers?

    Chrome
    {' '}
    Firefox

 3. Using the intercept: Send requests to your service without the
    HTTP header:

    Requests without the header will be sent to the version of your
    service that is running in the cluster. This enables you to share
    the cluster with a team!

### Intercepting a specific endpoint

It's not uncommon to have one service serving several endpoints. Telepresence is capable of limiting an
intercept to only affect the endpoints you want to work with by using one of the `--http-path-xxx`
flags below in addition to using `--http-header` flags. Only one such flag can be used in an intercept
and, contrary to the `--http-header` flag, it cannot be repeated.

The following flags are available:

| Flag | Meaning |
|-----------------------------|-------------------------------------------------------------------|
| `--http-path-equal <path>`  | Only intercept the endpoint for this exact path |
| `--http-path-prefix <path>` | Only intercept endpoints with a matching path prefix |
| `--http-path-regex <regex>` | Only intercept endpoints that match the given regular expression |

#### Examples:

1. A personal intercept using the header "Coder: Bob" limited to all endpoints that start with "/api":

   ```shell
   telepresence intercept SERVICENAME --http-path-prefix=/api --http-header=Coder=Bob
   ```

2. A personal intercept using the auto-generated header that applies only to the endpoint "/api/version":

   ```shell
   telepresence intercept SERVICENAME --http-path-equal=/api/version --http-header=auto
   ```
   or, since `--http-header=auto` is implied when using `--http` options, just:
   ```shell
   telepresence intercept SERVICENAME --http-path-equal=/api/version
   ```

3. A personal intercept using the auto-generated header limited to all endpoints matching the regular expression "(staging-)?api/.*":

   ```shell
   telepresence intercept SERVICENAME --http-path-regex='/(staging-)?api/.*'
   ```

diff --git a/docs/telepresence/latest/doc-links.yml b/docs/telepresence/latest/doc-links.yml new file mode 100644 index 000000000..2ae653691 --- /dev/null +++ b/docs/telepresence/latest/doc-links.yml @@ -0,0 +1,121 @@
- title: Quick start
  link: quick-start
- title: Install Telepresence
  items:
    - title: Install
      link: install/
    - title: Upgrade
      link: install/upgrade/
    - title: Install Traffic Manager
      link: install/manager/
    - title: Install Traffic Manager with Helm
      link: install/helm/
    - title: Cloud Provider Prerequisites
      link: install/cloud/
    - title: Migrate from legacy Telepresence
      link: install/migrate-from-legacy/
- title: Core concepts
  items:
    - title: The changing development workflow
      link: concepts/devworkflow
    - title: The developer experience and the inner dev loop
      link: concepts/devloop
    - title: "Making the remote local: Faster feedback, collaboration and debugging"
      link: concepts/faster
    - title: Context propagation
      link: concepts/context-prop
    - title: Types of intercepts
      link: concepts/intercepts
    - title: Golden Paths
      link: concepts/goldenpaths
      items:
        - title: Intercept Specifications
          link: concepts/goldenpaths/specs
        - title: Docker Mode
          link: concepts/goldenpaths/docker
        - title: Docker Compose integration
          link: concepts/goldenpaths/compose
- title: How do I...
  items:
    - title: Intercept a service in your own environment
      link: howtos/intercepts
    - title: Share dev environments with Personal Intercepts
      link: howtos/personal-intercepts
    - title: Share public previews with Preview URLs
      link: howtos/preview-urls
    - title: Proxy outbound traffic to my cluster
      link: howtos/outbound
    - title: Host a cluster in a local VM
      link: howtos/cluster-in-vm
    - title: Send requests to an intercepted service
      link: howtos/request
    - title: Package and share my intercepts
      link: howtos/package
- title: Telepresence with Docker
  items:
    - title: Telepresence for Docker Compose
      link: docker/compose
    - title: Telepresence for Docker Extension
      link: docker/extension
    - title: Telepresence in Docker Mode
      link: docker/cli
- title: Telepresence for CI
  items:
    - title: GitHub Actions
      link: ci/github-actions
    - title: Pod Daemons
      link: ci/pod-daemon
- title: Technical reference
  items:
    - title: Architecture
      link: reference/architecture
    - title: Client reference
      link: reference/client
      items:
        - title: login
          link: reference/client/login
    - title: Laptop-side configuration
      link: reference/config
    - title: Cluster-side configuration
      link: reference/cluster-config
    - title: Using Docker for intercepts
      link: reference/docker-run
    - title: Running Telepresence in a Docker container
      link: reference/inside-container
    - title: Environment variables
      link: reference/environment
    - title: Intercepts
      link: reference/intercepts/
      items:
        - title: Configure intercept using CLI
          link: reference/intercepts/cli
        - title: Configure intercept using specifications
          link: reference/intercepts/specs
        - title: Manually injecting the Traffic Agent
          link: reference/intercepts/manual-agent
    - title: Volume mounts
      link: reference/volume
    - title: RESTful API service
      link: reference/restapi
    - title: DNS resolution
      link: reference/dns
    - title: RBAC
      link: reference/rbac
    - title: Telepresence and VPNs
      link: reference/vpn
    - title: Networking through Virtual Network Interface
      link: reference/tun-device
    - title: Connection Routing
      link: reference/routing
    - title: Using Telepresence with Linkerd
      link: reference/linkerd
- title: FAQs
  link: faqs
- title: Troubleshooting
  link: troubleshooting
- title: Community
  link: community
- title: Release Notes
  link: release-notes
- title: Licenses
  link: licenses diff --git a/docs/telepresence/latest/docker/cli.md b/docs/telepresence/latest/docker/cli.md new file mode 100644 index 000000000..7b37ba2a8 --- /dev/null +++ b/docs/telepresence/latest/docker/cli.md @@ -0,0 +1,281 @@
---
title: "Telepresence in Docker Mode"
description: "Claim a remote demo cluster and learn about running Telepresence in Docker Mode, speeding up local development and debugging."
indexable: true
---

import { EmojivotoServicesList, DCPLink, Login, DemoClusterWarning } from "../../../../../src/components/Docs/Telepresence";
import Alert from '@material-ui/lab/Alert';
import Platform from '@src/components/Platform';

# Telepresence in Docker Mode
<div class="docs-article-toc">
<h3>Contents</h3>

* [What is Telepresence Docker Mode?](#what-is-telepresence-docker-mode)
* [Key Benefits](#key-benefits)
* [Prerequisites](#prerequisites)
* [1. Get a free remote cluster](#1-get-a-free-remote-cluster)
* [2. Try the Emojivoto application](#2-try-the-emojivoto-application)
* [3. Testing the fix in your local environment](#3-testing-the-fix-in-your-local-environment)
* [4. Download the demo cluster config file](#4-download-the-demo-cluster-config-file)
* [5. Enable Telepresence Docker mode](#5-enable-telepresence-docker-mode)
* [6. Set up your local development environment and make a global intercept](#6-set-up-your-local-development-environment-and-make-a-global-intercept)
* [7. Make a personal intercept](#7-make-a-personal-intercept)

</div>

Welcome to the quickstart guide for Telepresence Docker mode! In this hands-on tutorial, we will explore the powerful features of Telepresence and learn how to leverage Telepresence Docker mode to enhance local development and debugging workflows.

## What is Telepresence Docker Mode?

Telepresence Docker mode enables you to run a single service locally while seamlessly connecting it to a remote Kubernetes cluster. This allows developers to accelerate their development cycles by providing a fast and efficient way to iterate on code changes without requiring admin access on their machines.

## Key Benefits

When using Telepresence in Docker mode, you can enjoy the following benefits:

1. **Simplified Development Setup**: Eliminate the need for admin access on your local machine, making it easier to set up and configure your development environment.

2. **Efficient Networking**: Address common networking challenges by seamlessly connecting your locally running service to a remote Kubernetes cluster. This enables you to leverage the cluster's resources and dependencies while maintaining a productive local development experience.

3. **Enhanced Debugging**: Gain the ability to debug your service in its natural environment, directly from your local development environment. This eliminates the need for complex workarounds or third-party applications to enable volume mounts or access remote resources.

## Prerequisites

1. [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). Kubectl is the official Kubernetes command-line tool. You will use it regularly to interact with your cluster, whether deploying applications, inspecting resources, or debugging issues.

2. [Telepresence 2.13 or later](../../install). Telepresence is a command-line tool that lets you run a single service locally, while connecting that service to a remote Kubernetes cluster. You can use Telepresence to speed up local development and debugging.

3. [Docker Desktop](https://www.docker.com/get-started). Docker Desktop is a tool for building and sharing containerized applications and microservices. You'll use Docker Desktop to run a local development environment.

Now that we have a clear understanding of Telepresence Docker mode and its benefits, let's dive into the hands-on tutorial!

## 1. Get a free remote cluster

[Telepresence](/docs/telepresence/) connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster.

1. Log in to [Ambassador Cloud](https://app.getambassador.io/).
2. Go to the Service Catalog to see all the services deployed on your cluster.

   The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster.

## 2. Try the Emojivoto application

The remote cluster is running the Emojivoto application, which consists of four services. Test out the application:

1. Go to the Emojivoto app and vote for some emojis.

   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.

2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩!

   Congratulations! You've successfully accessed the Emojivoto application on your remote cluster.

## 3. Testing the fix in your local environment

We'll set up a development environment locally on your workstation. We'll then use [Telepresence](../../reference/inside-container/) to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container.

1. Download and run the image for the service locally:

   ```bash
   docker run -d --name ambassador-demo --pull always -p 8083:8083 -p 8080:8080 --rm -it datawire/demoemojivoto
   ```

   If you're using Docker Desktop on Windows, you may need to enable virtualization to run the container.
   Make sure that ports 8080 and 8083 are free. If the Docker engine is not running, the command will fail and you will see `docker: unknown server OS` in your terminal.
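
   If you want to double-check that the demo container came up before moving on, one way to do so with standard Docker flags (the container name matches the `--name` given above) is:

   ```bash
   docker ps --filter name=ambassador-demo
   ```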

   The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.

2. Now, stop the container by running the following command in your terminal:

   ```bash
   docker stop ambassador-demo
   ```

In this section of the quickstart, you ran the Emojivoto application locally. In the next section, you'll use Telepresence to connect your local development environment to the remote Kubernetes cluster.

## 4. Download the demo cluster config file

1. [Download your demo cluster config file](https://app.getambassador.io/cloud/demo-cluster-download-popup/config). This file contains the credentials you need to access your demo cluster.

2. Export the file's location to KUBECONFIG by running this command in your terminal:

   macOS / Linux:

   ```bash
   export KUBECONFIG=/path/to/kubeconfig.yaml
   ```

   Windows:

   ```bash
   SET KUBECONFIG=/path/to/kubeconfig.yaml
   ```

   You should now be able to run `kubectl` commands against your demo cluster.

3. Verify that you can access the cluster by listing the app's services:

   ```
   $ kubectl get services -n emojivoto
   NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
   emoji-svc        ClusterIP   10.43.131.84    <none>        8080/TCP,8801/TCP   3h12m
   voting-svc       ClusterIP   10.43.32.184    <none>        8080/TCP,8801/TCP   3h12m
   web-svc          ClusterIP   10.43.105.110   <none>        8080/TCP            3h12m
   web-app          ClusterIP   10.43.53.247    <none>        80/TCP              3h12m
   web-app-canary   ClusterIP   10.43.8.90      <none>        80/TCP              3h12m
   ```

## 5. Enable Telepresence Docker mode

You can simply add the `--docker` flag to any Telepresence command, and it will start your daemon in a container, removing the need for root access and making Telepresence easier to adopt across an organization.

1. Confirm that the Telepresence CLI is now installed; we expect to see that the daemons are not yet running: `telepresence status`

   ```
   $ telepresence status
   User Daemon: Not running
   Root Daemon: Not running
   Ambassador Cloud:
     Status      : Logged out
   Traffic Manager: Not connected
   Intercept Spec: Not running
   ```

   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.

2. Log in to Ambassador Cloud:

   ```
   $ telepresence login
   ```

3. Then, install the Helm chart and quit Telepresence:

   ```bash
   telepresence helm install
   telepresence quit -s
   ```

4. Finally, connect to the remote cluster using Docker mode:

   ```
   $ telepresence connect --docker
   Connected to context default (https://default.cluster.bakerstreet.io)
   ```

5. Verify that you are connected to the remote cluster by listing your Docker containers:

   ```
   $ docker ps
   CONTAINER ID   IMAGE                          COMMAND                  CREATED          STATUS
   7a0e01cab325   datawire/telepresence:2.12.1   "telepresence connec…"   18 seconds ago   Up 16 seconds
   ```

This method limits the scope of the potential networking issues since everything stays inside Docker. The Telepresence daemon can be found under the name `tp-<context>` when listing your containers.

## 6. Set up your local development environment and make a global intercept

Start your intercept handler (interceptor) by targeting the daemon container with `--network=container:tp-<context>`, and open the preview URL to see the traffic routed to your machine.

1. Run the Docker container locally, by running this command inside your local terminal. The image is the same as the one you ran in the previous step (step 1) but this time, you will run it with the `--network=container:tp-default` flag:

   ```bash
   docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
   ```

2. With Telepresence, you can create global intercepts that intercept all traffic going to a service in your cluster and route it to your local environment instead. Start a global intercept by running this command in your terminal:

   ```
   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
   Using Deployment web
      Intercept name         : web-emojivoto
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:8080
      Service Port Identifier: http
      Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
      Intercepting           : HTTP requests with headers
            'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
      Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
      Layer 5 Hostname       : edge-stack.ambassador
   ```

   Learn more about intercepts and how to use them.

## 7. Make a personal intercept

Personal intercepts allow you to be selective and intercept only some of the traffic to a service while not interfering with the rest of the traffic. This allows you to share a cluster with others on your team without interfering with their work.

1. First, connect with Telepresence in Docker mode again:

   ```
   $ telepresence connect --docker
   ```

2. Run the Docker container again:

   ```
   $ docker run -d --name ambassador-demo --pull always --network=container:tp-default --rm -it datawire/demoemojivoto
   ```

3. Create a personal intercept by running this command in your terminal:

   ```
   $ telepresence intercept web --docker --port 8080 --ingress-port 80 --ingress-host edge-stack.ambassador -n emojivoto --ingress-l5 edge-stack.ambassador --preview-url=true
   Using Deployment web
      Intercept name         : web-emojivoto
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:8080
      Service Port Identifier: http
      Volume Mount Point     : /var/folders/n5/rgwx1rvd40z3tt2v473h715c0000gp/T/telfs-2663656564
      Intercepting           : HTTP requests with headers
            'x-telepresence-intercept-id: 8ff55336-9127-43b7-8175-08c598699bdb:web-emojivoto'
      Preview URL            : https://unruffled-morse-4172.preview.edgestack.me
      Layer 5 Hostname       : edge-stack.ambassador
   ```

4. Open the preview URL to see the traffic routed to your machine.

5. To stop the intercept, run this command in your terminal:

   ```
   $ telepresence leave web-emojivoto
   ```
## What's Next?

You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
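
When you're done with the demo, you can stop the intercept and the daemon container with the same command used earlier in this guide:

```
$ telepresence quit -s
```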
\ No newline at end of file diff --git a/docs/telepresence/latest/docker/compose.md b/docs/telepresence/latest/docker/compose.md new file mode 100644 index 000000000..45de52d2e --- /dev/null +++ b/docs/telepresence/latest/docker/compose.md @@ -0,0 +1,117 @@
---
title: "Telepresence for Docker Compose"
description: "Learn about how to use Docker Compose with Telepresence"
indexable: true
---

# Telepresence for Docker Compose

The [Intercept Specification](../../reference/intercepts/specs) can contain an intercept handler that in turn references (or embeds) a [docker compose](../../reference/intercepts/specs#compose) specification. The docker compose services will then be used when handling the intercepted traffic.

The intended user for the docker compose integration runs an existing compose specification locally on a workstation but wants to try it out "in the cluster" by intercepting cluster services. This is challenging, because the cluster's network, the intercepted pod's environment and volume mounts, and which of the services in the compose file to actually redirect traffic to, are not known to docker compose. In fact, the environment and volume mounts are not known until the actual intercept is activated. Telepresence helps with all of this by using an ephemeral and modified copy of the compose file that it creates when the intercept starts. The modification steps are described below.

## Intended service behavior

The user starts by declaring how each service in the docker compose spec is intended to behave. These intentions can be declared directly in the Intercept spec, so that the docker compose spec is left untouched, or they can be added to the docker compose spec in the form of `x-telepresence` extensions. This is explained ([in detail](../../reference/intercepts/specs#service)) in the reference.

The intended behavior can be one of `interceptHandler`, `remote`, or `local`, where `local` is the default that applies to all services that have no intended behavior specified.

### The interceptHandler behavior

A compose service intended to have the `interceptHandler` behavior will:

- handle traffic from the intercepted pod
- remotely mount the volumes of the intercepted pod
- have access to the environment variables of the intercepted pod

This means that Telepresence will:

- modify the `network-mode` of the compose service so that it shares the network of the containerized Telepresence daemon.
- modify the `environment` of the service to include the environment variables exposed by the intercepted pod.
- create volumes that correspond to the volumes of the intercepted pod and replace volumes on the compose service that have overlapping targets.
- delete any networks from the service and instead attach those networks to the daemon.
- delete any exposed ports and instead expose them using the `telepresence` network.

A docker compose service that originally looked like this:

```yaml
services:
  echo:
    environment:
      - PORT=8088
      - MODE=local
    build: .
    ports:
      - "8088:8088"
    volumes:
      - local-secrets:/var/run/secrets/kubernetes.io/serviceaccount:ro
    networks:
      - green
```

when acting as an `interceptHandler` for the `echo-server` service, will instead look something like this:

```yaml
services:
  echo:
    build: .
    environment:
      - A_TELEPRESENCE_MOUNTS=/var/run/secrets/kubernetes.io/serviceaccount
      # ... other environment variables from the pod left out for brevity.
      - PORT=8088
      - MODE=local
    network_mode: container:tp-minikube
    volumes:
      - echo-server-0:/var/run/secrets/kubernetes.io/serviceaccount
```

and Telepresence will also have added this to the compose file:

```yaml
volumes:
  echo-server-0:
    name: echo-server-0
    driver: datawire/telemount:amd64
    driver_opts:
      container: echo-server
      dir: /var/run/secrets/kubernetes.io/serviceaccount
      host: 192.168.208.2
      port: "34439"
```

### The remote behavior

A compose service intended to have the `remote` behavior will no longer run locally. Telepresence
will instead:

- Remove the service from the docker compose spec.
- Reassign any `depends_on` for that service to what the service in turn `depends_on`.
- Inform the containerized Telepresence daemon about the `mapping` that was declared in the service intent (if any).

### The local behavior

A compose service intended to have the `local` behavior is more or less left untouched. If it has `depends_on` to a
service intended to have `remote` behavior, then those are swapped out for the `depends_on` in that service.

## Other modifications

### The telepresence network

The default network of the docker compose file will be replaced with the `telepresence` network. This network enables
port access on the local host.

```yaml
networks:
  default:
    name: telepresence
    external: true
  green:
    name: echo_green
```

### Auto-detection of watcher

Telepresence will check if the docker compose file contains a [watch](https://docs.docker.com/compose/file-watch/)
declaration for hot-deploy, and will start a `docker compose alpha watch` automatically when that is the case. This means that
a modified intercept handler is redeployed instantly, even though the code runs in a container, and the
changes will be visible using a preview URL. diff --git a/docs/telepresence/latest/docker/extension.md b/docs/telepresence/latest/docker/extension.md new file mode 100644 index 000000000..37da65b34 --- /dev/null +++ b/docs/telepresence/latest/docker/extension.md @@ -0,0 +1,68 @@
---
title: "Telepresence for Docker Extension"
description: "Learn about the Telepresence Docker Extension."
indexable: true
---
# Telepresence for Docker Extension

The [Telepresence Docker extension](../../../../../kubernetes-learning-center/telepresence-docker-extension/) is an extension that runs in Docker Desktop. This extension allows you to spin up a selection of your application and run the Telepresence daemons in that container. The Telepresence extension allows you to intercept a service and redirect cloud traffic to containers.

## Quick Start

This Quick Start guide will walk you through creating your first intercept in the Telepresence extension in Docker Desktop.

## Connect to Ambassador Cloud through the Telepresence Docker extension

 1. Click the Telepresence extension in Docker Desktop, then click **Get Started**.

 2. You'll be redirected to Ambassador Cloud for login, where you can authenticate with a **Docker**, Google, GitHub or GitLab account.

## Create an Intercept from a Kubernetes service

 1. Select the Kubernetes context you would like to connect to.

 2. Once Telepresence is connected to your cluster, you will see a list of services you can intercept. If you don't see the service you want to intercept, you may need to change namespaces in the dropdown menu.

 3. Click the **Intercept** button on the service you want to intercept. You will see a popup to help configure your intercept and its intercept handlers.

 4. Telepresence will start an intercept on the service and your local container on the designated port. You will then be redirected to a management page where you can view your active intercepts.

## Create an Intercept from an Intercept Specification

 1. Click the dropdown on the **Connect** button to activate the option to upload an intercept specification.

 2. Once your specification has been uploaded, the extension will process it and redirect you to the running intercepts page after it has been started.

 3. The intercept information now shows up in the Docker Telepresence extension. You can now [test your code](#test-your-code).

For more information on Intercept Specifications, see the Intercept Specification reference docs.

## Test your code

Now you can make your code changes in your preferred IDE. When you're finished, build a new container with your code changes and restart your intercept.

Click `view` next to your preview URL to open a browser tab and see the changes you've made in real time, or you can share the preview URL with teammates so they can review your work. \ No newline at end of file diff --git a/docs/telepresence/latest/faq-215-login.md b/docs/telepresence/latest/faq-215-login.md new file mode 100644 index 000000000..1a1fcab29 --- /dev/null +++ b/docs/telepresence/latest/faq-215-login.md @@ -0,0 +1,37 @@
---
description: "Learn about the account requirement in Telepresence 2.15."
---

# Telepresence v2.15 Account Requirement FAQ

There are some big changes in Telepresence 2.15, including the need for an Ambassador Cloud account. We address that change here and have a [more comprehensive FAQ](../faq-215) on the v2.15 release.

**Why do I need an Ambassador Cloud account to use Telepresence now?**

The new pricing model for Telepresence accompanies the new requirement to create an account. Previously we only required an account for team features.
We’re now focused on making Telepresence an extremely capable tool for individual developers, from your first connect. We have added new capabilities to the product, which are listed in the question below comparing the two versions.
Because of that, we now require an account to use any Telepresence feature. Creating an account on Ambassador Cloud is completely free, takes a few seconds, and can be done through accounts you already have, like GitHub and Google.

**Can I get the old experience of connecting and doing global intercepts without an Ambassador Cloud account?**

Yes! The [open source version of Telepresence](https://telepresence.io) can do Connect and Global Intercepts and is independent of Ambassador Labs’ account infrastructure.
You can install the open-source binaries from the [GitHub repository](https://github.com/telepresenceio/telepresence/releases).

**What do I get from Telepresence that I can't get from the open-source version?**

We distribute up-to-date Telepresence binaries through Homebrew and a Windows installer; open-source binaries must be downloaded from the GitHub repository.
Our Lite plan offers the same capabilities as open-source but with that added convenience, and is completely free. The Lite plan also includes the Docker Extension.
Our Developer plan adds features like Personal Intercepts, Intercept Specs, Docker Compose integration, and 8x5 support to help you use Telepresence effectively.

We believe the Lite plan offers the best experience for hobbyists, the Developer plan for individual developers using Telepresence professionally, and the open-source version for users who require (for compliance reasons) or prefer a fully open-source solution.

**What if I'm in an air-gapped environment and can't log in?**

Air-gapped environments are supported in the [Enterprise edition](https://www.getambassador.io/editions) of Telepresence. Please [contact our sales team](https://www.getambassador.io/contact-us).

export const metaData = [
  {name: "Telepresence OSS", path: "https://telepresence.io"},
  {name: "Telepresence Releases", path: "https://github.com/telepresenceio/telepresence/releases"},
  {name: "Telepresence Pricing", path: "https://www.getambassador.io/editions"},
  {name: "Contact Us", path: "https://www.getambassador.io/contact-us"},
] diff --git a/docs/telepresence/latest/faq-215.md b/docs/telepresence/latest/faq-215.md new file mode 100644 index 000000000..a5f83d044 --- /dev/null +++ b/docs/telepresence/latest/faq-215.md @@ -0,0 +1,50 @@
---
description: "Learn about the major changes in Telepresence v2.15."
---

# FAQ for v2.15

There are some big changes in Telepresence v2.15; read on to learn more about them.

**What are the main differences between v2.15 and v2.14?**

* In v2.15 we now require an Ambassador Cloud account to use most features of Telepresence. We offer [three plans](https://www.getambassador.io/editions) including a completely free tier.
* We have removed [Team Mode](../../2.14/concepts/modes#team-mode), and the default type of intercept is now a [Global Intercept](../concepts/intercepts), even when you are logged in.

**Why do I need an Ambassador Cloud account to use Telepresence now?**

The new pricing model for Telepresence accompanies the new requirement to create an account. Previously we only required an account for team features.
We’re now focused on making Telepresence an extremely capable tool for individual developers, from your first connect. We have added new capabilities to the product, which are listed in the question below comparing the two versions.
Because of that, we now require an account to use any Telepresence feature. Creating an account on Ambassador Cloud is completely free, takes a few seconds, and can be done through accounts you already have, like GitHub and Google.

**What do I get from Telepresence that I can't get from the open-source version?**

We distribute up-to-date Telepresence binaries through Homebrew and a Windows installer; open-source binaries must be downloaded from the GitHub repository.
Our Lite plan offers the same capabilities as open-source but with that added convenience, and is completely free. The Lite plan also includes the Docker Extension.
Our Developer plan adds features like Personal Intercepts, Intercept Specs, Docker Compose integration, and 8x5 support to help you use Telepresence effectively.

We believe the Lite plan offers the best experience for hobbyists, the Developer plan for individual developers using Telepresence professionally, and the [open-source version](https://telepresence.io) for users who require (for compliance reasons) or prefer a fully open-source solution.

**This feels like a push by Ambassador Labs to force people to the commercial version of Telepresence, what's up with that?**

One of the most common pieces of feedback we've received is how hard it’s been for users to tell which features of Telepresence were proprietary vs open-source, and which version they were using.
We've always made it more convenient to use the commercial version of Telepresence, but we want to make it clearer now what the benefits are and when you're using it.

**What is the future of the open-source version of Telepresence?**

Development on the open-source version remains active as it is the basis for the commercial version. We're regularly improving the client, the Traffic Manager, and other pieces of Telepresence open-source.
+In addition, we recently started the process of moving Telepresence from Sandbox to Incubating status in the CNCF. + +** Why are there limits on the Lite and Developer plans?** + +The limits on the Developer plan exist to prevent abuse of individual licenses. We believe they are above what an individual developer would use in a given month, but reach out to support, included in your Developer plan, if they are causing an issue for you. +The limits on the Lite plan exist because it is a free plan. + +** What if I'm in an air-gapped environment and can't log in?** + +Air-gapped environments are supported in the [Enterprise edition](https://www.getambassador.io/editions) of Telepresence. Please [contact our sales team](https://www.getambassador.io/contact-us). + +export const metaData = [ + {name: "Telepresence Pricing", path: "https://www.getambassador.io/editions"}, + {name: "Contact Us", path: "https://www.getambassador.io/contact-us"}, +] diff --git a/docs/telepresence/latest/faqs.md b/docs/telepresence/latest/faqs.md new file mode 100644 index 000000000..018658c5b --- /dev/null +++ b/docs/telepresence/latest/faqs.md @@ -0,0 +1,133 @@ +--- +description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster." +--- + +# FAQs + +For questions about the new account changes introduced in v2.15, please see our FAQ [specific to that topic](../faq-215). + +** Why Telepresence?** + +Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers. + +Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources. + +Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster. + +You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally. + +By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes. + +** What operating systems does Telepresence work on?** + +Telepresence currently works natively on macOS (Intel and Apple Silicon), Linux, and Windows. + +** What protocols can be intercepted by Telepresence?** + +Both TCP and UDP are supported for global intercepts. + +Personal intercepts require HTTP. All HTTP/1.1 and HTTP/2 protocols can be intercepted.
This includes: + +- REST +- JSON/XML over HTTP +- gRPC +- GraphQL + +If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it. + +** When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?** + +Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information. + +** When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?** + +Yes, please see [the volume mounts reference doc](../reference/volume/) for more information. + +** When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?** + +Yes. After you have successfully connected to your cluster via `telepresence connect`, you will be able to access any service in your cluster via its namespace-qualified DNS name. + +This means you can curl endpoints directly, e.g. `curl <service-name>.<namespace>:8080/mypath`. + +If you create an intercept for a service in a namespace, you will be able to use the service name directly. + +This means that if you `telepresence intercept <service-name> -n <namespace>`, you will be able to resolve just the `<service-name>` DNS record. + +You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name. + +** When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?** + +You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database. + +** What types of ingress does Telepresence support for the preview URL functionality?** + +The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups. + +During first use, Telepresence will make its best guess at discovering this information and prompt you to confirm or update it. + +** Why are my intercepts still reporting as active when they've been disconnected?** + + In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will get garbage collected after a period of time. + +** Why is my intercept associated with an "Unreported" cluster?** + + Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources. + +** Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?** + +Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn't need to have a publicly accessible IP address. + +The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+ +** Why does running Telepresence require sudo access for the local daemon unless it runs in a Docker container?** + +The local daemon needs sudo to create a VIF (Virtual Network Interface) for outbound routing and DNS. Root access is needed to do that unless the daemon runs in a Docker container. + +** What components get installed in the cluster when running Telepresence?** + +A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster. + +When running in `team` mode, a single `ambassador-agent` service is deployed in the `ambassador` namespace. It communicates with the cloud to keep your list of services up to date. + +A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected. + +** How can I remove all the Telepresence components installed within my cluster?** + +You can run the command `telepresence helm uninstall` to remove everything from the cluster, including the `traffic-manager` and the `ambassador-agent` services, and all the `traffic-agent` containers injected into each pod being intercepted. + +Also run `telepresence quit -s` to stop the local daemon running. + +** What language is Telepresence written in?** + +All components of Telepresence, on both the client side and the cluster side, are written in Go. + +** How does Telepresence connect and tunnel into the Kubernetes cluster?** + +The connection between your laptop and cluster is established by using +the `kubectl port-forward` machinery (though without actually spawning +a separate program) to establish a TLS-encrypted connection to the Telepresence +Traffic Manager in the cluster, and running Telepresence's custom VPN +protocol over that connection. + + + +** What identity providers are supported for authenticating to view a preview URL?** + +* GitHub +* GitLab +* Google + +More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those. + +** How do I disable desktop notifications from the Telepresence CLI?** + +Desktop notifications for the Telepresence CLI tool can be activated and deactivated from Ambassador Cloud. +Users can head over to their [Notifications](https://app.getambassador.io/cloud/settings/notifications) page to configure this feature. + +** Is Telepresence open source?** + +A large part of it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence). + +** How do I share my feedback on Telepresence?** + +Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts. diff --git a/docs/telepresence/latest/howtos/cluster-in-vm.md b/docs/telepresence/latest/howtos/cluster-in-vm.md new file mode 100644 index 000000000..4762344c9 --- /dev/null +++ b/docs/telepresence/latest/howtos/cluster-in-vm.md @@ -0,0 +1,192 @@ +--- +title: "Considerations for locally hosted clusters | Ambassador" +description: "Use Telepresence to intercept services in a cluster running in a hosted virtual machine."
+--- + +# Network considerations for locally hosted clusters + +## The problem +Telepresence creates a Virtual Network Interface ([VIF](../../reference/tun-device)) that maps the cluster's subnets to the host machine when it connects. If you're running Kubernetes locally (e.g., k3s, Minikube, Docker for Desktop), you may encounter network problems because the devices in the host are also accessible from the cluster's nodes. + +### Example: +A k3s cluster runs in a headless VirtualBox machine that uses a "host-only" network. This network will allow both host-to-guest and guest-to-host connections. In other words, the cluster will have access to the host's network and, while Telepresence is connected, also to its VIF. This means that from the cluster's perspective, there will now be more than one interface that maps the cluster's subnets; the ones already present in the cluster's nodes, and then the Telepresence VIF, mapping them again. + +Now, if a request arrives at Telepresence that is covered by a subnet mapped by the VIF, the request is routed to the cluster. If the cluster for some reason doesn't find a corresponding listener that can handle the request, it will eventually try the host network, and find the VIF. The VIF routes the request to the cluster and now the recursion is in motion. The final outcome of the request will likely be a timeout, but since the recursion is very resource intensive (a large number of very rapid connection requests), it will likely also adversely affect other connections. + +## Solution + +### Create a bridge network +A bridge network is a Link Layer (L2) device that forwards traffic between network segments. By creating a bridge network, you can bypass the host's network stack, which enables the Kubernetes cluster to connect directly to the same router as your host. + +To create a bridge network, you need to change the network settings of the guest running your cluster's node so that it connects directly to a physical network device on your host. The details of how to configure the bridge depend on what type of virtualization solution you're using. + +### Vagrant + Virtualbox + k3s example +Here's a sample `Vagrantfile` that will spin up a server node and two agent nodes in three headless instances using a bridged network. It also adds the configuration needed for the cluster to host a Docker registry (very handy in case you want to save bandwidth). The Kubernetes registry manifest must be applied using `kubectl apply -f registry.yaml` once the cluster is up and running. + +#### Vagrantfile +```ruby +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# bridge is the name of the host's default network device +$bridge = 'wlp5s0' + +# default_route should be the IP of the host's default route. +$default_route = '192.168.1.1' + +# nameserver must be the IP of an external DNS, such as 8.8.8.8 +$nameserver = '8.8.8.8' + +# server_name should also be added to the host's /etc/hosts file and point to the server_ip +# for easy access when pushing docker images +server_name = 'multi' + +# static IPs for the server and agents.
Those IPs must be on the default router's subnet +server_ip = '192.168.1.110' +agents = { + 'agent1' => '192.168.1.111', + 'agent2' => '192.168.1.112', +} + +# Extra parameters in INSTALL_K3S_EXEC variable because of +# K3s picking up the wrong interface when starting server and agent +# https://github.com/alexellis/k3sup/issues/306 +server_script = <<-SHELL + sudo -i + apk add curl + export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + echo "Sleeping for 5 seconds to wait for k3s to start" + sleep 5 + cp /var/lib/rancher/k3s/server/token /vagrant_shared + cp /etc/rancher/k3s/k3s.yaml /vagrant_shared + cp /etc/rancher/k3s/registries.yaml /vagrant_shared + SHELL + +agent_script = <<-SHELL + sudo -i + apk add curl + export K3S_TOKEN_FILE=/vagrant_shared/token + export K3S_URL=https://#{server_ip}:6443 + export INSTALL_K3S_EXEC="--flannel-iface=eth1" + mkdir -p /etc/rancher/k3s + cat <<-'EOF' > /etc/rancher/k3s/registries.yaml +mirrors: + "multi:5000": + endpoint: + - "http://#{server_ip}:5000" +EOF + curl -sfL https://get.k3s.io | sh - + SHELL + +def config_vm(name, ip, script, vm) + # The network_script has two objectives: + # 1. Ensure that the guest's default route is the bridged network (bypass the network of the host) + # 2. Ensure that the DNS points to an external DNS service, as opposed to the DNS of the host that + # the NAT network provides. + network_script = <<-SHELL + sudo -i + ip route delete default 2>&1 >/dev/null || true; ip route add default via #{$default_route} + cp /etc/resolv.conf /etc/resolv.conf.orig + sed 's/^nameserver.*/nameserver #{$nameserver}/' /etc/resolv.conf.orig > /etc/resolv.conf + SHELL + + vm.hostname = name + vm.network 'public_network', bridge: $bridge, ip: ip + vm.synced_folder './shared', '/vagrant_shared' + vm.provider 'virtualbox' do |vb| + vb.memory = '4096' + vb.cpus = '2' + end + vm.provision 'shell', inline: script + vm.provision 'shell', inline: network_script, run: 'always' +end + +Vagrant.configure('2') do |config| + config.vm.box = 'generic/alpine314' + + config.vm.define 'server', primary: true do |server| + config_vm(server_name, server_ip, server_script, server.vm) + end + + agents.each do |agent_name, agent_ip| + config.vm.define agent_name do |agent| + config_vm(agent_name, agent_ip, agent_script, agent.vm) + end + end +end +``` + +The Kubernetes manifest to add the registry: + +#### registry.yaml +```yaml +apiVersion: v1 +kind: ReplicationController +metadata: + name: kube-registry-v0 + namespace: kube-system + labels: + k8s-app: kube-registry + version: v0 +spec: + replicas: 1 + selector: + app: kube-registry + version: v0 + template: + metadata: + labels: + app: kube-registry + version: v0 + spec: + containers: + - name: registry + image: registry:2 + resources: + limits: + cpu: 100m + memory: 200Mi + env: + - name: REGISTRY_HTTP_ADDR + value: :5000 + - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY + value: /var/lib/registry + volumeMounts: + - name: image-store + mountPath: /var/lib/registry + ports: + - containerPort: 5000 + name: registry + protocol: TCP + volumes: + - name: image-store + hostPath: + path: /var/lib/registry-storage +--- +apiVersion: v1 +kind: Service +metadata: + name: kube-registry + namespace: kube-system + labels: + app: kube-registry + kubernetes.io/name: 
"KubeRegistry" +spec: + selector: + app: kube-registry + ports: + - name: registry + port: 5000 + targetPort: 5000 + protocol: TCP + type: LoadBalancer +``` + diff --git a/docs/telepresence/latest/howtos/intercepts.md b/docs/telepresence/latest/howtos/intercepts.md new file mode 100644 index 000000000..f853b134d --- /dev/null +++ b/docs/telepresence/latest/howtos/intercepts.md @@ -0,0 +1,108 @@ +--- +description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from '../quick-start/qs-cards' + +# Intercept a service in your own environment + +Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created and intercept, you can code and debug your associated service locally. + +For a detailed walk-though on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/). + + +## Prerequisites + +Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop. + + +## Intercept your service with a global intercept + +With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead. + +1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server: + + ```console + $ curl -ik https://kubernetes.default + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + + The 401 response is expected when you first connect. + + + You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster. + + If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/). + + +2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example: + + ```console + $ telepresence list + ... + example-service: ready to intercept (traffic-agent not yet installed) + ... + ``` + +3. Get the name of the port you want to intercept on your service: + `kubectl get service --output yaml`. + + For example: + + ```console + $ kubectl get service example-service --output yaml + ... + ports: + - name: http + port: 80 + protocol: TCP + targetPort: http + ... + ``` + +4. Intercept all traffic going to the service in your cluster: + `telepresence intercept --port [:] --env-file `. + * For `--port`: specify the port the local instance of your service is running on. 
If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod. + The example below shows Telepresence intercepting traffic going to service `example-service`. Requests reaching the service on port `http` in the cluster are now routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`. + ```console + $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env + Using Deployment example-service + intercepted + Intercept name: example-service + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:8080 + Intercepting : all TCP connections + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + The following are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Query the environment in which you intercepted a service and verify that your local instance is being invoked. + All the traffic previously routed to your Kubernetes Service is now routed to your local environment. + +You can now: +- Make changes on the fly and see them reflected when interacting with + your Kubernetes environment. +- Query services only exposed in your cluster's network. +- Set breakpoints in your IDE to investigate bugs. + + + + **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept. + + diff --git a/docs/telepresence/latest/howtos/outbound.md b/docs/telepresence/latest/howtos/outbound.md new file mode 100644 index 000000000..9afcb75df --- /dev/null +++ b/docs/telepresence/latest/howtos/outbound.md @@ -0,0 +1,89 @@ +--- +description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Proxy outbound traffic to my cluster + +While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster. This section describes how to proxy outbound traffic and control outbound connectivity to your cluster. + + This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running. + +## Proxying outbound traffic + +Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop were another pod in the cluster. This enables you to access other Kubernetes services using `<service-name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+ +When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying. + +1. Run `telepresence connect` and enter your password to run the daemon. + + ``` + $ telepresence connect + Launching Telepresence Daemon v2.3.7 (api v3) + Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''" + [sudo] password: + Connecting to traffic manager... + Connected to context default (https://<cluster-public-IP>) + ``` + +2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic. + + ``` + $ telepresence status + Root Daemon: Running + Version : v2.3.7 (api 3) + Primary DNS : "" + Fallback DNS: "" + User Daemon: Running + Version : v2.3.7 (api 3) + Ambassador Cloud : Logged out + Status : Connected + Kubernetes server : https://<cluster-public-IP> + Kubernetes context: default + Telepresence proxy: ON (networking to the cluster is enabled) + Intercepts : 0 total + ``` + +3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop were actually running in the cluster. + + ``` + $ curl web-app.emojivoto:80 + + + + + Emoji Vote + ... + ``` + +If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop. + + ``` + $ telepresence quit + Telepresence Daemon quitting...done + ``` + +When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide. + +## Controlling outbound connectivity + +By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <namespaces>` flag (a comma-separated list, e.g. `--mapped-namespaces dev,staging`) to control which namespaces are accessible. + +When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept. + +### Using local-only intercepts + +When you develop on isolated apps or on a virtualized container, you don't need an outbound connection. However, when developing a service that isn't yet deployed to the cluster, you may need outbound connectivity to the namespace where the service will be deployed, so that it can reach other services in that namespace without using qualified names. A local-only intercept does not cause outbound connections to originate from the intercepted namespace; for a connection to originate there, it must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+ +To control outbound connectivity to specific namespaces, add the `--local-only` flag: + + ``` + $ telepresence intercept <intercept-name> --namespace <namespace> --local-only + ``` +The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active. +You can deactivate the intercept with `telepresence leave <intercept-name>`. This removes unqualified name access. + +### Proxy outbound connectivity for laptops + +To specify additional hosts or subnets that should be resolved inside the cluster, see [AlsoProxy](../../reference/config/#alsoproxysubnets) for more details. diff --git a/docs/telepresence/latest/howtos/package.md b/docs/telepresence/latest/howtos/package.md new file mode 100644 index 000000000..2baa7a66c --- /dev/null +++ b/docs/telepresence/latest/howtos/package.md @@ -0,0 +1,178 @@ +--- +title: "How to package and share my intercepts" +description: "Use telepresence intercept specs to enable your teammates faster" +--- +# Introduction + +While telepresence takes care of the interception part of your setup, you usually still need to script +some boilerplate code to run the local part (the handler) of your code. + +Classic solutions rely on a Makefile or bash scripts, but these become cumbersome to maintain. + +Instead, you can use [telepresence intercept specs](../../reference/intercepts/specs): they allow you +to specify all aspects of an intercept, including prerequisites, the local processes that receive the intercepted traffic, +and the actual intercept. Telepresence can then run the specification. + +# Getting started + +You will need a Kubernetes cluster, a deployment, and a service to begin using an Intercept Specification. + +Once you have a Kubernetes cluster, you can apply this configuration to start an `echo-easy` deployment that +we can then use for our Intercept Specification: + +```yaml +--- +apiVersion: v1 +kind: Service +metadata: + name: "echo-easy" +spec: + type: ClusterIP + selector: + service: echo-easy + ports: + - name: proxied + port: 80 + targetPort: http +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: "echo-easy" + labels: + service: echo-easy +spec: + replicas: 1 + selector: + matchLabels: + service: echo-easy + template: + metadata: + labels: + service: echo-easy + spec: + containers: + - name: echo-easy + image: jmalloc/echo-server + ports: + - containerPort: 8080 + name: http + resources: + limits: + cpu: 50m + memory: 128Mi +``` + +You can create the local yaml file by using `cat > echo-server.yaml <<EOF`, pasting in the manifest above, and create the Intercept Specification in the same way with `cat > my-intercept.yaml <<EOF`. + +Telepresence global intercept architecture
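As a quick sketch (the service name is illustrative, and this assumes you are already connected to the cluster), a global intercept routes every request for the service to your machine:

```console
# All traffic to example-service in the cluster is now routed
# to port 8080 on this workstation.
$ telepresence intercept example-service --port 8080
```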

+ +## Personal intercepts + +For these cases, Telepresence has a feature in the Developer and Enterprise plans called the Personal Intercept. +When using a Personal Intercept, Telepresence can selectively route requests to a developer's computer based on an HTTP header value. +By default, Telepresence looks for the header `x-telepresence-id`, and a logged-in Telepresence user is assigned a unique value for that +header on any intercept they create. You can also specify your own custom header. You get your test requests, your coworker gets their test requests, +and the rest of the traffic to the application goes to the original pod in the cluster.
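As a minimal sketch (the service name, port, and header value here are illustrative; the `--http-header` flag is discussed under Solutions below), a personal intercept that matches on a custom header could look like:

```console
# Only requests carrying this header/value pair are routed to
# port 8080 locally; everything else stays in the cluster.
$ telepresence intercept example-service --port 8080 --http-header x-telepresence-id=alice-dev
```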

+ Telepresence personal intercept architecture +

+ +## Requirements + +Because Personal Intercepts rely on an HTTP header value, that header must be present in any request +you want to intercept. This is easy for the first service behind an API gateway, as the header can +be added using Telepresence's [Preview URL feature](../preview-urls), +browser plugins, or tools like Postman, and the entire request, with headers intact, +will be forwarded to the first upstream service.
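For instance (the hostname, path, and header value below are assumptions for illustration), you can attach the header yourself when calling the gateway:

```console
# The gateway forwards the request, header intact, to the first upstream service.
$ curl -H "x-telepresence-id: alice-dev" https://api.example.com/payments
```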

+ Diagram of request with intercept header being sent through API gateway to Payments Service +

+ +However, the original request terminates at the first service that receives it. For the intercept header +to reach any services further upstream, the first service must _propagate_ it, by retrieving the header value +from the request and storing it somewhere or passing it down the function call chain to be retrieved +by any functions that make a network call to the upstream service. +

+ Diagram of a request losing the header when sent to the next upstream service unless propagated +

+ +## Solutions + +If the application you're developing is itself the first service to receive incoming requests, you can use [Preview URLs](../preview-urls) +to generate a custom URL that automatically passes an `x-telepresence-id` header that your intercept is configured to look for. + +If your applications already propagate a header that can be used to differentiate requests between developers, you can pass the +`--http-header` [flag](../../concepts/intercepts?intercept=personal#creating-and-using-personal-intercepts) to `telepresence intercept`. + +If your applications do _not_ already propagate a header that can be used to differentiate requests, we have a +[comprehensive guide](https://github.com/ambassadorlabs/telepresence-header-propagation) +on doing so quickly and easily using OpenTelemetry auto-instrumentation. diff --git a/docs/telepresence/latest/howtos/preview-urls.md b/docs/telepresence/latest/howtos/preview-urls.md new file mode 100644 index 000000000..8923492c4 --- /dev/null +++ b/docs/telepresence/latest/howtos/preview-urls.md @@ -0,0 +1,100 @@ +--- +title: "Share dev environments with preview URLs | Ambassador" +description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates." +--- + +import Alert from '@material-ui/lab/Alert'; + +# Share development environments with preview URLs + +Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual. + +Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can make the URL publicly accessible for sharing with outside collaborators. + +## Creating a preview URL + +1. Enter `telepresence login` to launch Ambassador Cloud in your browser. + +If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/). + +2. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed. +Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service. + +3. Start the intercept with `telepresence intercept <service-name> --preview-url --port <port> --env-file <env-file> --mechanism http` and adjust the flags as follows: + * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon. + * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod. + +4. Answer the question prompts. + The example below shows a preview URL for `example-service` which listens on port 8080.
The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`: + + ```console + $ telepresence intercept example-service --preview-url --mechanism http --ingress-host ambassador.ambassador --ingress-port 443 --ingress-l5 dev-environment.edgestack.me --ingress-tls --port 8080 --env-file ~/ex-svc.env + + Using deployment example-service + intercepted + Intercept name : example-service + State : ACTIVE + Destination : 127.0.0.1:8080 + Service Port Identifier: http + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:example-service") + Preview URL : https://<random-subdomain>.preview.edgestack.me + Layer 5 Hostname : dev-environment.edgestack.me + ``` + +5. Start your local environment using the environment variables retrieved in the previous step. + + Here are some examples of how to pass the environment variables to your local process: + * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#env). + * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration. + * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile). + +6. Go to the Preview URL generated from the intercept. +Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress. + + + Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation. + + +7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop. + + Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) routes to services in the cluster like normal. + +8. Share with a teammate. + + You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org as you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop. + You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment. + +## Sharing a preview URL with people outside your team + +To collaborate with someone outside of your identity provider's organization, log into [Ambassador Cloud](https://app.getambassador.io/cloud/), navigate to your service's intercepts, select the preview URL details, and click **Make Publicly Accessible**. Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your laptop. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. Removing the preview URL either from the dashboard or by running `telepresence preview remove <intercept-name>` also removes all access to the preview URL. + +## Change access restrictions + +To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible. + +1.
Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Select the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Make Publicly Accessible**. + +Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on a local environment. + +To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard. + +## Remove a preview URL from an Intercept + +To delete a preview URL and remove all access to the intercepted service: + +1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). +2. Click on the service you want to share and open the service details page. +3. Click the **Intercepts** tab and expand the preview URL details. +4. Click **Remove Preview**. + +Alternatively, you can remove a preview URL with the following command: +`telepresence preview remove <intercept-name>` diff --git a/docs/telepresence/latest/howtos/request.md b/docs/telepresence/latest/howtos/request.md new file mode 100644 index 000000000..1109c68df --- /dev/null +++ b/docs/telepresence/latest/howtos/request.md @@ -0,0 +1,12 @@ +import Alert from '@material-ui/lab/Alert'; + +# Send requests to an intercepted service + +Ambassador Cloud can inform you about the required request parameters to reach an intercepted service. + + 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/). + 2. Navigate to the desired service's Intercepts page. + 3. Click the **Query** button to open the pop-up menu. + 4. Toggle between **CURL**, **Headers** and **Browse**. + +The pre-built queries and header information will help you get started querying the desired intercepted service and managing header propagation.
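As an illustrative sketch (the URL and intercept ID are assumptions; the header name is the personal-intercept default described earlier), a query copied from the **CURL** tab might look like:

```console
# Send a request that will be routed to the intercepted service on your machine.
$ curl -H "x-telepresence-id: <your-intercept-id>" https://dev-environment.edgestack.me/
```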
diff --git a/docs/telepresence/latest/images/container-inner-dev-loop.png b/docs/telepresence/latest/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/latest/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/latest/images/daemon-in-container.png b/docs/telepresence/latest/images/daemon-in-container.png new file mode 100644 index 000000000..ed02e8386 Binary files /dev/null and b/docs/telepresence/latest/images/daemon-in-container.png differ diff --git a/docs/telepresence/latest/images/docker-extension-intercept.png b/docs/telepresence/latest/images/docker-extension-intercept.png new file mode 100644 index 000000000..d01daef8f Binary files /dev/null and b/docs/telepresence/latest/images/docker-extension-intercept.png differ diff --git a/docs/telepresence/latest/images/docker-header-containers.png b/docs/telepresence/latest/images/docker-header-containers.png new file mode 100644 index 000000000..06f422a93 Binary files /dev/null and b/docs/telepresence/latest/images/docker-header-containers.png differ diff --git a/docs/telepresence/latest/images/docker_extension_button_drop_down.png b/docs/telepresence/latest/images/docker_extension_button_drop_down.png new file mode 100644 index 000000000..b65c53091 Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_button_drop_down.png differ diff --git a/docs/telepresence/latest/images/docker_extension_connect_to_cluster.png b/docs/telepresence/latest/images/docker_extension_connect_to_cluster.png new file mode 100644 index 000000000..4ec182581 Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_connect_to_cluster.png differ diff --git a/docs/telepresence/latest/images/docker_extension_login.png b/docs/telepresence/latest/images/docker_extension_login.png new file mode 100644 index 000000000..8874fa959 Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_login.png differ diff --git a/docs/telepresence/latest/images/docker_extension_running_intercepts_page.png b/docs/telepresence/latest/images/docker_extension_running_intercepts_page.png new file mode 100644 index 000000000..68a2f22fc Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_running_intercepts_page.png differ diff --git a/docs/telepresence/latest/images/docker_extension_start_intercept_page.png b/docs/telepresence/latest/images/docker_extension_start_intercept_page.png new file mode 100644 index 000000000..df2cffdd3 Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_start_intercept_page.png differ diff --git a/docs/telepresence/latest/images/docker_extension_start_intercept_popup.png b/docs/telepresence/latest/images/docker_extension_start_intercept_popup.png new file mode 100644 index 000000000..07af9e7bb Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_start_intercept_popup.png differ diff --git a/docs/telepresence/latest/images/docker_extension_upload_spec_button.png b/docs/telepresence/latest/images/docker_extension_upload_spec_button.png new file mode 100644 index 000000000..f571aefd3 Binary files /dev/null and b/docs/telepresence/latest/images/docker_extension_upload_spec_button.png differ diff --git a/docs/telepresence/latest/images/edgey-corp.png b/docs/telepresence/latest/images/edgey-corp.png new file mode 100644 index 000000000..d5f724c55 Binary files /dev/null and b/docs/telepresence/latest/images/edgey-corp.png differ diff 
--git a/docs/telepresence/latest/images/github-login.png b/docs/telepresence/latest/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/latest/images/github-login.png differ diff --git a/docs/telepresence/latest/images/header_arrives.png b/docs/telepresence/latest/images/header_arrives.png new file mode 100644 index 000000000..6abc71266 Binary files /dev/null and b/docs/telepresence/latest/images/header_arrives.png differ diff --git a/docs/telepresence/latest/images/header_requires_propagating.png b/docs/telepresence/latest/images/header_requires_propagating.png new file mode 100644 index 000000000..219980292 Binary files /dev/null and b/docs/telepresence/latest/images/header_requires_propagating.png differ diff --git a/docs/telepresence/latest/images/logo.png b/docs/telepresence/latest/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/latest/images/logo.png differ diff --git a/docs/telepresence/latest/images/mode-defaults.png b/docs/telepresence/latest/images/mode-defaults.png new file mode 100644 index 000000000..1dcca4116 Binary files /dev/null and b/docs/telepresence/latest/images/mode-defaults.png differ diff --git a/docs/telepresence/latest/images/pod-daemon-overview.png b/docs/telepresence/latest/images/pod-daemon-overview.png new file mode 100644 index 000000000..effb05314 Binary files /dev/null and b/docs/telepresence/latest/images/pod-daemon-overview.png differ diff --git a/docs/telepresence/latest/images/split-tunnel.png b/docs/telepresence/latest/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/latest/images/split-tunnel.png differ diff --git a/docs/telepresence/latest/images/tp_global_intercept.png b/docs/telepresence/latest/images/tp_global_intercept.png new file mode 100644 index 000000000..e6c8bfbe7 Binary files /dev/null and b/docs/telepresence/latest/images/tp_global_intercept.png differ diff --git a/docs/telepresence/latest/images/tp_personal_intercept.png b/docs/telepresence/latest/images/tp_personal_intercept.png new file mode 100644 index 000000000..2cfeb005a Binary files /dev/null and b/docs/telepresence/latest/images/tp_personal_intercept.png differ diff --git a/docs/telepresence/latest/images/tracing.png b/docs/telepresence/latest/images/tracing.png new file mode 100644 index 000000000..c374807e5 Binary files /dev/null and b/docs/telepresence/latest/images/tracing.png differ diff --git a/docs/telepresence/latest/images/trad-inner-dev-loop.png b/docs/telepresence/latest/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/latest/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/latest/images/tunnelblick.png b/docs/telepresence/latest/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/latest/images/tunnelblick.png differ diff --git a/docs/telepresence/latest/images/vpn-dns.png b/docs/telepresence/latest/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/latest/images/vpn-dns.png differ diff --git a/docs/telepresence/latest/images/vpn-k8s-config.jpg b/docs/telepresence/latest/images/vpn-k8s-config.jpg new file mode 100644 index 000000000..66116e41d Binary files /dev/null and b/docs/telepresence/latest/images/vpn-k8s-config.jpg differ diff --git 
a/docs/telepresence/latest/images/vpn-routing.jpg b/docs/telepresence/latest/images/vpn-routing.jpg new file mode 100644 index 000000000..18410dd48 Binary files /dev/null and b/docs/telepresence/latest/images/vpn-routing.jpg differ diff --git a/docs/telepresence/latest/images/vpn-with-tele.jpg b/docs/telepresence/latest/images/vpn-with-tele.jpg new file mode 100644 index 000000000..843b253e9 Binary files /dev/null and b/docs/telepresence/latest/images/vpn-with-tele.jpg differ diff --git a/docs/telepresence/latest/install/cloud.md b/docs/telepresence/latest/install/cloud.md new file mode 100644 index 000000000..bf8c80669 --- /dev/null +++ b/docs/telepresence/latest/install/cloud.md @@ -0,0 +1,63 @@ +# Provider Prerequisites for Traffic Manager + +## GKE + +### Firewall Rules for private clusters + +A GKE cluster with private networking will come preconfigured with firewall rules that prevent the Traffic Manager's +webhook injector from being invoked by the Kubernetes API server. +For Telepresence to work in such a cluster, you'll need to [add a firewall rule](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) allowing the Kubernetes masters to access TCP port `8443` in your pods. +For example, for a cluster named `tele-webhook-gke` in region `us-central1-c`: + +```bash +$ gcloud container clusters describe tele-webhook-gke --region us-central1-c | grep masterIpv4CidrBlock + masterIpv4CidrBlock: 172.16.0.0/28 # Take note of the IP range, 172.16.0.0/28 + +$ gcloud compute firewall-rules list \ + --filter 'name~^gke-tele-webhook-gke' \ + --format 'table( + name, + network, + direction, + sourceRanges.list():label=SRC_RANGES, + allowed[].map().firewall_rule().list():label=ALLOW, + targetTags.list():label=TARGET_TAGS + )' + +NAME NETWORK DIRECTION SRC_RANGES ALLOW TARGET_TAGS +gke-tele-webhook-gke-33fa1791-all tele-webhook-net INGRESS 10.40.0.0/14 esp,ah,sctp,tcp,udp,icmp gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-master tele-webhook-net INGRESS 172.16.0.0/28 tcp:10250,tcp:443 gke-tele-webhook-gke-33fa1791-node +gke-tele-webhook-gke-33fa1791-vms tele-webhook-net INGRESS 10.128.0.0/9 icmp,tcp:1-65535,udp:1-65535 gke-tele-webhook-gke-33fa1791-node +# Take note of the TARGET_TAGS value, gke-tele-webhook-gke-33fa1791-node + +$ gcloud compute firewall-rules create gke-tele-webhook-gke-webhook \ + --action ALLOW \ + --direction INGRESS \ + --source-ranges 172.16.0.0/28 \ + --rules tcp:8443 \ + --target-tags gke-tele-webhook-gke-33fa1791-node --network tele-webhook-net +Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/datawire-dev/global/firewalls/gke-tele-webhook-gke-webhook]. +Creating firewall...done. +NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED +gke-tele-webhook-gke-webhook tele-webhook-net INGRESS 1000 tcp:8443 False +``` + +### GKE Authentication Plugin + +Starting with Kubernetes version 1.26, GKE requires the use of the [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke). +You will need to install this plugin to use Telepresence with Docker while using GKE. + +If you are using the [Telepresence Docker Extension](../../docker/extension) you will need to ensure that your `command` is set to an absolute path in your kubeconfig file. If you did not install with Homebrew, you may see `command: gke-gcloud-auth-plugin` in your file; this must be replaced with the absolute path to the binary.
+You can check this by opening your kubeconfig file and looking at the `command` under the `users` entry for your GKE cluster. If you installed with Homebrew, it would look like this: +`command: /opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud`. + +## EKS + +### EKS Authentication Plugin + +If you are using a version of the AWS CLI earlier than `1.16.156`, you will need to install [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html). +You will need to install this plugin to use Telepresence with Docker while using EKS. + +If you are using the [Telepresence Docker Extension](../../docker/extension) you will need to ensure that your `command` is set to an absolute path in your kubeconfig file instead of a relative path. +You can check this by opening your kubeconfig file and looking at the `command` under the `users` entry for your EKS cluster. If you installed with Homebrew, it would look like this: +`command: /opt/homebrew/Cellar/aws-iam-authenticator/0.6.2/bin/aws-iam-authenticator`. \ No newline at end of file diff --git a/docs/telepresence/latest/install/helm.md b/docs/telepresence/latest/install/helm.md new file mode 100644 index 000000000..8aefb1d59 --- /dev/null +++ b/docs/telepresence/latest/install/helm.md @@ -0,0 +1,181 @@ +# Install the Traffic Manager with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence). + +## Before you begin + +Before you begin you need to have [`helm`](https://helm.sh/docs/intro/install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute oc [commands instead](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html). + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. Install the Telepresence Traffic Manager with the following command: + + ```shell + helm install traffic-manager --namespace ambassador datawire/telepresence + ``` + +### Install into custom namespace + +The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
+For example, if you wanted to deploy the traffic manager to the `staging` namespace: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence +``` + +Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + manager: + namespace: staging + name: example-cluster +``` + +See [the kubeconfig documentation](../../reference/config#manager) for more information. + +### Upgrading the Traffic Manager + +Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for. +Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart. + +Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace, and you just wished to upgrade it to the latest version without changing any configuration values: + +```shell +helm repo update +helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador +``` + +If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command. For example: `--version v2.4.1` + +## RBAC + +### Installing a namespace-scoped traffic manager + +You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment). +In these cases, the traffic manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions. + +For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`. +To do this, create a `values.yaml` like the following: + +```yaml +managerRbac: + create: true + namespaced: true + namespaces: + - dev + - staging +``` + +This can then be installed via: + +```bash +helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml +``` + +**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as it could have unexpected effects. + +#### Namespace collision detection + +The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces. +It will do this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages. + +So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as: + +```bash
helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}' +``` + +You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`: + +```bash +helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}' +``` + +This would fail with an error: + +``` +Error: rendered manifests contain a resource that already exists.
Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To fix this error, resolve the overlap by removing `staging` from either the first install or the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster, and want to make sure they only have the minimum set required to run Telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook uses the name label to find namespaces to operate on.
+
+**NOTE** This labeling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
diff --git a/docs/telepresence/latest/install/index.md b/docs/telepresence/latest/install/index.md
new file mode 100644
index 000000000..d7a5642ed
--- /dev/null
+++ b/docs/telepresence/latest/install/index.md
@@ -0,0 +1,157 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS. If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster.
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install Telepresence's dependencies. It will install Telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## What's Next?
+
+Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment.
+
+## Installing nightly versions of Telepresence
+
+We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence
+nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows.
+
+The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`.
+
+`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher.
+For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new
+version of Telepresence is released.
+
+`$gitShortHash` will be the short hash of the git commit of the build.
+
+Use these URLs to download the most recent nightly build.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
+
+
diff --git a/docs/telepresence/latest/install/manager.md b/docs/telepresence/latest/install/manager.md
new file mode 100644
index 000000000..c192f45c1
--- /dev/null
+++ b/docs/telepresence/latest/install/manager.md
@@ -0,0 +1,85 @@
+# Install/Uninstall the Traffic Manager
+
+Telepresence uses a Traffic Manager to send and receive cloud traffic to and from the user. Telepresence uses [Helm](https://helm.sh) under the hood to install the Traffic Manager in your cluster.
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/).
+In addition, you may need certain prerequisites depending on your cloud provider and platform.
+See the [cloud provider installation notes](../../install/cloud) for more.
+
+## Install the Traffic Manager
+
+The Telepresence CLI can install the Traffic Manager for you. The basic install deploys the same version as the client being used.
+
+1. Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   telepresence helm install
+   ```
+
+### Customizing the Traffic Manager
+
+For details on what the Helm chart installs and what can be configured, see the Helm chart [configuration on artifacthub](https://artifacthub.io/packages/helm/datawire/telepresence).
+
+1. Create a values.yaml file with your config values.
+
+2. Run the install command with the values flag set to the path to your values file:
+
+   ```shell
+   telepresence helm install --values values.yaml
+   ```
+
+
+## Upgrading/Downgrading the Traffic Manager
+
+1. Download the CLI of the version of Telepresence you wish to use.
+
+2. Run the install command with the upgrade flag:
+
+   ```shell
+   telepresence helm install --upgrade
+   ```
+
+
+## Uninstall
+
+The Telepresence CLI can uninstall the Traffic Manager for you using the `telepresence helm uninstall` command (previously `telepresence uninstall --everything`).
+
+1. Uninstall the Telepresence Traffic Manager and all of the agents installed by it using the following command:
+
+   ```shell
+   telepresence helm uninstall
+   ```
+
+## Ambassador Agent
+
+The Ambassador Agent is installed alongside the Traffic Manager to report your services to Ambassador Cloud and give you the ability to trigger intercepts from the Cloud UI.
+
+If you are already using Emissary-ingress or Edge Stack, you do not need to install the Ambassador Agent. When installing the `traffic-manager` you can add the flag `--set ambassador-agent.enabled=false` to exclude the ambassador-agent. Emissary and Edge Stack both already include this agent within their deployments.
+
+If your namespace runs with tight security parameters you may need to set a few additional parameters. These parameters are `securityContext`, `tolerations`, and `resources`.
+You can set these parameters in a `values.yaml` file under the `ambassador-agent` prefix to fit your namespace requirements.
+
+### Adding an API Key to your Ambassador Agent
+
+While installing the traffic-manager, you can pass your cloud token directly to the Helm chart using the flag `--set ambassador-agent.cloudConnectToken=`.
+The [API key](../reference/client/login.md) will be created as a secret and your agent will use it upon start-up. Telepresence will not override an API key given via Helm.
+
+### Creating a secret manually
+The Ambassador Agent watches for secrets with a name ending in `agent-cloud-token`. You can create this secret yourself. This API key will always be used.
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <name>-agent-cloud-token
+  namespace: <namespace>
+  labels:
+    app.kubernetes.io/name: agent-cloud-token
+data:
+  CLOUD_CONNECT_TOKEN: <your-api-key>
+EOF
+```
\ No newline at end of file
diff --git a/docs/telepresence/latest/install/migrate-from-legacy.md b/docs/telepresence/latest/install/migrate-from-legacy.md
new file mode 100644
index 000000000..94307dfa1
--- /dev/null
+++ b/docs/telepresence/latest/install/migrate-from-legacy.md
@@ -0,0 +1,110 @@
+# Migrate from legacy Telepresence
+
+[Telepresence](/products/telepresence/) (formerly referred to as Telepresence 2, the current major version) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service, and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state. Swapping the pods would take time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected into the pod. The proxy then
+intercepts traffic intended for the pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, it only
+re-routes the traffic designated as belonging to that user, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or disrupting each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context 
+Using Deployment myserver
+intercepted
+    Intercept name         : myserver
+    State                  : ACTIVE
+    Workload kind          : Deployment
+    Destination            : 127.0.0.1:9090
+    Intercepting           : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. This way you can get started with Telepresence today using the commands you are already used to,
+while it helps you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                       |
+|------------------------------------------------|--------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                        |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]    |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash                |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd                |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd   |
+| --run-shell                                    | connect -- bash                            |
+| --run $cmd                                     | connect -- $cmd                            |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)   |
+| --context,--namespace                          | --context, --namespace (haven't changed)   |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed)  |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported in Telepresence. For some known popular commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept will end once the process
+dies, but the agent will remain. There's no harm in leaving the agent running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents
+
+Usage:
+  telepresence uninstall [flags] { --agent | --all-agents }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
+
+The Traffic Manager can be uninstalled using `telepresence helm uninstall`.
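+
+For example, a minimal cleanup after a legacy-style swap-deployment session might look like this (a sketch; the workload name `myserver` is illustrative):
+
+```shell
+# Remove the traffic agent left on the workload you intercepted
+telepresence uninstall --agent myserver
+
+# Or remove the traffic agents from all workloads
+telepresence uninstall --all-agents
+```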
\ No newline at end of file
diff --git a/docs/telepresence/latest/install/upgrade.md b/docs/telepresence/latest/install/upgrade.md
new file mode 100644
index 000000000..34385935c
--- /dev/null
+++ b/docs/telepresence/latest/install/upgrade.md
@@ -0,0 +1,83 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+Before upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit -s` (or `telepresence quit -ur`
+if your current version is less than 2.8.0).
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+The [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe) can upgrade Telepresence, or if you installed it with PowerShell:
+
+```powershell
+# To upgrade Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install Telepresence's dependencies. It will install Telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now upgraded and you can use telepresence commands in PowerShell.
+```
+
+
+
+The Telepresence CLI contains an embedded Helm chart. See [Install/Uninstall the Traffic Manager](../manager/) if you want to also upgrade
+the Traffic Manager in your cluster.
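+
+For example, a typical upgrade session might look like this (a sketch; it assumes the Traffic Manager was originally installed by the CLI as described on the linked page):
+
+```shell
+# Confirm the CLI was upgraded
+telepresence version
+
+# Upgrade the in-cluster Traffic Manager to match the new CLI
+telepresence helm install --upgrade
+```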
diff --git a/docs/telepresence/latest/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/latest/quick-start/TelepresenceQuickStartLanding.js new file mode 100644 index 000000000..bd375dee0 --- /dev/null +++ b/docs/telepresence/latest/quick-start/TelepresenceQuickStartLanding.js @@ -0,0 +1,118 @@ +import queryString from 'query-string'; +import React, { useEffect, useState } from 'react'; + +import Embed from '../../../../src/components/Embed'; +import Icon from '../../../../src/components/Icon'; +import Link from '../../../../src/components/Link'; + +import './telepresence-quickstart-landing.less'; + +/** @type React.FC> */ +const RightArrow = (props) => ( + + + +); + +const TelepresenceQuickStartLanding = () => { + const [getStartedUrl, setGetStartedUrl] = useState( + 'https://app.getambassador.io/cloud/welcome?docs_source=telepresence-quick-start', + ); + + const getUrlFromQueryParams = () => { + const { docs_source, docs_campaign } = queryString.parse( + window.location.search, + ); + + if (docs_source === 'cloud-quickstart-ad' && docs_campaign === 'loops') { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=loops', + ); + } else if ( + docs_source === 'cloud-quickstart-ad' && + docs_campaign === 'environments' + ) { + setGetStartedUrl( + 'https://app.getambassador.io/cloud/welcome?docs_source=cloud-quickstart-ad&docs_campaign=environments', + ); + } + }; + + useEffect(() => { + getUrlFromQueryParams(); + }, []); + + return ( +
+

+ Telepresence +

+

+ Set up your ideal development environment for Kubernetes in seconds.
+ Accelerate your inner development loop with hot reload using your
+ existing IDE and workflow.
+

+ +
+
+
+

+ Set Up Telepresence with Ambassador Cloud +

+

+ Seamlessly integrate Telepresence into your existing Kubernetes + environment by following our 3-step setup guide. +

+ + Get Started + +
+
+

+ + Do it Yourself: + {' '} + install Telepresence and manually connect to your Kubernetes + workloads. +

+
+ +
+
+
+

+ What Can Telepresence Do for You? +

+

Telepresence gives Kubernetes application developers:

+
    +
  • Instant feedback loops
  • +
  • Remote development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+ + LEARN MORE{' '} + + +
+
+ +
+
+
+
+ ); +}; + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/latest/quick-start/index.md b/docs/telepresence/latest/quick-start/index.md new file mode 100644 index 000000000..02dc29c47 --- /dev/null +++ b/docs/telepresence/latest/quick-start/index.md @@ -0,0 +1,422 @@ +--- +title: Quick Start | Telepresence +description: "Telepresence Quick Start by Ambassador Labs: Dive into Kubernetes development with ease. Get set up swiftly and unlock efficient microservice debugging" +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards26 from './qs-cards' + +# Telepresence Quick Start + +
+ +

Contents

+
+  * [Overview](#overview)
+  * [Prerequisites](#prerequisites)
+  * [1. Install the Telepresence CLI](#1-install-the-telepresence-cli)
+  * [2. Set up a local cluster with sample app](#2-set-up-a-local-cluster-with-sample-app)
+  * [3. Use Telepresence to connect your laptop to the cluster](#3-use-telepresence-to-connect-your-laptop-to-the-cluster)
+  * [4. Run the sample application locally](#4-run-the-sample-application-locally)
+  * [5. Route traffic from the cluster to your local application](#5-route-traffic-from-the-cluster-to-your-local-application)
+  * [6. Make a code change](#6-make-a-code-change)
+  * [What's next?](#whats-next)
+
+
+## Overview
+This quickstart provides the fastest way to get an understanding of how [Telepresence](https://www.getambassador.io/products/telepresence)
+can speed up your development in Kubernetes. It should take you about 5-10 minutes. You'll create a local cluster using Kind with a sample app installed, and use Telepresence to
+* access services in the cluster directly from your laptop
+* make changes locally and see those changes immediately in the cluster
+
+Then we'll point you to some next steps you can take, including trying out collaboration features and trying it in your own infrastructure.
+
+## Prerequisites
+
+You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed and set up
+([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) /
+[macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) /
+[Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration))
+to use a Kubernetes cluster.
+
+You will also need [Docker installed](https://docs.docker.com/get-docker/).
+
+The sample application instructions default to Python, which is pre-installed on macOS and Linux. If you are on Windows and don't already have
+Python installed, you can install it from the [official Python site](https://www.python.org/downloads/).
+
+There are also instructions for Node.js, Java, and Go if you already have those installed and prefer to work in them.
+
+## 1. Install the Telepresence CLI
+
+
+
+```shell
+# Intel Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Install via brew:
+brew install datawire/blackbird/telepresence-arm64
+
+# OR install manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+We offer an easy installation path using an [MSI Installer](https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence-setup.exe). However, if you'd like to set up Telepresence using PowerShell, you can run these commands:
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest Windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run install-telepresence.ps1 to install Telepresence's dependencies. It will install Telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+## 2. Set up a local cluster with sample app
+
+We provide [a repo](https://github.com/ambassadorlabs/telepresence-local-quickstart) that sets up a local cluster for you
+with the in-cluster Telepresence components and a sample app already installed. It does not need `sudo` or `Run as Administrator` privileges.
+
+
+
+```shell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd telepresence-local-quickstart
+
+# Run the macOS setup script
+./macos-setup.sh
+```
+
+
+
+```shell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd telepresence-local-quickstart
+
+# Run the Linux setup script
+./linux-setup.sh
+```
+
+
+
+```powershell
+# Clone the repo with submodules
+git clone https://github.com/ambassadorlabs/telepresence-local-quickstart.git --recurse-submodules
+
+# Change to the repo directory
+cd .\telepresence-local-quickstart\
+
+# Run the Windows setup script
+.\windows-setup.ps1
+```
+
+
+
+## 3. Use Telepresence to connect your laptop to the cluster
+
+Telepresence connects your local workstation to a remote Kubernetes cluster, allowing you to talk to cluster resources as if your laptop
+were in the cluster.
+
+
+  The first time you run a Telepresence command you will be prompted to create an Ambassador Labs account. Creating an account is completely free,
+  takes a few seconds, and can be done through accounts you already have, like GitHub and Google.
+
+
+1. Connect to the cluster:
+   `telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+
+   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+
+2. Now we'll test that Telepresence is working properly by accessing a service running in the cluster. Telepresence has merged your local IP routing
+tables and DNS resolution with the cluster's, so you can resolve in-cluster DNS names and reach services on their cluster IP addresses.
+
+Open up a browser and go to [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). You've now loaded a dashboard showing the architecture of the sample app.

+ Edgey Corp Architecture +

+ +You are connected to the VeryLargeJavaService, which talks to the DataProcessingService as an upstream dependency. The DataProcessingService in turn +has a dependency on VeryLargeDatastore. You were able to connect to it using the cluster DNS name thanks to Telepresence. + +## 4. Run the sample application locally + +We'll take on the role of a DataProcessingService developer. We want to be able to connect to that big test database that everyone has that dates back to the +founding of the company and has all the critical test scenarios and is too big to run locally. In the other direction, VeryLargeJavaService is developed by another team +and we need to make sure with each change that we are being good upstream citizens and maintaining valid contracts with that service. + + +Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-python/DataProcessingService/` +2. Install the dependencies and start the Python server: `pip install flask requests && python app.py` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ pip install flask requests && python app.py + +Collecting flask +... +Welcome to the DataServiceProcessingPythonService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-nodejs/DataProcessingService/` +2. Install the dependencies and start the NodeJS server: `npm install && npm start` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ npm install && npm start + +added 170 packages, and audited 171 packages in 597ms +... +Welcome to the DataServiceProcessingNodeService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-java/DataProcessingService/` +2. Install the dependencies and start the Java server: `mvn spring-boot:run` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ mvn spring-boot:run + +[INFO] Scanning for projects... +... +INFO 49318 --- [ restartedMain] g.d.DataProcessingServiceJavaApplication : Starting DataProcessingServiceJavaApplication using Java +... + + +$ curl localhost:3000/color + +"blue" +``` + + +To run the DataProcessingService locally: + +1. Change into the repo directory, then into DataProcessingService: `cd edgey-corp-go/DataProcessingService/` +2. Install the dependencies and start the Go server: `go get github.com/pilu/fresh && go install github.com/pilu/fresh && fresh` +3. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: `curl localhost:3000/color` + +``` +$ go get github.com/pilu/fresh && go install github.com/pilu/fresh && fresh + +12:24:13 runner | InitFolders +... +12:24:14 app | Welcome to the DataProcessingGoService! +... + + +$ curl localhost:3000/color + +"blue" +``` + + + + +Victory, your local server is running a-ok! + + +## 5. 
Route traffic from the cluster to your local application
+Historically, when developing with microservices on Kubernetes, your choices have been to run an entire set of services in a cluster or namespace just for you,
+and spend 15 minutes on every one-line change pushing the code, waiting for it to build, waiting for it to deploy, etc. Or, you could run all 50 services
+in your environment on your laptop, and be deafened by the fans.
+
+With Telepresence, you can *intercept* traffic from a service in the cluster and route it to your laptop, effectively replacing the cluster version
+with your local development environment. This gives you back the fast feedback loop of local development, and access to your preferred tools like your favorite IDE or debugger.
+And you still have access to all the cluster resources via `telepresence connect`. Now you'll see this in action.
+
+Look back at the browser tab showing the app dashboard. You see the Edgey Corp WebApp with a green title and a green pod in the diagram.
+The local version of the code has the UI color set to blue instead of green.
+
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:
+
+1. Start the intercept with the `intercept` command, setting the service name and port:
+`telepresence intercept dataprocessingservice --port 3000`
+
+   ```
+   $ telepresence intercept dataprocessingservice --port 3000
+
+   Using Deployment dataprocessingservice
+   Intercept name    : dataprocessingservice
+   State             : ACTIVE
+   Workload kind     : Deployment
+   Destination       : 127.0.0.1:3000
+   Intercepting      : all TCP requests
+   ```
+
+2. Go to the frontend service again in your browser and refresh. You will now see the blue elements in the app.
+
+
+The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!
+
+
+## 6. Make a code change
+We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.
+
+
+
+To update the color:
+
+1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto reload.
+2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser and refresh. You will now see the orange elements in the application.
+
+
+  We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+ +To update the color: + +1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange`. Save the file and the Go server will auto reload. +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+
+
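+
+When you're done experimenting, you can stop the intercept and disconnect (a sketch using the commands from the client reference; the intercept name matches the one created in step 5):
+
+```shell
+# Stop the active intercept
+telepresence leave dataprocessingservice
+
+# Disconnect your laptop from the cluster
+telepresence quit
+```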
+ +## What's Next? + + + +export const metaData = [ + {name: "Ambassador Labs", path: "https://getambassador.io"}, + {name: "Telepresence", path: "https://www.getambassador.io/products/telepresence"}, + {name: "Install Tools | Kubernetes", path: "https://kubernetes.io/docs/tasks/tools/install-kubectl/"}, + {name: "Get Docker", path: "https://docs.docker.com/get-docker/"}, + {name: "Download Python | Python.org", path: "https://www.python.org/downloads/"}, + {name: "Telepresence Local Quickstart", path: "https://github.com/ambassadorlabs/telepresence-local-quickstart"} +] diff --git a/docs/telepresence/latest/quick-start/qs-cards.js b/docs/telepresence/latest/quick-start/qs-cards.js new file mode 100644 index 000000000..084af19b3 --- /dev/null +++ b/docs/telepresence/latest/quick-start/qs-cards.js @@ -0,0 +1,68 @@ +import Grid from '@material-ui/core/Grid'; +import Paper from '@material-ui/core/Paper'; +import Typography from '@material-ui/core/Typography'; +import { makeStyles } from '@material-ui/core/styles'; +import { Link as GatsbyLink } from 'gatsby'; +import React from 'react'; + +const useStyles = makeStyles((theme) => ({ + root: { + flexGrow: 1, + textAlign: 'center', + alignItem: 'stretch', + padding: 0, + }, + paper: { + padding: theme.spacing(1), + textAlign: 'center', + color: 'black', + height: '100%', + }, +})); + +export default function CenteredGrid() { + const classes = useStyles(); + + return ( +
+ + + + + + Collaborating + + + + Use personal intercepts to get specific requests when working with colleagues. + + + + + + + + Outbound Sessions + + + + Control what your laptop can reach in the cluster while connected. + + + + + + + + Telepresence for Docker Compose + + + + Develop in a hybrid local/cluster environment using Telepresence for Docker Compose. + + + + +
+ ); +} diff --git a/docs/telepresence/latest/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/latest/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..e2a83df4f --- /dev/null +++ b/docs/telepresence/latest/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,152 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: -8.4px auto 48px; + max-width: 1050px; + min-width: @docs-min-width; + width: 100%; + + h1 { + color: @blue-dark; + font-weight: normal; + letter-spacing: 0.25px; + font-size: 33px; + margin: 0 0 15px; + } + p { + font-size: 0.875rem; + line-height: 24px; + margin: 0; + padding: 0; + } + + .demo-cluster-container { + display: grid; + margin: 40px 0; + grid-template-columns: 1fr; + grid-template-columns: 1fr; + @media screen and (max-width: 900px) { + grid-template-columns: repeat(1, 1fr); + } + } + .main-title-container { + display: flex; + flex-direction: column; + align-items: center; + p { + text-align: center; + font-size: 0.875rem; + } + } + h2 { + font-size: 23px; + color: @black; + margin: 0 0 20px 0; + padding: 0; + &.underlined { + padding-bottom: 2px; + border-bottom: 3px solid @grey-separator; + text-align: center; + } + strong { + font-weight: 800; + } + &.subtitle { + margin-bottom: 10px; + font-size: 19px; + line-height: 28px; + } + } + .learn-more, + .get-started { + font-size: 14px; + font-weight: 600; + letter-spacing: 1.25px; + display: flex; + align-items: center; + text-decoration: none; + &.inline { + display: inline-block; + text-decoration: underline; + font-size: unset; + font-weight: normal; + &:hover { + text-decoration: none; + } + } + &.blue { + color: @blue-5; + } + &.blue:hover { + color: @blue-dark; + } + } + + .learn-more { + margin-top: 20px; + padding: 13px 0; + } + + .box-container { + &.border { + border: 1.5px solid @grey-separator; + border-radius: 5px; + padding: 10px; + } + &::before { + content: ''; + position: absolute; + width: 14px; + height: 14px; + border-radius: 50%; + top: 0; + left: 50%; + transform: translate(-50%, -50%); + } + p { + font-size: 0.875rem; + line-height: 24px; + padding: 0; + } + } + + .telepresence-video { + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 18px; + h2.telepresence-video-title { + font-weight: 400; + font-size: 23px; + line-height: 33px; + color: @blue-6; + } + } + + .video-section { + display: grid; + grid-template-columns: 1fr 1fr; + column-gap: 20px; + @media screen and (max-width: 800px) { + grid-template-columns: 1fr; + } + ul { + font-size: 14px; + margin: 0 10px 6px 0; + } + .video-container { + position: relative; + padding-bottom: 56.25%; // 16:9 aspect ratio + height: 0; + iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } + } + } +} diff --git a/docs/telepresence/latest/redirects.yml b/docs/telepresence/latest/redirects.yml new file mode 100644 index 000000000..c73de44b4 --- /dev/null +++ b/docs/telepresence/latest/redirects.yml @@ -0,0 +1,6 @@ +- {from: "", to: "quick-start"} +- {from: /docs/telepresence/v2.15/quick-start/qs-go, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-java, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-node, to: /docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-python, to: 
/docs/telepresence/v2.15/quickstart/} +- {from: /docs/telepresence/v2.15/quick-start/qs-python-fastapi, to: /docs/telepresence/v2.15/quickstart/} diff --git a/docs/telepresence/latest/reference/architecture.md b/docs/telepresence/latest/reference/architecture.md new file mode 100644 index 000000000..6d45f010d --- /dev/null +++ b/docs/telepresence/latest/reference/architecture.md @@ -0,0 +1,101 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](https://www.getambassador.io/images/documentation/telepresence-architecture.inline.svg) + +
+
+## Telepresence CLI
+
+The Telepresence CLI orchestrates the moving parts on the workstation: it starts the Telepresence Daemons,
+authenticates against Ambassador Cloud, and then acts as a user-friendly interface to the Telepresence User Daemon.
+
+## Telepresence Daemons
+Telepresence has daemons that run on a developer's workstation and act as the main point of communication with the cluster's
+network, handling intercepted traffic.
+
+### User-Daemon
+The User-Daemon coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager).
+All requests from and to the cluster go through this daemon.
+
+When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing open-source User-Daemon and
+allows you to create intercepts on your local machine from Ambassador Cloud.
+
+### Root-Daemon
+The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a
+[Virtual Network Device](../tun-device) (VIF). For a detailed description of how the VIF manages traffic and why it is necessary,
+please refer to this blog post:
+[Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).
+
+## Traffic Manager
+
+The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons
+on developer workstations. It is responsible for injecting the Traffic Agent sidecar into intercepted pods, proxying all
+relevant inbound and outbound traffic, and tracking active intercepts.
+
+The Traffic Manager is installed either by a cluster administrator using a Helm chart, or on demand by the Telepresence
+User Daemon. When the User Daemon performs its initial connect, it first checks the cluster for the Traffic Manager
+deployment and, if it is missing, attempts to install it using its embedded Helm chart.
+
+When an intercept gets created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud
+so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager
+without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview
+URL, it forwards the request to the ingress service specified when the Preview URL was created.
+
+## Traffic Agent
+
+The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent
+container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `telepresence list`
+or `kubectl describe pod `.
+
+Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the
+Traffic Manager so that it gets routed to a developer's workstation, or it will pass it along to the container in the
+pod usually handling requests on that port.
+
+## Ambassador Cloud
+
+Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those
+domains from authorized users to the appropriate Traffic Manager.
+
+Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing the users who have
+accessed them, and deleting them.
+
+## Pod-Daemon
+
+The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that
+it can be inserted into a `Deployment` manifest as an additional container. This allows users to create intercepts completely
+within the cluster, with the benefit that the intercept stays active until the deployment with the Pod-Daemon container is removed.
+
+The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept
+should be run on, and to provide configuration similar to what you would provide when using Telepresence intercepts from the command line.
+
+After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent sidecar](#traffic-agent)
+on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon
+container instead. The Pod-Daemon automatically generates a Preview URL so that the intercept can be accessed from outside the cluster.
+The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.
+
+The Pod-Daemon was created as a component of Deployment Previews. It automatically creates intercepts with development images built
+by CI, so that changes from a pull request can be quickly visualized in a live cluster before they are merged, by accessing the Preview URL
+posted to the associated GitHub pull request.
+
+See the [Deployment Previews quick-start](../../ci/pod-daemon) for information on how to get started with Deployment Previews,
+or for a reference on how the Pod-Daemon can be manually deployed to the cluster.
+
+# Changes from Service Preview
+
+Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an
+annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.
+
+Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept,
+as this functionality has been moved to the Telepresence CLI.
+
+For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not
+required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.
diff --git a/docs/telepresence/latest/reference/client.md b/docs/telepresence/latest/reference/client.md
new file mode 100644
index 000000000..84137db98
--- /dev/null
+++ b/docs/telepresence/latest/reference/client.md
@@ -0,0 +1,31 @@
+---
+description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
+---
+
+# Client reference
+
+The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.
+
+## Commands
+
+A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones.
+You can append `--help` to each command below to get even more information about its usage.
+
+| Command              | Description |
+|----------------------|-------------|
+| `connect`            | Starts the local daemon and connects Telepresence to your cluster, installing the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
+| [`login`](login)     | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
+| `logout`             | Logs out of Ambassador Cloud |
+| `license`            | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
+| `status`             | Shows the current connectivity status |
+| `quit`               | Tells the Telepresence daemons to quit |
+| `list`               | Lists the current active intercepts |
+| `intercept`          | Intercepts a service; run it followed by the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept --port ` (use `port/UDP` to force UDP). This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a Docker container](../docker-run). |
+| `leave`              | Stops an active intercept: `telepresence leave hello` |
+| `preview`            | Create or remove [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
+| `loglevel`           | Temporarily change the log-level of the traffic-manager, traffic-agents, and user and root daemons |
+| `gather-logs`        | Gather logs from the traffic-manager, traffic-agents, and the user and root daemons, and export them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the YAML for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
+| `version`            | Show the version of the Telepresence CLI + Traffic Manager (if connected) |
+| `uninstall`          | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager. |
+| `dashboard`          | Reopens the Ambassador Cloud dashboard in your browser |
+| `current-cluster-id` | Get the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |
diff --git a/docs/telepresence/latest/reference/client/login.md b/docs/telepresence/latest/reference/client/login.md
new file mode 100644
index 000000000..c5a5df7b0
--- /dev/null
+++ b/docs/telepresence/latest/reference/client/login.md
@@ -0,0 +1,60 @@
+# Telepresence Login
+
+```console
+$ telepresence login --help
+Authenticate to Ambassador Cloud
+
+Usage:
+  telepresence login [flags]
+
+Flags:
+      --apikey string   Static API key to use instead of performing an interactive login
+```
+
+## Description
+
+Use `telepresence login` to explicitly authenticate with [Ambassador
+Cloud](https://www.getambassador.io/docs/cloud). Other commands will
+automatically invoke the `telepresence login` interactive login
+procedure as necessary, so it is rarely necessary to explicitly run
+`telepresence login`; doing so should only be truly necessary when you
+require a non-interactive login.
+
+The normal interactive login procedure involves launching a web
+browser, a user interacting with that web browser, and finally having
+the web browser make callbacks to the local Telepresence process. If
+it is not possible to do this (perhaps you are using a headless remote
+box via SSH, or are using Telepresence in CI), then you may instead
+have Ambassador Cloud issue an API key that you pass to `telepresence
+login` with the `--apikey` flag.
+
+## Telepresence
+
+When you run `telepresence login`, the CLI installs
+an enhanced Telepresence binary. This enhanced free client of the [User
+Daemon](../../architecture) communicates with Ambassador Cloud to
+provide freemium features, including the ability to create intercepts from
+Ambassador Cloud.
+
+## Acquiring an API key
+
+1. Log in to Ambassador Cloud at https://app.getambassador.io/ .
+
+2. Click on your profile icon in the upper-left: ![Screenshot with the
+   mouse pointer over the upper-left profile icon](./login/apikey-2.png)
+
+3. Click on the "API Keys" menu button: ![Screenshot with the mouse
+   pointer over the "API Keys" menu button](./login/apikey-3.png)
+
+4. Click on the "generate new key" button in the upper-right:
+   ![Screenshot with the mouse pointer over the "generate new key"
+   button](./login/apikey-4.png)
+
+5. Enter a description for the key (perhaps the name of your laptop,
+   or perhaps "CI"), and click "generate api key" to create it.
+
+You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.
+
+Telepresence will use that "master" API key to create narrower keys
+for different components of Telepresence. You will see these appear
+in the Ambassador Cloud web interface.
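+
+For example, a non-interactive login in CI might look like this (a sketch; `TELEPRESENCE_API_KEY` is an illustrative environment variable holding the key you generated above):
+
+```shell
+# Authenticate without launching a browser
+telepresence login --apikey="$TELEPRESENCE_API_KEY"
+```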
diff --git a/docs/telepresence/latest/reference/client/login/apikey-2.png b/docs/telepresence/latest/reference/client/login/apikey-2.png
new file mode 100644
index 000000000..1379502a9
Binary files /dev/null and b/docs/telepresence/latest/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/latest/reference/client/login/apikey-3.png b/docs/telepresence/latest/reference/client/login/apikey-3.png
new file mode 100644
index 000000000..4559b784d
Binary files /dev/null and b/docs/telepresence/latest/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/latest/reference/client/login/apikey-4.png b/docs/telepresence/latest/reference/client/login/apikey-4.png
new file mode 100644
index 000000000..25c6581a4
Binary files /dev/null and b/docs/telepresence/latest/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/latest/reference/cluster-config.md b/docs/telepresence/latest/reference/cluster-config.md
new file mode 100644
index 000000000..b538c1ef7
--- /dev/null
+++ b/docs/telepresence/latest/reference/cluster-config.md
@@ -0,0 +1,386 @@
import Alert from '@material-ui/lab/Alert';
import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';

# Cluster-side configuration

For the most part, Telepresence doesn't require any special
configuration in the cluster and can be used right away in any
cluster (as long as the user has adequate [RBAC permissions](../rbac)
and the cluster's server version is `1.19.0` or higher).

## Helm Chart configuration

Some cluster-specific configuration can be provided when installing
or upgrading the Telepresence cluster installation using Helm. Once
installed, the Telepresence client will configure itself from values
that it receives when connecting to the Traffic Manager.

See the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence)
for a full list of available configuration settings.

### Values

To add configuration, create a YAML file with the configuration values and then pass it when installing: `telepresence helm install [--upgrade] --values <values filename>`

## Client Configuration

It is possible for the Traffic Manager to push configuration automatically to all
connecting clients. To learn more about this, please see the [client config docs](../config#global-configuration).

### Agent Configuration

The `agent` structure of the Helm chart configures the behavior of the Telepresence agents.

#### Application Protocol Selection

The `agent.appProtocolStrategy` is relevant when using personal intercepts and controls how Telepresence selects the application protocol to use
when intercepting a service that has no `service.ports.appProtocol` declared. The port's `appProtocol` is always trusted if it is present.
Valid values are:

| Value | Resulting action |
|--------------|------------------------------------------------------------------------------------------------------------------------------|
| `http2Probe` | The Telepresence Traffic Agent will probe the intercepted container to check whether it supports http2. This is the default. |
| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http` | Telepresence will use http |
| `http2` | Telepresence will use http2 |

When `portName` is used, Telepresence will determine the protocol from the name of the port, which is expected to be of the form `<protocol>[-suffix]`.
The following protocols are recognized:

| Protocol | Meaning |
|----------|---------------------------------------|
| `http` | Plaintext HTTP/1.1 traffic |
| `http2` | Plaintext HTTP/2 traffic |
| `https` | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc` | Same as http2 |

#### Envoy Configuration

The `agent.envoy` structure contains three values:

| Setting | Meaning |
|--------------|-----------------------------------------------------------|
| `logLevel` | Log level used by the Envoy proxy. Defaults to "warning". |
| `serverPort` | Port used by the Envoy server. Default 18000. |
| `adminPort` | Port used for Envoy administration. Default 19000. |

#### Image Configuration

The `agent.image` structure contains the following values:

| Setting | Meaning |
|------------|-----------------------------------------------------------------------------|
| `registry` | Registry used when downloading the image. Defaults to "docker.io/datawire". |
| `name` | The name of the image. Retrieved from Ambassador Cloud if not set. |
| `tag` | The tag of the image. Retrieved from Ambassador Cloud if not set. |

#### Log level

The `agent.logLevel` controls the log level of the traffic-agent. See [Log Levels](../config/#log-levels) for more info.

#### Resources

The `agent.resources` and `agent.initResources` will be used as the `resources` element when injecting traffic-agents and init-containers.

## TLS

Suppose that other applications in the cluster expect to speak TLS to your
intercepted application (perhaps you're using a service mesh that does
mTLS).

In order to use `--mechanism=http` (or any features that imply
`--mechanism=http`), you need to tell Telepresence about the TLS
certificates in use.

Tell Telepresence about the certificates in use by adjusting your
[workload's](../intercepts/#supported-workloads) Pod template to set a couple of
annotations on the intercepted Pods:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
```

- The `getambassador.io/inject-terminating-tls-secret` annotation
  (optional) names the Kubernetes Secret that contains the TLS server
  certificate to use for decrypting and responding to incoming
  requests.

  When Telepresence modifies the Service and workload port
  definitions to point at the Telepresence Agent sidecar's port
  instead of your application's actual port, the sidecar will use this
  certificate to terminate TLS.

- The `getambassador.io/inject-originating-tls-secret` annotation
  (optional) names the Kubernetes Secret that contains the TLS
  client certificate to use for communicating with your application.

  You will need to set this if your application expects incoming
  requests to speak TLS (for example, your
  code expects to handle mTLS itself instead of letting a service-mesh
  sidecar handle mTLS for it, or the port definition that Telepresence
  modified pointed at the service-mesh sidecar instead of at your
  application).

  If you do set this, you should set it to the
  same client certificate Secret that you configure the Ambassador
  Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace
as the Pod. The Secret will be mounted into the traffic agent's container.
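
For reference, here is a minimal sketch of what a terminating Secret might look like. The name and namespace are placeholders matching the annotation example above, and the base64 data is elided:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: your-terminating-secret   # referenced by the inject-terminating-tls-secret annotation
  namespace: default              # must be the same namespace as the intercepted Pod
type: kubernetes.io/tls
data:
  tls.crt: ""  # base64-encoded server certificate (elided)
  tls.key: ""  # base64-encoded private key (elided)
```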
Telepresence understands `type: kubernetes.io/tls` Secrets and
`type: istio.io/key-and-cert` Secrets, as well as `type: Opaque`
Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

If your cluster is on an isolated network such that it cannot
communicate with Ambassador Cloud, then some additional configuration
is required to acquire a license key in order to use personal
intercepts. A business or enterprise plan is required to generate a license.

### Create a license

1. Log in to [Ambassador Cloud](https://app.getambassador.io/cloud/) and navigate to the licenses page.

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your
kubeconfig context is using the cluster you want to create a license for, then
run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

     Cluster ID: <your cluster ID>
   ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster

There are two separate ways you can add the license to your cluster: manually creating and deploying
the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1. Use this command to generate a Kubernetes Secret config using the license file:

   ```
   $ telepresence license -f <downloaded license file>

     apiVersion: v1
     data:
       hostDomain: <long data string>
       license: <longer data string>
     kind: Secret
     metadata:
       creationTimestamp: null
       name: systema-license
       namespace: ambassador
   ```

2. Save the output as a YAML file and apply it to your
cluster with `kubectl`.

3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting
the following into a file (for this example we'll assume it's called license-values.yaml):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     secret:
       # This tells the helm chart not to create the secret since you've created it yourself
       create: false
   ```

4. Install the Helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

5. Ensure that you have the Docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0)
pulled and in a registry your cluster can pull from.

6. Have users set the `images` [config key](../config/#images) so Telepresence uses the aforementioned image for their agent.

#### Helm chart manages the secret

1. Get the JWT token from the downloaded license file:

   ```
   $ cat ~/Downloads/ambassador.License_for_yourcluster
   eyJhbGnotarealtoken.butanexample
   ```

2. Create the following values file, substituting your real JWT token for the one used in the example below
(for this example we'll assume the following is placed in a file called license-values.yaml):

   ```
   licenseKey:
     # This mounts the secret into the traffic-manager
     create: true
     # This is the value from the license file you downloaded. This value is an example and will not work
     value: eyJhbGnotarealtoken.butanexample
     secret:
       # This tells the helm chart to create the secret
       create: true
   ```

3. Install the Helm chart into the cluster:

   ```
   telepresence helm install -f license-values.yaml
   ```

Users will now be able to use preview intercepts with the
`--preview-url=false` flag.
Even with the license key, preview URLs
cannot be used without enabling direct communication with Ambassador
Cloud, as Ambassador Cloud is essential to their operation.

If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret.

## Mutating Webhook

Telepresence uses a Mutating Webhook to inject the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the
port definitions. This means that an intercepted workload (Deployment, StatefulSet, ReplicaSet) will remain untouched
and in sync as far as GitOps workflows (such as ArgoCD) are concerned.

The injection happens on demand the first time an attempt is made to intercept the workload.

If you want to prevent the injection from ever happening, simply add the `telepresence.getambassador.io/inject-traffic-agent: disabled`
annotation to your workload template's annotations:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        telepresence.getambassador.io/inject-traffic-agent: disabled
     spec:
       containers:
```

### Service Name and Port Annotations

Telepresence will automatically find all services and all ports that will connect to a workload and make them available
for an intercept, but you can explicitly define that only one service and/or port can be intercepted.

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
       annotations:
+        telepresence.getambassador.io/inject-service-name: my-service
+        telepresence.getambassador.io/inject-service-port: https
     spec:
       containers:
```

### Ignore Certain Volume Mounts

The `telepresence.getambassador.io/inject-ignore-volume-mounts` annotation can be used to make the injector ignore certain volume mounts, denoted by a comma-separated string. The specified volume mounts from the original container will not be appended to the agent sidecar container.

```diff
 spec:
   template:
     metadata:
       annotations:
+        telepresence.getambassador.io/inject-ignore-volume-mounts: "foo,bar"
     spec:
       containers:
```

### Note on Numeric Ports

If the targetPort of your intercepted service is pointing at a port number, then in addition to
injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will
reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent.

Note that this initContainer requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent).

For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: ClusterIP
  selector:
    service: your-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  labels:
    service: your-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: your-service
  template:
    metadata:
      annotations:
        telepresence.getambassador.io/inject-traffic-agent: enabled
      labels:
        service: your-service
    spec:
      containers:
        - name: your-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
```

## Excluding Environment Variables

If your pod contains sensitive variables like a database password or a third-party API key, you may want to exclude those from being propagated through an intercept.
Telepresence allows you to configure this through a ConfigMap that is read when the agent is injected; the listed variables are then removed from the intercept's environment.

This can be done in two ways:

When installing your traffic-manager through Helm, you can use the `--set` flag and pass a comma-separated list of variables:

`telepresence helm install --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

This also applies when upgrading:

`telepresence helm upgrade --set intercept.environment.excluded="{DATABASE_PASSWORD,API_KEY}"`

Once this is completed, the environment variables will no longer be in the environment file created by an intercept.

The other way to accomplish this is in your custom `values.yaml`. Customizing your traffic-manager through a values file is described [here](../../install/manager).

```yaml
intercept:
  environment:
    excluded: ['DATABASE_PASSWORD', 'API_KEY']
```

You can exclude any number of variables; they just need to match the `key` of the variable within a pod to be excluded.

diff --git a/docs/telepresence/latest/reference/config.md b/docs/telepresence/latest/reference/config.md
new file mode 100644
index 000000000..d3472eb01
--- /dev/null
+++ b/docs/telepresence/latest/reference/config.md
@@ -0,0 +1,374 @@
# Laptop-side configuration

There are a number of configuration values that can be tweaked to change how Telepresence behaves.
These can be set in two ways: globally, by a platform engineer with powers to deploy the Telepresence Traffic Manager, or locally by any user.
One important exception is the location of the Traffic Manager itself, which, if it's different from the default of `ambassador`, [must be set](#manager) locally per-cluster to be able to connect.

## Global Configuration

Global configuration is set at the Traffic Manager level and applies to any user connecting to that Traffic Manager.
To set it, simply pass in a `client` dictionary to the `helm install` command, with any config values you wish to set.

### Values

The `client` config supports values for `timeouts`, `logLevels`, `images`, `cloud`, `grpc`, `dns`, and `routing`.
Here is an example configuration to show you the conventions of how Telepresence is configured.
**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
client:
  timeouts:
    agentInstall: 1m
    intercept: 10s
  logLevels:
    userDaemon: debug
  images:
    registry: privateRepo # This overrides the default docker.io/datawire repo
    agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
  cloud:
    refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
  grpc:
    maxReceiveSize: 10Mi
  telepresenceAPI:
    port: 9980
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    lookupTimeout: 30s
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
    neverProxySubnets:
      - 1.2.3.4/32
```

#### Timeouts

Values for `client.timeouts` are all durations, either as a number of seconds
or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings
can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:

| Field | Description | Type | Default |
|-------------------------|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|------------|
| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |

#### Log Levels

Values for the `client.logLevels` fields are one of the following strings,
case-insensitive:

 - `trace`
 - `debug`
 - `info`
 - `warning` or `warn`
 - `error`

For whichever log level you select, you will get logs labeled with that level and of higher severity
(e.g. if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`).
These are the valid fields for the `client.logLevels` key:

| Field | Description | Type | Default |
|--------------|---------------------------------------------------------------------|---------------------------------------------|---------|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |

#### Images

Values for `client.images` are strings. These values affect the objects that are deployed in the cluster,
so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them
from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook)
to handle installation of the `traffic-agents`.

These are the valid fields for the `client.images` key:

| Field | Description | Type | Default |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|----------------------|
| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image. *The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) |
| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage`. *This value is only used if a new `traffic-manager` is deployed.* | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods. *This value is only used if a new `traffic-manager` is deployed.* | non-qualified Docker image name [string][yaml-str] | (unset) |

#### Cloud

Values for `client.cloud` are listed below and their type varies, so please see the chart for the expected type for each config value.
These fields control how the client interacts with the Cloud service.
| Field | Description | Type | Default |
|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|----------------------|
| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h |
| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io |
| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 |

Telepresence attempts to auto-detect if the cluster is capable of
communication with Ambassador Cloud, but in cases where only the on-laptop client wishes to communicate with
Ambassador Cloud, Telepresence may still prompt you to log in.

Reminder: To use personal intercepts, which normally require a login,
you must have a license key in your cluster and specify which
`agentImage` should be installed by also adding the following to your
`config.yml`:

```yaml
images:
  agentImage: <registry>/<image>
```

#### Air-gapped clients

If your laptop is on an isolated network, you will need an [air-gapped license](../cluster-config/#air-gapped-cluster) in your cluster. Telepresence will check for this license before requiring a login.

#### Grpc

The `maxReceiveSize` determines how large a message the workstation can receive via gRPC. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC.

The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value:
```
128974848, 129e6, 129M, 123Mi
```

#### RESTful API server

The `client.telepresenceAPI` controls the behavior of Telepresence's RESTful API server, which can be queried for additional information about ongoing intercepts. When present, and the `port` is set to a valid port number, it's propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:<port>`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### DNS

The `client.dns` configuration offers options for configuring the DNS resolution behavior in a client application or system.
The available fields for `client.dns` are `localIP`, `excludeSuffixes`, `includeSuffixes`, and `lookupTimeout`.
| Field | Description | Type | Default |
|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|--------------------------------------------------------------------------|
| `localIP` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
| `excludeSuffixes` | Suffixes for which the DNS resolver will always fail (or fall back in case of the overriding resolver). Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` |
| `includeSuffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. Can be globally configured in the Helm chart. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` |
| `lookupTimeout` | Maximum time to wait for a cluster-side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds |

Here is an example values.yaml:
```yaml
client:
  dns:
    includeSuffixes: [.private]
    excludeSuffixes: [.se, .com, .io, .net, .org, .ru]
    localIP: 8.8.8.8
    lookupTimeout: 30s
```

##### Mappings

Allows you to map hostnames to aliases. This is useful when you want to redirect traffic from one service to another within the cluster.

In the given cluster, the service named `postgres` is located within a separate namespace titled `big-data`, and it's referred to as `psql`:

```yaml
dns:
  mappings:
    - name: postgres
      aliasFor: psql.big-data
```

##### Exclude

Lists service names to be excluded from the Telepresence DNS server. This is useful when you want your application to interact with a local service instead of a cluster service. In this example, "redis" will not be resolved by the cluster, but locally.

```yaml
dns:
  excludes:
    - redis
```

#### Routing

##### AlsoProxySubnets

When using `alsoProxySubnets`, you provide a list of subnets to be added to the TUN device.
All connections to addresses that those subnets span will be dispatched to the cluster.

Here is an example values.yaml for the subnet `1.2.3.4/32`:
```yaml
client:
  routing:
    alsoProxySubnets:
      - 1.2.3.4/32
```

##### NeverProxySubnets

When using `neverProxySubnets`, you provide a list of subnets. These will never be routed via the TUN device,
even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before
Telepresence connects is the route they will keep.

Here is an example values.yaml for the subnet `1.2.3.4/32`:

```yaml
client:
  routing:
    neverProxySubnets:
      - 1.2.3.4/32
```

##### Using AlsoProxy together with NeverProxy

Never-proxy and also-proxy are implemented as routing rules, meaning that when the two conflict, regular routing rules apply.
Usually this means that the most specific route will win.

So, for example, if an `alsoProxySubnets` subnet falls within a broader `neverProxySubnets` subnet:

```yaml
neverProxySubnets: [10.0.0.0/16]
alsoProxySubnets: [10.0.5.0/24]
```

Then the specific `alsoProxySubnets` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not.
Conversely, if a `neverProxySubnets` subnet is inside a larger `alsoProxySubnets` subnet:

```yaml
alsoProxySubnets: [10.0.0.0/16]
neverProxySubnets: [10.0.5.0/24]
```

Then all of the `alsoProxySubnets` of `10.0.0.0/16` will be proxied, with the exception of the specific `neverProxySubnets` of `10.0.5.0/24`.

## Local Overrides

In addition, it is possible to override each of these variables at the local level by setting up new values in local config files.
There are two types of config values that can be set locally: those that apply to all clusters, which are set in a single `config.yml` file, and those
that only apply to specific clusters, which are set as extensions to the `$KUBECONFIG` file.

### Config for all clusters

Telepresence uses a `config.yml` file to store and change those configuration values that will be used for all clusters you use Telepresence with.
The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.
The definitions of these values are identical to those values in the `client` config above.

Here is an example configuration to show you the conventions of how Telepresence is configured.
**Note: this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.**

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
```

## Workstation Per-Cluster Configuration

Configuration that is specific to a cluster can also be overridden per-workstation by modifying your `$KUBECONFIG` file.
It is recommended that you do not do this, and instead rely on upstream values provided to the Traffic Manager. This ensures
that all users that connect to the Traffic Manager will have the same routing and DNS resolution behavior.
An important exception to this is the [`manager.namespace` configuration](#manager), which must be set locally.

### Values

The kubeconfig supports values for `dns`, `also-proxy`, `never-proxy`, and `manager`.
Example kubeconfig:
```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
        dns:
          include-suffixes: [.private]
          exclude-suffixes: [.se, .com, .io, .net, .org, .ru]
          local-ip: 8.8.8.8
          lookup-timeout: 30s
        never-proxy: [10.0.0.0/16]
        also-proxy: [10.0.5.0/24]
  name: example-cluster
```

#### Manager

This is the one cluster configuration that cannot be set using the Helm chart, because it defines how Telepresence connects to
the Traffic Manager. When not default, that setting needs to be configured in the workstation's kubeconfig for the cluster.

The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
  - cluster:
      server: https://127.0.0.1
      extensions:
        - name: telepresence.io
          extension:
            manager:
              namespace: staging
    name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45

diff --git a/docs/telepresence/latest/reference/dns.md b/docs/telepresence/latest/reference/dns.md
new file mode 100644
index 000000000..2f263860e
--- /dev/null
+++ b/docs/telepresence/latest/reference/dns.md
@@ -0,0 +1,80 @@
# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster server>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)
  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected, as Telepresence cannot reach the service by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  <!DOCTYPE html>
  <html>
  <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
            'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  <!DOCTYPE html>
  <html>
  <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Now curling that service by its short name works, and it will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

### Supported Query Types

The Telepresence DNS resolver is now capable of resolving queries of type `A`, `AAAA`, `CNAME`,
`MX`, `NS`, `PTR`, `SRV`, and `TXT`.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.

diff --git a/docs/telepresence/latest/reference/docker-run.md b/docs/telepresence/latest/reference/docker-run.md
new file mode 100644
index 000000000..27b2f316f
--- /dev/null
+++ b/docs/telepresence/latest/reference/docker-run.md
@@ -0,0 +1,90 @@
---
description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

## Use the Intercept Specification

The recommended way to use Telepresence with Docker is to create an [Intercept Specification](../intercepts/specs) that uses Docker images as intercept handlers.

## Using command flags

### The docker flag

You can start the Telepresence daemon in a Docker container on your laptop using the command:

```console
$ telepresence connect --docker
```

The `--docker` flag is a global flag; if it is passed directly, as in `telepresence intercept --docker`, then the implicit connect that takes place when no connection is active will use a container-based daemon.

### The docker-run flag

If you want your intercept to go to another Docker container, you can use the `--docker-run` flag. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

```console
$ telepresence intercept <service name> --port <port> --docker-run -- <arguments>
```

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

It's recommended that you always use the `--docker-run` flag in combination with the global `--docker` flag, because that makes everything less intrusive:
- No admin user access is needed. Network modifications are confined to a Docker network.
- There's no need for special filesystem mount software like MacFUSE or WinFSP. The volume mounts happen in the Docker engine.

The following happens under the hood when both flags are in use:

- The network for the intercept handler will be set to the same network used by the daemon. This guarantees that the
  intercept handler can access the Telepresence VIF, and hence has access to the cluster.
- Volume mounts will be automatic and made using the Telemount Docker volume plugin so that all volumes exposed by the intercepted
  container are mounted on the intercept handler container.
- The environment of the intercepted container becomes the environment of the intercept handler container.
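
Putting this together, a minimal end-to-end sketch (where `my-service` and `my-image:dev` are placeholder names, not part of the CLI):

```console
$ telepresence intercept --docker my-service --port 8080 --docker-run -- my-image:dev
```

The implicit connect triggered by the intercept starts a container-based daemon, and the intercept handler then runs in its own container on that daemon's network.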
### The docker-build flag

The `--docker-build <docker context>` flag and the repeatable `--docker-build-opt key=value` flag enable containers to be built on the fly by the intercept command.

When using `--docker-build`, the image name used in the argument list must be verbatim `IMAGE`. The word acts as a placeholder and will be replaced by the ID of the image that is built.

The `--docker-build` flag implies `--docker-run`.

## Using the docker-run flag without docker

It is possible to use `--docker-run` with a daemon running on your host, which is the default behavior of Telepresence.

However, it isn't recommended, since you'll be in a hybrid mode: while your intercept runs in a container, the daemon will modify the host network, and if remote mounts are desired, they may require extra software.

The ability to use this special combination is retained for backward compatibility reasons. It might be removed in a future release of Telepresence.

The `--port` flag has slightly different semantics and can be used in situations when the local and container ports must be different. This
is done using `--port <local port>:<container port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Examples

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container:

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-run -- frontend-v2
```

Now, imagine that the `frontend-v2` image is built by a `Dockerfile` that resides in the directory `images/frontend-v2`. You can build and intercept directly:

```console
$ telepresence intercept --docker frontend-v1 --port 8000 --docker-build images/frontend-v2 --docker-build-opt tag=mytag -- IMAGE
```

## Automatic flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<name>-<port>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-v <local mount dir>:<container mount dir>` Volume mount specification; see CLI help for the `--docker-mount` flag for more info

When used with a container-based daemon:
- `--rm` Mandatory, because the volume mounts cannot be removed until the container is removed.
- `-v <volume>:<container mount dir>` Volume mount specifications propagated from the intercepted container

When used with a daemon that isn't container-based:
- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
- `-p <port>` The local port for the intercept and the container port

diff --git a/docs/telepresence/latest/reference/environment.md b/docs/telepresence/latest/reference/environment.md
new file mode 100644
index 000000000..7f83ff119
--- /dev/null
+++ b/docs/telepresence/latest/reference/environment.md
@@ -0,0 +1,46 @@
---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the local copy of the intercepted service running on your laptop.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, for example with Bash:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT

Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS

Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER

The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID

ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process.

diff --git a/docs/telepresence/latest/reference/inside-container.md b/docs/telepresence/latest/reference/inside-container.md
new file mode 100644
index 000000000..48a38b5a3
--- /dev/null
+++ b/docs/telepresence/latest/reference/inside-container.md
@@ -0,0 +1,19 @@
# Running Telepresence inside a container

All Telepresence commands now have the global option `--docker`. This option tells Telepresence to start the Telepresence daemon in a
Docker container.

Running the daemon in a container brings many advantages. The daemon will no longer make modifications to the host's network or DNS, and
it will not mount files in the host's filesystem. Consequently, it will not need admin privileges to run, nor will it need special software
like macFUSE or WinFSP to mount the remote file systems.
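
For instance, after a containerized connect you can observe the daemon with plain Docker tooling (a sketch; the daemon container's name varies by version and connection):

```console
$ telepresence connect --docker
$ docker ps --format '{{.Names}}'   # the daemon shows up as a regular container
```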
The intercept handler (the process that will receive the intercepted traffic) must also be a Docker container, because that is the only
way to access the cluster network that the daemon makes available, and to mount the Docker volumes needed.

It's highly recommended that you use the new [Intercept Specification](../intercepts/specs) to set things up. That way, Telepresence can do
all the plumbing needed to start the intercept handler with the correct environment and volume mounts.
Otherwise, doing a fully container-based intercept manually, with all the bells and whistles, is a complicated process that involves:
- Capturing the details of an intercept
- Ensuring that the [Telemount](https://github.com/datawire/docker-volume-telemount#readme) Docker volume plugin is installed
- Creating volumes for all remotely exposed directories
- Starting the intercept handler container using the same network as the daemon

diff --git a/docs/telepresence/latest/reference/intercepts/cli.md b/docs/telepresence/latest/reference/intercepts/cli.md
new file mode 100644
index 000000000..d7e482329
--- /dev/null
+++ b/docs/telepresence/latest/reference/intercepts/cli.md
@@ -0,0 +1,335 @@
import Alert from '@material-ui/lab/Alert';

# Configuring intercept using CLI

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the
`--namespace` option. When this option is used, and `--workload` is
not used, then the given name is interpreted as the name of the
workload and the name of the intercept will be constructed from that
name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept
`hello-myns`. In order to remove the intercept, you will need to run
`telepresence leave hello-myns` instead of just `telepresence leave
hello`.

The name of the intercept will be left unchanged if the workload is specified:

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is
being intercepted; see [this doc](../../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command
will intercept all traffic bound to the service and proxy it to your
laptop. This includes traffic coming through your ingress controller,
so use this option carefully so as not to disrupt production
environments.

```shell
telepresence intercept <service name> --port=<port>
```

If you *are* logged in to Ambassador Cloud, setting the
`--preview-url` flag to `false` is necessary:

```shell
telepresence intercept <service name> --port=<port> --preview-url=false
```

This will output an HTTP header that you can set on your request for
that traffic to be intercepted:

```console
$ telepresence intercept <service name> --port=<port> --preview-url=false
Using Deployment <name of deployment>
intercepted
    Intercept name: <full name of intercept>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<local port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<intercept id>:<full name of intercept>")
```

Run `telepresence status` to see the list of active intercepts.
```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster server>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop username>@<laptop name>
```

Finally, run `telepresence leave <name of intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag | Description | Required |
|------------------|-------------------------------------------------------------------|----------|
| `--ingress-host` | The IP address for the ingress | yes |
| `--ingress-port` | The port for the ingress | yes |
| `--ingress-tls` | Whether TLS should be used | no |
| `--ingress-l5` | Whether a different IP address should be used in request headers | no |

## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you
need to tell Telepresence which service port you are trying to
intercept. To specify, you can either use the name of the service
port or the port number itself. To see which options might be
available to you and your service, use kubectl to describe your
service or look in the object's YAML. For more information on multiple
ports, see the [Kubernetes documentation][kube-multi-port-services].

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <service name> --port=<port>:<service port identifier>
Using Deployment <name of deployment>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local port>
    Service Port Identifier: <service port identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the
service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create
a new intercept the same way you did above and it will change which
service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a
workload, so Telepresence is able to auto-detect which service it
should intercept based on the workload you are trying to intercept.
But if you use something like
[Argo](https://www.getambassador.io/docs/argo/latest/), there may be
two services (that use the same labels) to manage traffic between a
canary and a stable service.

Fortunately, if you know which service you want to use when
intercepting a workload, you can use the `--service` flag.
So in the
aforementioned example, if you wanted to use the `echo-stable` service
when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generated hash> --port <port> --service echo-stable
Using ReplicaSet echo-rollout-<generated hash>
intercepted
    Intercept name    : echo-rollout-<generated hash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Intercepting multiple ports

It is possible to intercept more than one service and/or service port that are using the same workload. You do this
by creating more than one intercept that identifies the same workload using the `--workload` flag.

Let's assume that we have a service `multi-echo` with the two ports `http` and `grpc`. They are both
targeting the same `multi-echo` deployment.

```console
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-http
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8080
    Service Port Identifier: http
    Volume Mount Point     : /tmp/telfs-893700837
    Intercepting           : all TCP requests
    Preview URL            : https://sleepy-bassi-1140.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
intercepted
    Intercept name         : multi-echo-grpc
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:8443
    Service Port Identifier: grpc
    Volume Mount Point     : /tmp/telfs-1277723591
    Intercepting           : all TCP requests
    Preview URL            : https://upbeat-thompson-6613.preview.edgestack.me
    Layer 5 Hostname       : multi-echo.default.svc.cluster.local
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application
container; they usually provide auxiliary functionality to an
application and can usually be reached at
`localhost:${SIDECAR_PORT}`. For example, a common use case for a
sidecar is to proxy requests to a database: your application would
connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
connect to the database, perhaps augmenting the connection with TLS or
authentication.

When intercepting a container that uses sidecars, you might want those
sidecars' ports to be available to your local application at
`localhost:${SIDECAR_PORT}`, exactly as they would be if running
in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <service name> --port=<port>:<service port identifier> --to-pod=<sidecar port>
Using Deployment <name of deployment>
intercepted
    Intercept name         : <full name of intercept>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local port>
    Service Port Identifier: <service port identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the
flag (`--to-pod=<port1> --to-pod=<port2>`).

## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

This utilizes an initContainer that requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.

This also requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`

Intercepting headless services without a selector is not supported.

## Sharing intercepts with teammates

Once a combination of flags to easily intercept a service has been found, it's useful to share it with teammates. You
can do that easily by going to [Ambassador Cloud -> Intercepts history](https://app.getambassador.io/cloud/saved-intercepts),
picking the intercept command from the history tab, and creating a Saved Intercept by giving it a name. The intercept
command will then be easily accessible for all your teammates. Note that this requires the free enhanced
client to be installed and to be logged in (`telepresence login`).

To instantiate an intercept based on a saved intercept, simply run
`telepresence intercept --use-saved-intercept <saved intercept name>`. When logged in, the command will first check for a
saved intercept in Ambassador Cloud and will use it if found; otherwise an error will be returned.

Saved Intercepts can be [managed through Ambassador Cloud](../../../../../cloud/latest/telepresence-saved-intercepts).

## Specifying the intercept traffic target

By default, it's assumed that your local app is reachable on `127.0.0.1`, and intercepted traffic will be sent to that IP
at the port given by `--port`. If you wish to change this behavior and send traffic to a different IP address, you can use the `--address` parameter
to `telepresence intercept`. Say your machine is configured to respond to HTTP requests for an intercept on `172.16.0.19:8080`.
You would run this as:

```console
$ telepresence intercept echo-easy --address 172.16.0.19 --port 8080
Using Deployment echo-easy
   Intercept name         : echo-easy
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 172.16.0.19:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
   Intercepting           : HTTP requests with headers
         'x-telepresence-intercept-id: 8e0dd8ea-b55a-43bd-ad04-018b9de9cfab:echo-easy'
   Preview URL            : https://laughing-curran-5375.preview.edgestack.me
   Layer 5 Hostname       : echo-easy.default.svc.cluster.local
```
diff --git a/docs/telepresence/latest/reference/intercepts/index.md b/docs/telepresence/latest/reference/intercepts/index.md
new file mode 100644
index 000000000..5b317aeec
--- /dev/null
+++ b/docs/telepresence/latest/reference/intercepts/index.md
@@ -0,0 +1,61 @@

import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, the Telepresence Traffic Manager ensures that a Traffic Agent has been injected into the intercepted workload. The injection is triggered by a Kubernetes Mutating Webhook and will only happen once. The Traffic Agent is responsible for redirecting intercepted traffic to the developer's workstation.

An intercept is either global or personal.

### Global intercept

This intercept will intercept all `tcp` and/or `udp` traffic to the intercepted service and send all of that traffic down to the developer's workstation. This means that a global intercept will affect all users of the intercepted service.

### Personal intercept

This intercept will intercept specific HTTP requests, allowing other HTTP requests through to the regular service. The selection is based on HTTP headers or paths, and allows for intercepts which only intercept traffic tagged as belonging to a given developer.

There are two ways of configuring an intercept:
- one from the [CLI](./cli) directly
- one from an [Intercept Specification](./specs)

## Intercept behavior when using single-user versus team mode

Switching the Traffic Manager from `single-user` mode to `team` mode changes the Telepresence defaults in two ways.

First, in team mode, Telepresence will require that the user is logged in to Ambassador Cloud, or is using an API key. Team mode also causes Telepresence to default to a personal intercept using `--http-header=auto --http-path-prefix=/`. Personal intercepts are important for working in a shared cluster with teammates, and are important for the preview URL functionality below. See `telepresence intercept --help` for information on using the `--http-header` and `--http-path-xxx` flags to customize which requests are intercepted.

Secondly, team mode causes Telepresence to default to `--preview-url=true`. This tells Telepresence to take advantage of Ambassador Cloud to create a preview URL for this intercept, creating a shareable URL that automatically sets the appropriate headers to have requests coming from the preview URL be intercepted.

## Supported workloads

Kubernetes has various [workloads](https://kubernetes.io/docs/concepts/workloads/). Currently, Telepresence supports intercepting (installing a traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.
<Alert severity="info">
While many of our examples use Deployments, they would also work on ReplicaSets and StatefulSets.
</Alert>
diff --git a/docs/telepresence/latest/reference/intercepts/manual-agent.md b/docs/telepresence/latest/reference/intercepts/manual-agent.md
new file mode 100644
index 000000000..8c24d6dbe
--- /dev/null
+++ b/docs/telepresence/latest/reference/intercepts/manual-agent.md
@@ -0,0 +1,267 @@

import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept for the first time on a Pod, the [Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) will automatically inject a Traffic Agent sidecar into it. There might be some situations where this approach cannot be used, such as very strict company security policies preventing it.

<Alert severity="warning">
Although it is possible to manually inject the Traffic Agent, it is not the recommended approach to making a workload interceptable. Try the Mutating Webhook before proceeding.
</Alert>

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page uses the following Deployment and Service. It's a prerequisite that they have been applied to the cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```

### 1. Generating the YAML

First, generate the YAML for the traffic-agent configmap entry.
It's important that the generated file has the same name as the service, and no extension:

```console
$ telepresence genyaml config --workload my-service -o /tmp/my-service
$ cat /tmp/my-service
agentImage: docker.io/datawire/tel2:2.7.0
agentName: my-service
containers:
- Mounts: null
  envPrefix: A_
  intercepts:
  - agentPort: 9900
    containerPort: 8080
    protocol: TCP
    serviceName: my-service
    servicePort: 80
    serviceUID: f6680334-10ef-4703-aa4e-bb1f9d1665fd
  mountPoint: /tel_app_mounts/echo-container
  name: echo-container
logLevel: info
managerHost: traffic-manager.ambassador
managerPort: 8081
manual: true
namespace: default
workloadKind: Deployment
workloadName: my-service
```

Next, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --config /tmp/my-service -o /tmp/my-service-agent.yaml
$ cat /tmp/my-service-agent.yaml
args:
- agent
env:
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.podIP
image: docker.io/datawire/tel2:2.7.0-beta.12
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
- mountPath: /etc/traffic-agent
  name: traffic-config
- mountPath: /tel_app_exports
  name: export-volume
```

Next, generate the YAML for the init-container:

```console
$ telepresence genyaml initcontainer --config /tmp/my-service -o /tmp/my-service-init.yaml
$ cat /tmp/my-service-init.yaml
args:
- agent-init
image: docker.io/datawire/tel2:2.7.0-beta.12
name: tel-agent-init
resources: {}
securityContext:
  capabilities:
    add:
    - NET_ADMIN
volumeMounts:
- mountPath: /etc/traffic-agent
  name: traffic-config
```

Next, generate the YAML for the volumes:

```console
$ telepresence genyaml volume --workload my-service -o /tmp/my-service-volume.yaml
$ cat /tmp/my-service-volume.yaml
- downwardAPI:
    items:
    - fieldRef:
        apiVersion: v1
        fieldPath: metadata.annotations
      path: annotations
  name: traffic-annotations
- configMap:
    items:
    - key: my-service
      path: config.yaml
    name: telepresence-agents
  name: traffic-config
- emptyDir: {}
  name: export-volume
```

<Alert severity="info">
Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
</Alert>

### 2. Creating (or updating) the configmap

The generated configmap entry must be inserted into the `telepresence-agents` `ConfigMap` in the same namespace as the modified `Deployment`. If the `ConfigMap` doesn't exist yet, it can be created using the following command:

```console
$ kubectl create configmap telepresence-agents --from-file=/tmp/my-service
```

If it already exists, new entries can be added under the `Data` key using `kubectl edit configmap telepresence-agents`.

### 3. Injecting the YAML into the Deployment

You need to modify the `Deployment` YAML to include the generated container, init-container, and volumes. These are placed as elements of `spec.template.spec.containers`, `spec.template.spec.initContainers`, and `spec.template.spec.volumes`, respectively. You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following:

```diff
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: "my-service"
   labels:
     service: my-service
 spec:
   replicas: 1
   selector:
     matchLabels:
       service: my-service
   template:
     metadata:
       labels:
         service: my-service
+      annotations:
+        telepresence.getambassador.io/manually-injected: "true"
     spec:
       containers:
         - name: echo-container
           image: jmalloc/echo-server
           ports:
             - containerPort: 8080
           resources: {}
+        - args:
+            - agent
+          env:
+            - name: _TEL_AGENT_POD_IP
+              valueFrom:
+                fieldRef:
+                  apiVersion: v1
+                  fieldPath: status.podIP
+          image: docker.io/datawire/tel2:2.7.0-beta.12
+          name: traffic-agent
+          ports:
+            - containerPort: 9900
+              protocol: TCP
+          readinessProbe:
+            exec:
+              command:
+                - /bin/stat
+                - /tmp/agent/ready
+          resources: { }
+          volumeMounts:
+            - mountPath: /tel_pod_info
+              name: traffic-annotations
+            - mountPath: /etc/traffic-agent
+              name: traffic-config
+            - mountPath: /tel_app_exports
+              name: export-volume
+      initContainers:
+        - args:
+            - agent-init
+          image: docker.io/datawire/tel2:2.7.0-beta.12
+          name: tel-agent-init
+          resources: { }
+          securityContext:
+            capabilities:
+              add:
+                - NET_ADMIN
+          volumeMounts:
+            - mountPath: /etc/traffic-agent
+              name: traffic-config
+      volumes:
+        - downwardAPI:
+            items:
+              - fieldRef:
+                  apiVersion: v1
+                  fieldPath: metadata.annotations
+                path: annotations
+          name: traffic-annotations
+        - configMap:
+            items:
+              - key: my-service
+                path: config.yaml
+            name: telepresence-agents
+          name: traffic-config
+        - emptyDir: { }
+          name: export-volume
```
diff --git a/docs/telepresence/latest/reference/intercepts/specs.md b/docs/telepresence/latest/reference/intercepts/specs.md
new file mode 100644
index 000000000..9ac074c2e
--- /dev/null
+++ b/docs/telepresence/latest/reference/intercepts/specs.md
@@ -0,0 +1,467 @@

# Configuring intercept using specifications

This page references the different options available in the Telepresence intercept specification.

With Telepresence, you can provide a file that defines how an intercept should work.

## Root

Your intercept specification is where you can create a standard, easy-to-use configuration to run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic.

There are many ways to configure your specification to suit your needs; the table below shows the possible options within your specification, and you can see the spec's schema, with all available options and formats, [here](#ide-integration).

| Options | Description |
|---|---|
| [name](#name) | Name of the specification. |
| [connection](#connection) | Connection properties to use when Telepresence connects to the cluster. |
| [handlers](#handlers) | Local processes to handle traffic and/or setup. |
| [prerequisites](#prerequisites) | Things to set up prior to starting any intercepts, and tear down once the intercept is complete. |
| [workloads](#workloads) | Remote workloads that are intercepted, keyed by workload name. |

### Name

The name is optional. If you don't specify a name, the filename of the specification file will be used.

```yaml
name: echo-server-spec
```

### Connection

The connection option is used to define how Telepresence connects to your cluster.
```yaml
connection:
  context: "shared-cluster"
  mappedNamespaces:
    - "my_app"
```

You can pass the most common parameters from the `telepresence connect` command (`telepresence connect --help`) using a camel case format.

Some of the most commonly used options include:

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | The Kubernetes context to use |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with |

## Handlers

A handler is code running locally.

It can receive traffic for an intercepted service, or can set up prerequisites to run before/after the intercept itself.

When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third-party service, ...) running on your machine. A handler can be a Docker container, or an application running natively.

The sample below creates an intercept handler, giving it the name `echo-server` and using a Docker container. The container will automatically have access to the ports, environment, and mounted directories of the intercepted container.

<Alert severity="info">
The ports field is important for the intercept handler while running in Docker; it indicates which ports should be exposed to the host. If you want to access the handler locally (to attach a debugger to your container, for example), this field must be provided.
</Alert>

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    docker:
      image: jmalloc/echo-server:latest
      ports:
        - 8080
```

If you don't want to use Docker containers, you can still configure your handlers to start via a regular script. The snippet below shows how to create a handler called `echo-server` that sets an environment variable of `PORT=8080` and starts the application.

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: bin/echo-server
```

Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example, simulate an intercepted service going down:

```yaml
handlers:
  - name: no-op
```

The table below defines the parameters that can be used within the handlers section.
| Options | Type | Format | Description |
|---|---|---|---|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler that the intercepts use to reference it |
| environment | map list | N/A | Defines environment variables within your handler |
| environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable |
| environment[*].value | string | N/A | The value for the environment variable |
| [script](#script) | map | N/A | Tells the handler to run as a script; mutually exclusive with docker |
| [docker](#docker) | map | N/A | Tells the handler to run as a Docker container; mutually exclusive with script |

### Script

The handler's script element defines the parameters:

| Options | Type | Format | Description |
|---|---|---|---|
| run | string | N/A | The script to run. Can be multi-line |
| shell | string | bash\|zsh\|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the `SHELL` environment variable |

### Docker

The handler's docker element defines the parameters. The `build` and `image` parameters are mutually exclusive:

| Options | Type | Format | Description |
|---|---|---|---|
| [build](#build) | map | N/A | Defines how to build the image from source using the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command |
| [compose](#compose) | map | N/A | Defines how to integrate with an existing Docker Compose file |
| image | string | image | Defines which image to be used |
| ports | int list | N/A | The ports which should be exposed to the host |
| options | string list | N/A | Options for docker run [options](https://docs.docker.com/engine/reference/commandline/run/#options) |
| command | string | N/A | Optional command to run |
| args | string list | N/A | Optional command arguments |

#### Build

The docker build element defines the parameters:

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | Defines either a path to a directory containing a Dockerfile, or a URL to a git repository |
| args | string list | N/A | Additional arguments for the docker build command. |

For additional information on these parameters, please check the docker [documentation](https://docs.docker.com/engine/reference/commandline/run).

#### Compose

The Docker Compose element defines the way to integrate with the tool of the same name.

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | An optional Docker context, meaning the path to / or the directory containing your docker compose file |
| [services](#service) | map list | | The services to use with the Telepresence integration |
| spec | map | compose spec | Optional embedded docker compose specification. |
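
As a sketch of the `spec` option, the snippet below embeds a minimal Compose specification directly in the handler instead of pointing at a compose file on disk; the `myapp` service name and the image are illustrative assumptions:

```yaml
handlers:
  - name: myapp
    docker:
      compose:
        spec:
          # Embedded Compose specification; follows the standard Compose schema
          services:
            myapp:
              image: jmalloc/echo-server:latest
              ports:
                - "8080"
```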
##### Service

The service describes how to integrate with each service from your Docker Compose file, and can be seen as an override mechanism. A service is normally not provided when you want to keep the original behavior, but can be provided for documentation purposes using the `local` behavior.

A service can be declared either as a property of `compose` in the Intercept Specification, or as an `x-telepresence` extension in the Docker Compose specification. The syntax is the same in both cases, but the `name` property must not be used together with `x-telepresence` because it is implicit.

| Options | Type | Format | Description |
|---|---|---|---|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file |
| [behavior](#behavior) | string | interceptHandler\|remote\|local | Behavior of the service in the context of the intercept. |
| [mapping](#mapping) | map | | Optional mapping to a cluster service. Only applicable for `behavior: remote` |

###### Behavior

| Value | Description |
|---|---|
| interceptHandler | The service runs locally and will receive traffic from the intercepted pod. |
| remote | The service will not run as part of docker compose. Instead, traffic is redirected to a service in the cluster. |
| local | The service runs locally without modifications. This is the default. |

###### Mapping

| Options | Type | Description |
|---|---|---|
| name | string | The name of the cluster service to link the compose service with |
| namespace | string | The cluster namespace for the service. This is optional and defaults to the namespace of the intercept |

**Examples**

Considering the following Docker Compose file:

```yaml
services:
  redis:
    image: redis:6.2.6
    ports:
      - "6379"
  postgres:
    image: "postgres:14.1"
    ports:
      - "5432"
  myapp:
    build:
      # Directory containing the Dockerfile and source code
      context: ../../myapp
    ports:
      - "8080"
    volumes:
      - .:/code
    environment:
      DEV_MODE: "true"
```

This will use the myapp service as the interceptor:

```yaml
services:
  - name: myapp
    behavior: interceptHandler
```

This will prevent the service from running locally. DNS will instead point to the service in the cluster with the same name:

```yaml
services:
  - name: postgres
    behavior: remote
```

Adding a mapping allows you to select the cluster service more precisely; here, it indicates to Telepresence that the postgres service should be mapped to the **psql** service in the **big-data** namespace.
```yaml
services:
  - name: postgres
    behavior: remote
    mapping:
      name: psql
      namespace: big-data
```

As an alternative, the `services` can instead be added as `x-telepresence` extensions in the Docker Compose file:

```yaml
services:
  redis:
    image: redis:6.2.6
    ports:
      - "6379"
  postgres:
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
    image: "postgres:14.1"
    ports:
      - "5432"
  myapp:
    x-telepresence:
      behavior: interceptHandler
    build:
      # Directory containing the Dockerfile and source code
      context: ../../myapp
    ports:
      - "8080"
    volumes:
      - .:/code
    environment:
      DEV_MODE: "true"
```

## Prerequisites

When creating an intercept specification, there is an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.

Prerequisites is an array, so it can handle many options prior to starting your intercept and running your intercept handlers. The elements of the `prerequisites` array correspond to [`handlers`](#handlers).

The sample below declares that `build-binary` and `rm-binary` are two handlers; the first will be run before any intercepts, the second will be run after cleaning up the intercepts.

If a prerequisite's create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.

```yaml
prerequisites:
  - create: build-binary
    delete: rm-binary
```

The table below defines the parameters available within the prerequisites section.

| Options | Description |
|---|---|
| create | The name of a handler to run before the intercept |
| delete | The name of a handler to run after the intercept |

## Workloads

Workloads define the services in your cluster that will be intercepted.

The example below creates an intercept on a service called `echo-server` on port 8080. It creates a personal intercept with the header `x-intercept-id: foo`, and routes its traffic to a handler called `echo-server`.

```yaml
workloads:
  # You can define one or more workload(s)
  - name: echo-server
    intercepts:
      # You can define one or more intercept(s)
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server
```

This table defines the parameters available within a workload.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| name | string | [a-z][a-z0-9-]* | Name of the workload to intercept | N/A |
| namespace | string | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept | N/A |
| intercepts | [intercept](#intercepts) list | N/A | The list of intercepts associated with the workload | N/A |

### Intercepts

This table defines the parameters available for each intercept.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| enabled | boolean | N/A | If set to false, disables this intercept. | true |
| headers | [header](#header) list | N/A | Headers that will filter the intercept. | Auto generated |
| service | string | [a-z][a-z0-9-]{1,62} | Name of the service to intercept | N/A |
| localPort | integer\|string | 0-65535 | The port for the service which is intercepted | N/A |
| port | integer | 0-65535 | The port the service in the cluster is running on | N/A |
| pathPrefix | string | N/A | Path prefix filter for the intercept. Defaults to "/" | / |
| previewURL | boolean | N/A | Determines whether a preview URL should be created | true |
| banner | boolean | N/A | Used with the preview URL option; displays a banner on the preview page | true |

#### Header

You can define headers to filter the requests which should end up on your machine when intercepting.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| name | string | N/A | Name of the header | N/A |
| value | string | N/A | Value of the header | N/A |

Telepresence specs also support dynamic headers with **variables**:

```yaml
intercepts:
  - headers:
      - name: test-{{ .Telepresence.Username }}
        value: "{{ .Telepresence.Username }}"
```

| Options | Type | Description |
|---|---|---|
| Telepresence.Username | string | The name of the user running the spec |

## Usage

### Running your specification from the CLI

After you've written your intercept specification, you will want to run it.

To start your intercept, use this command:

```bash
telepresence intercept run <spec-file>
```

This will validate and run your spec. In case you just want to validate it, you can do so by using this command:

```bash
telepresence intercept validate <spec-file>
```

### Using and sharing your specification as a CRD

If you want to share specifications across your team or your organization, you can save specifications as CRDs inside your cluster.

<Alert severity="info">
The Intercept Specification CRD requires Kubernetes 1.22 or higher. If you are using an older cluster, you will need to install using Helm directly and use the `--disable-openapi-validation` flag.
</Alert>

1. Install the CRD object in your cluster (one-time installation):

   ```bash
   telepresence helm install --crds
   ```

1. Then you need to deploy the specification in your cluster as a CRD:

   ```yaml
   apiVersion: getambassador.io/v1alpha2
   kind: InterceptSpecification
   metadata:
     name: my-crd-spec
     namespace: my-crd-namespace
   spec:
     {intercept specification}
   ```

   So the `echo-server` example looks like this:

   ```bash
   kubectl apply -f - <<EOF
   ...
   EOF
   ```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<id>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to `KUBECONFIG` by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`.
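
For reference, a minimal sketch of such a `config.yaml` is shown below. Every bracketed value is a placeholder that your cluster administrator must supply, and the exact layout may differ from the file described above:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<api-server-host>:<api-server-port>
    certificate-authority-data: <base64-encoded-ca-certificate>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: telepresence-user
users:
- name: telepresence-user
  user:
    token: <service-account-token>  # Service Account token; see the note above
```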
## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]
    resourceNames: ["telepresence-agents"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a cluster administrator:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user                # Update value for appropriate user name
  namespace: ambassador        # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

### Traffic Manager connect permission

In addition to the cluster-wide permissions, the client will also need the following namespace-scoped permissions in the traffic-manager's namespace in order to establish the needed port-forward to the traffic-manager.

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traffic-manager-connect
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: traffic-manager-connect
subjects:
  - name: telepresence-test-developer
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: traffic-manager-connect
  kind: Role
```

## Namespace only telepresence user access

RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.

The following RBAC configurations assume that there is already a Traffic Manager deployment set up by a cluster administrator.

For each accessible namespace:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user                # Update value for appropriate user name
  namespace: tp-namespace      # Update value for appropriate namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
  namespace: tp-namespace      # Should be the same as metadata.namespace of above ServiceAccount
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role-binding
  namespace: tp-namespace      # Should be the same as metadata.namespace of above ServiceAccount
subjects:
- kind: ServiceAccount
  name: tp-user                # Should be the same as metadata.name of above ServiceAccount
roleRef:
  kind: Role
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
```

The user will also need the [Traffic Manager connect permission](#traffic-manager-connect-permission) described above.
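
As a quick sanity check, a cluster administrator can verify the granted permissions by impersonating the Service Account with `kubectl auth can-i`. This sketch uses the hypothetical `tp-user`/`tp-namespace` names from the examples above:

```console
$ kubectl auth can-i watch deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
yes
$ kubectl auth can-i create deployments --as=system:serviceaccount:tp-namespace:tp-user -n tp-namespace
no
```

The first command should answer `yes` (the Role grants get/list/watch on deployments), while the second should answer `no`, since no write verbs were granted.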
diff --git a/docs/telepresence/latest/reference/restapi.md b/docs/telepresence/latest/reference/restapi.md
new file mode 100644
index 000000000..4be1924a3
--- /dev/null
+++ b/docs/telepresence/latest/reference/restapi.md
@@ -0,0 +1,93 @@

# Telepresence RESTful API server

[Telepresence](/products/telepresence/) can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has three endpoints: the standard `healthz` endpoint, the `consume-here` endpoint, and the `intercept-info` endpoint.

## Enabling the server

The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The values may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to impact an auto-install.

## Querying the server

On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed the exact same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

The `consume-here` and `intercept-info` endpoints are both intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar. Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. [Telepresence](/products/telepresence/) needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

There are three prerequisites to fulfill before testing the `consume-here` and `intercept-info` endpoints using `curl -v` on the workstation:

1. An intercept must be active.
2. The "/healthz" endpoint must respond with OK.
3. The ID of the intercept must be known. It will be visible as `ID` in the output of `telepresence list --debug`.

### healthz

The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### test endpoint using curl

A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.

```console
$ curl -v localhost:9980/healthz
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here

`http://localhost:<port>/consume-here` will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers means that the message should be consumed.

#### test endpoint using curl

Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api`, we can now check that "/consume-here" returns "true" for the path "/api" and the given headers.

```console
$ curl -v localhost:9980/consume-here?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod.

### intercept-info

`http://localhost:<port>/intercept-info` is intended to be queried with an optional path query and a set of headers, typically obtained from a Kafka message or similar, and will respond with a JSON structure containing the two booleans `clientSide` and `intercepted`, and a `metadata` map which corresponds to the `--http-meta` key pairs used when the intercept was created. The `metadata` field is omitted when `intercepted` is `false`.

#### test endpoint using curl

Assuming that the API server runs on port 9980 and that the intercept was started with `--http-header x=y --http-path-prefix=/api --http-meta a=b --http-meta b=c`, we can now check that "/intercept-info" returns information for the given path and headers.
```console
$ curl -v localhost:9980/intercept-info?path=/api -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'x: y'
* Trying ::1:9980...
* Connected to localhost (127.0.0.1) port 9980 (#0)
> GET /intercept-info?path=/api HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.79.1
> Accept: */*
> x: y
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 01 Feb 2022 11:39:55 GMT
< Content-Length: 68
<
{"intercepted":true,"clientSide":true,"metadata":{"a":"b","b":"c"}}
* Connection #0 to host localhost left intact
```
diff --git a/docs/telepresence/latest/reference/routing.md b/docs/telepresence/latest/reference/routing.md
new file mode 100644
index 000000000..e974adbe1
--- /dev/null
+++ b/docs/telepresence/latest/reference/routing.md
@@ -0,0 +1,69 @@

# Connection Routing

## Outbound

### DNS resolution

When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `includeSuffixes` option in the [cluster DNS configuration](../config/#dns).

#### Cluster side DNS lookups

The cluster side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique sum of IPs that they produce. It's therefore important to never have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.

#### macOS resolver

This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each of the currently mapped namespaces and each entry in the `includeSuffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts, so that single-label names can be resolved correctly.

#### Linux systemd-resolved resolver

This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver

Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container).
During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver

This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching

The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets

The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't; Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster. The set consists of what the traffic-manager finds in the cluster, and the subnets configured using the [also-proxy](../config#alsoproxysubnets) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin

A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach.
Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for wanting the connection origin to be correct. One is that it is important that the request originates from the correct namespace. For example:

```bash
curl some-host
```

results in an HTTP request with header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules; if the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection

It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.

#### DNS recursion

When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check if such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this solution to the best of its ability, but it may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion

A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursions occur if the VIF simply dispatches such attempts on to the cluster.

The Telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, then the currently trapped attempts are allowed to proceed. If the first attempt failed, then the currently trapped attempts fail.

## Inbound

The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active.
In versions prior to 2.3.2, this was accomplished by the traffic-manager creating a port dynamically that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.

In 2.3.2, this changed, so that the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection. The traffic-manager then forwards that using another tunnel to the workstation. This is completely invisible to service meshes and is therefore much easier to configure.

##### Footnotes:

1: Starting with 2.8.0, Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.


2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.

diff --git a/docs/telepresence/latest/reference/tun-device.md b/docs/telepresence/latest/reference/tun-device.md
new file mode 100644
index 000000000..af7e3828c
--- /dev/null
+++ b/docs/telepresence/latest/reference/tun-device.md
@@ -0,0 +1,27 @@

# Networking through Virtual Network Interface

The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.

### TUN-Device

The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).

## Gains when using the VIF

### Both TCP and UDP

The TUN-device is capable of routing both TCP and UDP traffic.

### No SSH required

The VIF approach is somewhat similar to using `sshuttle`, but without any requirements for extra software, configuration, or connections. Using the VIF means that only one single connection needs to be forwarded through the Kubernetes apiserver (à la `kubectl port-forward`), using only one single port. There is no need for `ssh` in the client nor for `sshd` in the traffic-manager. This also means that the traffic-manager container can run as the default user.

#### sshfs without ssh encryption

When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted communication to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client nor in the traffic-agent, and the traffic-agent container can run as the default user.

### No firewall rules

With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel will perform the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
diff --git a/docs/telepresence/latest/reference/volume.md b/docs/telepresence/latest/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/latest/reference/volume.md
@@ -0,0 +1,36 @@

# Volume mounts

import Alert from '@material-ui/lab/Alert';

Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.

```
telepresence intercept <workload> --port <port> --mount=/tmp/ -- /bin/bash
```

In this case, Telepresence creates the intercept, mounts the Pod's volumes locally to `/tmp`, and starts a Bash subshell.
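
Inside that subshell you can browse the Pod's volumes under the mount point. For example, the service account files that Kubernetes mounts into most Pods should be visible; this is a sketch, and the actual paths and contents depend on your Pod:

```console
bash-3.2$ ls /tmp/var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token
```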
Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or using the `$TELEPRESENCE_ROOT` variable.

```
$ telepresence intercept <workload> --port <port> --mount=true -- /bin/bash
Using Deployment <workload>
intercepted
    Intercept name    : <workload>
    State             : ACTIVE
    Workload kind     : Deployment
    Destination       : 127.0.0.1:<port>
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
    Intercepting      : all TCP connections

bash-3.2$ echo $TELEPRESENCE_ROOT
/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
```

`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.

With either method, the code you run locally, either from the subshell or from the intercept command, will need to prepend `$TELEPRESENCE_ROOT` to the paths it uses in order to utilize the mounted volumes.

For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.

If using `--mount=true` without a command, you can use either environment variable flag to retrieve the variable.
diff --git a/docs/telepresence/latest/reference/vpn.md b/docs/telepresence/latest/reference/vpn.md
new file mode 100644
index 000000000..457cc873c
--- /dev/null
+++ b/docs/telepresence/latest/reference/vpn.md
@@ -0,0 +1,89 @@
+ + +# Telepresence and VPNs + +It is often important to set up Kubernetes API server endpoints to be only accessible via a VPN. +In setups like these, users need to connect first to their VPN, and then use Telepresence to connect +to their cluster. As Telepresence uses many of the same underlying technologies that VPNs use, +the two can sometimes conflict. This page will help you identify and resolve such VPN conflicts. + + + +The test-vpn command, which was once part of Telepresence, became obsolete in 2.14 due to a change in functionality and was subsequently removed. + + + +## VPN Configuration + +Let's begin by reviewing what a VPN does and imagining a sample configuration that might come +to conflict with Telepresence. +Usually, a VPN client adds two kinds of routes to your machine when you connect. +The first serves to override your default route; in other words, it makes sure that packets +you send out to the public internet go through the private tunnel instead of your +ethernet or wifi adapter. We'll call this a `public VPN route`. +The second kind of route is a `private VPN route`. These are the routes that allow your +machine to access hosts inside the VPN that are not accessible to the public internet. +Generally speaking, this is a more circumscribed route that will connect your machine +only to reachable hosts on the private network, such as your Kubernetes API server. + +This diagram represents what happens when you connect to a VPN, supposing that your +private network spans the CIDR range: `10.0.0.0/8`. + +![VPN routing](../images/vpn-routing.jpg) + +## Kubernetes configuration + +One of the things a Kubernetes cluster does for you is assign IP addresses to pods and services. +This is one of the key elements of Kubernetes networking, as it allows applications on the cluster +to reach each other. When Telepresence connects you to the cluster, it will try to connect you +to the IP addresses that your cluster assigns to services and pods. +Cluster administrators can configure, on cluster creation, the CIDR ranges that the Kubernetes +cluster will place resources in. Let's imagine your cluster is configured to place services in +`10.130.0.0/16` and pods in `10.132.0.0/16`: + +![VPN Kubernetes config](../images/vpn-k8s-config.jpg) + +## Telepresence conflicts + +When you run `telepresence connect` to connect to a cluster, it talks to the API server +to figure out what pod and service CIDRs it needs to map in your machine. If it detects +that these CIDR ranges are already mapped by a VPN's `private route`, it will produce an +error and inform you of the conflicting subnets: + +```console +$ telepresence connect +telepresence connect: error: connector.Connect: failed to connect to root daemon: rpc error: code = Unknown desc = subnet 10.43.0.0/16 overlaps with existing route "10.0.0.0/8 via 10.0.0.0 dev utun4, gw 10.0.0.1" +``` + +To resolve this, you'll need to carefully consider what your network layout looks like. +Telepresence is refusing to map these conflicting subnets because its mapping them +could render certain hosts that are inside the VPN completely unreachable. However, +you (or your network admin) know better than anyone how hosts are spread out inside your VPN. +Even if the private route routes ALL of `10.0.0.0/8`, it's possible that hosts are only +being spun up in one of the subblocks of the `/8` space. 
Let's say, for example, +that you happen to know that all your hosts in the VPN are bunched up in the first +half of the space, `10.0.0.0/9` (and that you know that any new hosts will +only be assigned IP addresses from the `/9` block). In this case you +can configure Telepresence to override the other half of this CIDR block, which is where the +services and pods happen to be. +To do this, configure the `client.routing.allowConflictingSubnets` flag +in the Telepresence Helm chart. You can do this directly via `telepresence helm upgrade`: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.128.0.0/9}" +``` + +You can also choose to be more specific about this, and only allow the CIDRs that you know +are in use by the cluster: + +```console +$ telepresence helm upgrade --set client.routing.allowConflictingSubnets="{10.130.0.0/16,10.132.0.0/16}" +``` + +The end result of this (assuming an allow list of `/9`) will be a configuration like this: + +![VPN Telepresence](../images/vpn-with-tele.jpg) + 
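If you'd rather keep this setting in a values file than pass `--set` flags on every upgrade, the equivalent configuration might look like the following sketch. This is a minimal sketch only, assuming `telepresence helm upgrade` accepts Helm's standard `-f`/`--values` flag and reusing the `/9` allow list from the example above:

```yaml
# Hypothetical values.yaml fragment; the key path mirrors the
# --set examples above. Apply with something like:
#   telepresence helm upgrade -f values.yaml
client:
  routing:
    # Let Telepresence map these subnets even though the VPN already
    # routes the enclosing 10.0.0.0/8 block.
    allowConflictingSubnets:
      - 10.128.0.0/9
```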
diff --git a/docs/telepresence/latest/release-notes/no-ssh.png b/docs/telepresence/latest/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/latest/release-notes/no-ssh.png differ diff --git a/docs/telepresence/latest/release-notes/run-tp-in-docker.png b/docs/telepresence/latest/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/latest/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.2.png b/docs/telepresence/latest/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/latest/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/latest/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/latest/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab 
Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/latest/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/latest/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/latest/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/latest/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/latest/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/latest/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/latest/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/latest/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png b/docs/telepresence/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/latest/release-notes/telepresence-2.3.7-keydesc.png new file 
mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/latest/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/latest/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/latest/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/latest/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/latest/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/latest/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/latest/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/latest/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/latest/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/latest/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/latest/release-notes/telepresence-2.4.6-help-text.png new file mode 
100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/latest/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/latest/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/latest/release-notes/telepresence-2.5.0-pro-daemon.png b/docs/telepresence/latest/release-notes/telepresence-2.5.0-pro-daemon.png new file mode 100644 index 000000000..5b82fc769 Binary files /dev/null and b/docs/telepresence/latest/release-notes/telepresence-2.5.0-pro-daemon.png differ diff --git a/docs/telepresence/latest/release-notes/tunnel.jpg b/docs/telepresence/latest/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/latest/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/latest/releaseNotes.yml b/docs/telepresence/latest/releaseNotes.yml new file mode 100644 index 000000000..f078704c3 --- /dev/null +++ b/docs/telepresence/latest/releaseNotes.yml @@ -0,0 +1,2425 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. +# - href: A path from the root to a resource on the getambassador website, takes precedence over a docs link. + +docTitle: Telepresence Release Notes +docDescription: >- + Release notes for Telepresence by Ambassador Labs, a CNCF project + that enables developers to iterate rapidly on Kubernetes + microservices by arming them with infinite-scale development + environments, access to instantaneous feedback loops, and highly + customizable development environments. 
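+
+# As an illustrative sketch only (the version, date, and paths below are
+# hypothetical, not a real release), an entry following the schema above
+# would look like:
+#
+#   items:
+#     - version: 0.0.1
+#       date: "2020-01-01"
+#       notes:
+#         - type: bugfix
+#           title: A short title of the noteworthy change
+#           body: >-
+#             Two or three sentences describing the change and why it
+#             is noteworthy.
+#           image: example.png
+#           docs: reference/example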
+ +changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md + +items: + - version: 2.15.1 + date: "2023-09-08" + notes: + - type: security + title: Rebuild with go 1.21.1 + body: >- + Rebuild Telepresence with go 1.21.1 to address CVEs. + - type: security + title: Set security context for traffic agent + body: >- + Openshift users reported that the traffic agent injection was failing due to a missing security context. + - version: 2.15.0 + date: "2023-08-28" + notes: + - type: change + title: When logging out you will now automatically be disconnected + body: >- + With the change of always being required to log in for Telepresence commands, you will now be disconnected from + any existing sessions when logging out. + - type: feature + title: Add ASLR to binaries not in Docker + body: >- + Addresses a pen test issue. + docs: https://github.com/datawire/telepresence2-proprietary/issues/315 + - type: bugfix + title: Ensure that the x-telepresence-intercept-id header is read-only. + body: >- + The system assumes that the x-telepresence-intercept-id header contains the ID of the intercept + when it is present, and attempts to redefine it will now result in an error instead of causing a malfunction + when using preview URLs. + - type: bugfix + title: Fix parsing of multiple --http-header arguments + body: >- + An intercept using multiple header flags, e.g. --http-header a=b --http-header x=y, would assemble + them incorrectly into one header as --http-header a=b,x=y, which was then interpreted as a match + for the header a with value b,x=y. + - type: bugfix + title: Fixed bug in telepresence status when apikey login fails + body: >- + The docker-desktop extension could get stuck in an authentication loop when it issued a telepresence status command with an expired or invalid apikey. This bug has been resolved. + - version: 2.14.2 + date: "2023-07-26" + notes: + - type: feature + title: Incorporation of the latest version of Telepresence. + body: >- + A new version of Telepresence OSS was published. + - version: 2.14.1 + date: "2023-07-07" + notes: + - type: feature + title: More flexible templating in the Intercept Specification. + body: >- + The Sprig template functions can now be used + in many unconstrained fields of an Intercept Specification, such as environments, arguments, scripts, + commands, and intercept headers. + - type: bugfix + title: User daemon would panic during connect + body: >- + An attempt to connect on a host where no login has ever been made could cause the user daemon to panic. + - version: 2.14.0 + date: "2023-06-12" + notes: + - type: feature + title: Telepresence with Docker Compose + body: >- + Telepresence is now integrated with Docker Compose. You can now use a compose file as an Intercept Handler in your Intercept Specifications to utilize your local dev stack alongside an intercept. + docs: reference/with-compose + - type: feature + title: Added the ability to exclude environment variables + body: >- + You can now configure your traffic-manager to exclude certain environment variables from being propagated to your local environment while doing an intercept. + docs: reference/cluster-config#excluding-envrionment-variables + - type: change + title: Routing conflict reporting. + body: >- + Telepresence will now attempt to detect and report routing conflicts with other running VPN software on client machines. 
+ There is a new configuration flag that can be tweaked to allow certain CIDRs to be overridden by Telepresence. + docs: reference/vpn + - type: change + title: Migration of Pod Daemon to the proprietary version of Telepresence + body: >- + Pod Daemon has been integrated with the most recent proprietary version of Telepresence, allowing users to leverage the datawire/telepresence image for their deployment previews and streamlining deployment preview scenarios. + docs: ci/pod-daemon + + - version: 2.13.3 + date: "2023-05-25" + notes: + - type: feature + title: Add imagePullSecrets to hooks + body: >- + Add .Values.hooks.curl.imagePullSecrets to Helm values. + docs: https://github.com/telepresenceio/telepresence/pull/3079 + + - type: change + title: Change reinvocation policy to IfNeeded for the mutating webhook + body: >- + The default setting of the reinvocationPolicy for the mutating webhook dealing with agent injections changed from Never to IfNeeded. + + - type: bugfix + title: Fix mounting failure of IAM roles for service accounts web identity token + body: >- + The eks.amazonaws.com/serviceaccount volume injected by EKS is now exported and remotely mounted during an intercept. + docs: https://github.com/telepresenceio/telepresence/issues/3166 + + - type: bugfix + title: Correct namespace selector for cluster versions with non-numeric characters + body: >- + The mutating webhook now correctly applies the namespace selector even if the cluster version contains non-numeric characters. For example, it can now handle versions such as Major:"1", Minor:"22+". + docs: https://github.com/telepresenceio/telepresence/pull/3184 + + - type: bugfix + title: Enable IPv6 on the telepresence docker network + body: >- + The "telepresence" Docker network will now propagate DNS AAAA queries to the Telepresence DNS resolver when it runs in a Docker container. + docs: https://github.com/telepresenceio/telepresence/issues/3179 + + - type: bugfix + title: Fix the crash when intercepting with --local-only and --docker-run + body: >- + Running telepresence intercept --local-only --docker-run no longer results in a panic. + docs: https://github.com/telepresenceio/telepresence/issues/3171 + + - type: bugfix + title: Fix incorrect error message with local-only mounts + body: >- + Running telepresence intercept --local-only --mount false no longer results in an incorrect error message saying "a local-only intercept cannot have mounts". + docs: https://github.com/telepresenceio/telepresence/issues/3171 + + - type: bugfix + title: Specify port in hook URLs + body: >- + The Helm chart now correctly uses a custom agentInjector.webhook.port in hook URLs. + docs: https://github.com/telepresenceio/telepresence/pull/3161 + + - type: bugfix + title: Fix wrong default value for disableGlobal and agentArrival + body: >- + Params .intercept.disableGlobal and .timeouts.agentArrival are now correctly honored. + + - version: 2.13.2 + date: "2023-05-12" + notes: + - type: bugfix + title: Authenticator Service Update + body: >- + Replaced / characters with a - when the authenticator service creates the kubeconfig in the Telepresence cache. + docs: https://github.com/telepresenceio/telepresence/pull/3167 + + - type: bugfix + title: Enhanced DNS Search Path Configuration for Windows (Auto, PowerShell, and Registry Options) + body: >- + Configurable strategy (auto, powershell, 
or registry) to set the global DNS search path on Windows. Default is auto, which means try powershell first and, if it fails, fall back to registry. + docs: https://github.com/telepresenceio/telepresence/pull/3154 + + - type: feature + title: Configurable Traffic Manager Timeout in values.yaml + body: >- + The timeout for the traffic manager to wait for the traffic agent to arrive can now be configured in the values.yaml file using timeouts.agentArrival. The default timeout is still 30 seconds. + docs: https://github.com/telepresenceio/telepresence/pull/3148 + + - type: bugfix + title: Enhanced Local Cluster Discovery for macOS and Windows + body: >- + The automatic discovery of a local container-based cluster (minikube or kind), used when the Telepresence daemon runs in a container, now works on macOS and Windows, and with different profiles, ports, and cluster names. + docs: https://github.com/telepresenceio/telepresence/pull/3165 + + - type: bugfix + title: FTP Stability Improvements + body: >- + Multiple simultaneous intercepts can transfer large files bidirectionally and in parallel. + docs: https://github.com/telepresenceio/telepresence/pull/3157 + + - type: bugfix + title: Intercepted Persistent Volume Pods No Longer Cause Timeouts + body: >- + Pods using persistent volumes no longer cause timeouts when intercepted. + docs: https://github.com/telepresenceio/telepresence/pull/3157 + + - type: bugfix + title: Successful 'Telepresence Connect' Regardless of DNS Configuration + body: >- + Ensure that `telepresence connect` succeeds even though DNS isn't configured correctly. + docs: https://github.com/telepresenceio/telepresence/pull/3154 + + - type: bugfix + title: Traffic-Manager's 'Close of Closed Channel' Panic Issue + body: >- + The traffic-manager would sometimes panic with a "close of closed channel" message and exit. + docs: https://github.com/telepresenceio/telepresence/pull/3160 + + - type: bugfix + title: Traffic-Manager's Type Cast Panic Issue + body: >- + The traffic-manager would sometimes panic and exit after some time due to a type cast panic. + docs: https://github.com/telepresenceio/telepresence/pull/3153 + + - type: bugfix + title: Login Friction + body: >- + Improve login behavior by clearing the saved intermediary API Keys when a user logs in, to force Telepresence to generate new ones. + + - version: 2.13.1 + date: "2023-04-20" + notes: + - type: change + title: Update ambassador-telepresence-agent to version 1.13.13 + body: >- + A malfunction of the Ambassador Telepresence Agent, caused by an update that compressed the executable file, has been fixed. + + - version: 2.13.0 + date: "2023-04-18" + notes: + - type: feature + title: Better kind / minikube network integration with docker + body: >- + The Docker network used by a Kind or Minikube (using the "docker" driver) installation is automatically detected and connected to a Docker container running the Telepresence daemon. + docs: https://github.com/telepresenceio/telepresence/pull/3104 + + - type: feature + title: New mapped namespace output + body: >- + Mapped namespaces are included in the output of the telepresence status command. + + - type: feature + title: Setting of the target IP of the intercept + docs: reference/intercepts/cli + body: >- + There's a new --address flag to the intercept command allowing users to set the target IP of the intercept. 
+ + - type: feature + title: Multi-tenant support + body: >- + The client will no longer need cluster-wide permissions when connected to a namespace-scoped Traffic Manager. + + - type: bugfix + title: Cluster domain resolution bugfix + body: >- + The Traffic Manager now uses a fail-proof way to determine the cluster domain. + docs: https://github.com/telepresenceio/telepresence/issues/3114 + + - type: bugfix + title: Windows DNS + body: >- + DNS on Windows is more reliable and performant. + docs: https://github.com/telepresenceio/telepresence/issues/2939 + + - type: bugfix + title: Agent injection with huge number of deployments + body: >- + The agent is now correctly injected even with a high number of deployments starting at the same time. + docs: https://github.com/telepresenceio/telepresence/issues/3025 + + - type: bugfix + title: Self-contained kubeconfig with Docker + body: >- + The kubeconfig is made self-contained before running the Telepresence daemon in a Docker container. + docs: https://github.com/telepresenceio/telepresence/issues/3099 + + - type: bugfix + title: Version command error + body: >- + The version command won't throw an error anymore if there is no kubeconfig file defined. + docs: https://github.com/telepresenceio/telepresence/issues/3095 + + - type: change + title: Intercept Spec CRD v1alpha1 deprecated + body: >- + Please use version v1alpha2 of the Intercept Spec CRD. + + - version: 2.12.2 + date: "2023-04-04" + notes: + - type: security + title: Update Golang build version to 1.20.3 + body: >- + Update Golang to 1.20.3 to address CVE-2023-24534, CVE-2023-24536, CVE-2023-24537, and CVE-2023-24538. + - version: 2.12.1 + date: "2023-03-22" + notes: + - type: feature + title: Additions to gather-logs + body: >- + Telepresence now includes the kubeauth logs when running + the gather-logs command. + - type: bugfix + title: Airgapped Clusters can once again create personal intercepts + body: >- + Telepresence on airgapped clusters regained the ability to use the + skipLogin config option to bypass login and create personal intercepts. + - type: bugfix + title: Environment Variables are now propagated to kubeauth + body: >- + Telepresence now propagates environment variables properly + to the kubeauth-foreground to be used with cluster authentication. + - version: 2.12.0 + date: "2023-03-20" + notes: + - type: feature + title: Intercept spec can build images from source + body: >- + Handlers in the Intercept Specification can now specify a build property instead of an image so that + the image is built when the spec runs. + docs: reference/intercepts/specs#build + - type: feature + title: Improve volume mount experience for Windows and Mac users + body: >- + On macOS and Windows platforms, the installation of sshfs or platform-specific FUSE implementations such as macFUSE or WinFSP is + no longer needed when running an Intercept Specification that uses docker images. + docs: reference/intercepts/specs + - type: feature + title: Check for service connectivity independently from pod connectivity + body: >- + Telepresence now enables you to check for a service and pod's connectivity independently, so that it can proxy one without proxying the other. + docs: https://github.com/telepresenceio/telepresence/issues/2911 + - type: bugfix + title: Fix cluster authentication when running the telepresence daemon in a Docker container. 
+ body: >- + Authentication to EKS and GKE clusters has been fixed (k8s >= v1.26). + docs: https://github.com/telepresenceio/telepresence/pull/3055 + - type: bugfix + title: The Intercept spec image pattern now allows nested and sha256 images. + body: >- + Telepresence Intercept Specifications now handle passing nested images or the sha256 of an image. + docs: https://github.com/telepresenceio/telepresence/issues/3064 + - type: bugfix + body: >- + Telepresence will no longer panic when a CNAME does not contain .svc in it. + title: Fix panic when CNAME of kubernetes.default doesn't contain .svc + docs: https://github.com/telepresenceio/telepresence/issues/3015 + - version: 2.11.1 + date: "2023-02-27" + notes: + - type: bugfix + title: Multiple architectures + docs: https://github.com/telepresenceio/telepresence/issues/3043 + body: >- + The multi-arch build for the ambassador-telepresence-manager and ambassador-telepresence-agent now + works for both amd64 and arm64. + - type: bugfix + title: Ambassador agent Helm chart duplicates + docs: https://github.com/telepresenceio/telepresence/issues/3046 + body: >- + Some labels in the Helm chart for the Ambassador Agent were duplicated, causing problems for FluxCD. + - version: 2.11.0 + date: "2023-02-22" + notes: + - type: feature + title: Intercept specification + body: >- + It is now possible to leverage the intercept specification to spin up your environment without extra tools. + - type: feature + title: Support for arm64 (Apple Silicon) + body: >- + The ambassador-telepresence-manager and ambassador-telepresence-agent are now distributed as + multi-architecture images and can run natively on both linux/amd64 and linux/arm64. + - type: bugfix + title: Connectivity check can break routing in VPN setups + docs: https://github.com/telepresenceio/telepresence/issues/3006 + body: >- + The connectivity check failed to recognize that the connected peer wasn't a traffic-manager. Consequently, + it didn't proxy the cluster because it incorrectly assumed that a successful connect meant cluster connectivity. + - type: bugfix + title: VPN routes not detected by telepresence test-vpn on macOS + docs: https://github.com/telepresenceio/telepresence/pull/3038 + body: >- + The telepresence test-vpn command did not include routes of type link when checking for subnet + conflicts. + - version: 2.10.5 + date: "2023-02-06" + notes: + - type: change + title: mTLS secrets mount + body: >- + mTLS Secrets will now be mounted into the traffic agent, instead of being read by it from the API. + This is only applicable to users of team mode and the proprietary agent. + docs: reference/cluster-config#tls + - type: bugfix + title: Daemon reconnection fix + body: >- + Fixed a bug that prevented the local daemons from automatically reconnecting to the traffic manager when the network connection was lost. + - version: 2.10.4 + date: "2023-01-20" + notes: + - type: bugfix + title: Backward compatibility restored + body: >- + Telepresence can now create intercepts with traffic-managers of version 2.9.5 and older. + - type: bugfix + title: Saved intercepts now work with preview URLs. + body: >- + Preview URLs are now included/excluded correctly when using saved intercepts. + - version: 2.10.3 + date: "2023-01-17" + notes: + - type: bugfix + title: Saved intercepts + body: >- + Fixed an issue which was causing saved intercepts to not be completely interpreted by Telepresence. 
+ - type: bugfix + title: Traffic manager restart during upgrade to team mode + body: >- + Fixed an issue which was causing the traffic manager to be redeployed after an upgrade to team mode. + docs: https://github.com/telepresenceio/telepresence/pull/2979 + - version: 2.10.2 + date: "2023-01-16" + notes: + - type: bugfix + title: Version consistency in helm commands + body: >- + Ensure that CLI and user-daemon binaries are the same version when running
telepresence helm install + or telepresence helm upgrade. + docs: https://github.com/telepresenceio/telepresence/pull/2975 + - type: bugfix + title: Use-saved-intercept flag + body: >- + Fixed an issue that prevented the --use-saved-intercept flag from working. + - version: 2.10.1 + date: "2023-01-11" + notes: + - type: bugfix + title: Release Process + body: >- + Fixed a regex in our release process that prevented 2.10.0 promotion. + - version: 2.10.0 + date: "2023-01-11" + notes: + - type: feature + title: Team Mode and Single User Mode + body: >- + The Traffic Manager can now be set to either "team" mode or "single user" mode. When in team mode, intercepts will default to http intercepts. + - type: feature + title: Added `install` and `upgrade` Subcommands to `telepresence helm` + body: >- + The `telepresence helm` sub-commands `install` and `upgrade` now accept all types of helm `--set-XXX` flags. + - type: feature + title: Added Image Pull Secrets to Helm Chart + body: >- + Image pull secrets for the traffic-agent can now be added using the Helm chart setting `agent.image.pullSecrets`. + - type: change + title: Rename Configmap + body: >- + The configmap `traffic-manager-clients` has been renamed to `traffic-manager`. + - type: change + title: Webhook Namespace Field + body: >- + If the cluster is Kubernetes 1.21 or later, the mutating webhook will find the correct namespace using the label `kubernetes.io/metadata.name` rather than `app.kubernetes.io/name`. + docs: https://github.com/telepresenceio/telepresence/issues/2913 + - type: change + title: Rename Webhook + body: >- + The name of the mutating webhook now contains the namespace of the traffic-manager so that the webhook is easier to identify when there are multiple namespace-scoped telepresence installations in the cluster. + - type: change + title: OSS Binaries + body: >- + The OSS Helm chart is no longer pushed to the datawire Helm repository. It will instead be pushed from the telepresence proprietary repository. The OSS Helm chart is still what's embedded in the OSS telepresence client. + docs: https://github.com/telepresenceio/telepresence/pull/2943 + - type: bugfix + title: Fix Panic Using `--docker-run` + body: >- + Telepresence no longer panics when `--docker-run` is combined with `--name <name>` instead of `--name=<name>`. + docs: https://github.com/telepresenceio/telepresence/issues/2953 + - type: bugfix + title: Stop assuming cluster domain + body: >- + Telepresence traffic-manager extracts the cluster domain (e.g. "cluster.local") using a CNAME lookup for "kubernetes.default" instead of "kubernetes.default.svc". + docs: https://github.com/telepresenceio/telepresence/pull/2959 + - type: bugfix + title: Uninstall hook timeout + body: >- + A timeout was added to the pre-delete hook `uninstall-agents`, so that a helm uninstall doesn't hang when there is no running traffic-manager. + docs: https://github.com/telepresenceio/telepresence/pull/2937 + - type: bugfix + title: Uninstall hook check + body: >- + The `Helm.Revision` is now used to prevent Helm hook calls from being served by the wrong revision of the traffic-manager. 
+ docs: https://github.com/telepresenceio/telepresence/issues/2954 + - version: 2.9.5 + date: "2022-12-08" + notes: + - type: security + title: Update to golang v1.19.4 + body: >- + Apply security updates by updating to golang v1.19.4. + docs: https://groups.google.com/g/golang-announce/c/L_3rmdT0BMU + - type: bugfix + title: GCE authentication + body: >- + Fixed a regression, introduced in 2.9.3, that prevented use of GCE authentication without also having a config element present in the GCE configuration in the kubeconfig. + - version: 2.9.4 + date: "2022-12-02" + notes: + - type: feature + title: Subnet detection strategy + body: >- + The traffic-manager can automatically detect that the node subnets are different from the pod subnets, and switch detection strategy to instead use subnets that cover the pod IPs. + - type: bugfix + title: Fix `--set` flag for `telepresence helm install` + body: >- + The `telepresence helm` command `--set x=y` flag didn't correctly set values of other types than `string`. The code now uses standard Helm semantics for this flag. + - type: bugfix + title: Fix `agent.image` setting propagation + body: >- + Telepresence now uses the correct `agent.image` properties in the Helm chart when copying agent image settings from the `config.yml` file. + - type: bugfix + title: Delay file sharing until needed + body: >- + Initialization of FTP type file sharing is delayed, so that setting it using the Helm chart value `intercept.useFtp=true` works as expected. + - type: bugfix + title: Cleanup on `telepresence quit` + body: >- + The port-forward that is created when Telepresence connects to a cluster is now properly closed when `telepresence quit` is called. + - type: bugfix + title: Watch `config.yml` without panic + body: >- + The user daemon no longer panics when the `config.yml` is modified at a time when the user daemon is running but no session is active. + - type: bugfix + title: Thread safety + body: >- + Fix race condition that would occur when `telepresence connect` and `telepresence leave` were called several times in rapid succession. + - version: 2.9.3 + date: "2022-11-23" + notes: + - type: feature + title: Helm options for `livenessProbe` and `readinessProbe` + body: >- + The helm chart now supports `livenessProbe` and `readinessProbe` for the traffic-manager deployment, so that the pod automatically restarts if it doesn't respond. + - type: change + title: Improved network communication + body: >- + The root daemon now communicates directly with the traffic-manager instead of routing all outbound traffic through the user daemon. + - type: bugfix + title: Root daemon debug logging + body: >- + Using `telepresence loglevel LEVEL` now also sets the log level in the root daemon. + - type: bugfix + title: Multivalue flag value propagation + body: >- + Multi-valued Kubernetes flags such as `--as-group` are now propagated correctly. + - type: bugfix + title: Root daemon stability + body: >- + The root daemon would sometimes hang indefinitely when quit and connect were called in rapid succession. + - type: bugfix + title: Base DNS resolver + body: >- + Don't use the `systemd-resolved` base DNS resolver unless the cluster is proxied. + - version: 2.9.2 + date: "2022-11-16" + notes: + - type: bugfix + title: Fix panic + body: >- + Fix panic when connecting to an older traffic-manager. + - type: bugfix + title: Fix header flag + body: >- + Fix an issue where the `http-header` flag sometimes wouldn't propagate correctly. 
+ - version: 2.9.1 + date: "2022-11-16" + notes: + - type: bugfix + title: Connect failures due to missing auth provider. + body: >- + The regression in 2.9.0 that caused a `no Auth Provider found for name “gcp”` error when connecting was fixed. + - version: 2.9.0 + date: "2022-11-15" + notes: + - type: feature + title: New command to view client configuration. + body: >- + A new telepresence config view command was added to make it easy to view the current + client configuration. + docs: new-in-2.9#view-the-client-configuration + - type: feature + title: Configure Clients using the Helm chart. + body: >- + The traffic-manager can now configure all clients that connect through the client: map in + the values.yaml file. + docs: reference/cluster-config#client-configuration + - type: feature + title: The Traffic manager version is more visible. + body: >- + The command telepresence version will now include the version of the traffic manager when + the client is connected to a cluster. + - type: feature + title: Command output in YAML format. + body: >- + The global --output flag now accepts both yaml and json. + docs: new-in-2.9#yaml-output + - type: change + title: Deprecated status command flag + body: >- + The telepresence status --json flag is deprecated. Use telepresence status --output=json instead. + - type: bugfix + title: Unqualified service name resolution in docker. + body: >- + Unqualified service names now resolve correctly from the Docker container when using telepresence intercept --docker-run. + docs: https://github.com/telepresenceio/telepresence/issues/2870 + - type: bugfix + title: Output no longer mixes plaintext and json. + body: >- + Informational messages that don't really originate from the command, such as "Launching Telepresence Root Daemon", + or "An update of telepresence ...", are discarded instead of being printed as plain text before the actual formatted + output when using --output=json. + docs: https://github.com/telepresenceio/telepresence/issues/2854 + - type: bugfix + title: No more panic when invalid port names are detected. + body: >- + A `telepresence intercept` of services with an invalid port no longer causes a panic. + docs: https://github.com/telepresenceio/telepresence/issues/2880 + - type: bugfix + title: Proper errors for bad output formats. + body: >- + An attempt to use an invalid value for the global --output flag now renders a proper error message. + - type: bugfix + title: Remove lingering DNS config on macOS. + body: >- + Files lingering under /etc/resolver as a result of ungraceful shutdown of the root daemon on macOS are + now removed when a new root daemon starts. + - version: 2.8.5 + date: "2022-11-02" + notes: + - type: security + title: CVE-2022-41716 + body: >- + Updated Golang to 1.19.3 to address CVE-2022-41716. + - version: 2.8.4 + date: "2022-11-02" + notes: + - type: bugfix + title: Release Process + body: >- + This release resulted in changes to our release process. + - version: 2.8.3 + date: "2022-10-27" + notes: + - type: feature + title: Ability to disable global intercepts. + body: >- + Global intercepts (a.k.a. TCP intercepts) can now be disabled by using the new Helm chart setting intercept.disableGlobal. + docs: https://github.com/telepresenceio/telepresence/issues/2140 + - type: feature + title: Configurable mutating webhook port + body: >- + The port used for the mutating webhook can be configured using the Helm chart setting + agentInjector.webhook.port. 
+ docs: install/helm + - type: change + title: Mutating webhook port defaults to 443 + body: >- + The default port for the mutating webhook is now 443. It used to be 8443. + - type: change + title: Agent image configuration mandatory in air-gapped environments. + body: >- + The traffic-manager will no longer default to using the tel2 image for the traffic-agent when it is + unable to connect to Ambassador Cloud. Air-gapped environments must declare what image to use in the Helm chart. + - type: bugfix + title: Can now connect to non-helm installs + body: >- + telepresence connect now works as long as the traffic manager is installed, even if + it wasn't installed via helm install. + docs: https://github.com/telepresenceio/telepresence/issues/2824 + - type: bugfix + title: test-vpn crash fixed + body: >- + telepresence test-vpn no longer crashes when the daemons don't start properly. + - version: 2.8.2 + date: "2022-10-15" + notes: + - type: bugfix + title: Reinstate 2.8.0 + body: >- + There was an issue downloading the free enhanced client. This problem was fixed, and 2.8.0 was reinstated. + - version: 2.8.1 + date: "2022-10-14" + notes: + - type: bugfix + title: Rollback 2.8.0 + body: >- + Rollback 2.8.0 while we investigate an issue with Ambassador Cloud. + - version: 2.8.0 + date: "2022-10-14" + notes: + - type: feature + title: Improved DNS resolver + body: >- + The Telepresence DNS resolver is now capable of resolving queries of type A, AAAA, CNAME, + MX, NS, PTR, SRV, and TXT. + docs: reference/dns + - type: feature + title: New `client` structure in Helm chart + body: >- + A new client struct was added to the Helm chart. It contains a connectionTTL that controls + how long the traffic manager will retain a client connection without seeing any sign of life from the client. + docs: reference/cluster-config#Client-Configuration + - type: feature + title: Include and exclude suffixes configurable using the Helm chart. + body: >- + A dns element was added to the client struct in the Helm chart. It contains an includeSuffixes and + an excludeSuffixes value that controls what types of names the DNS resolver in the client will delegate to + the cluster. + docs: reference/cluster-config#DNS + - type: feature + title: Configurable traffic-manager API port + body: >- + The API port used by the traffic-manager is now configurable using the Helm chart value apiPort. + The default port is 8081. + docs: https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence + - type: feature + title: Envoy server and admin port configuration. + body: >- + A new agent struct was added to the Helm chart. It contains an `envoy` structure where the server and + admin port of the Envoy proxy running in the enhanced traffic-agent can be configured. + docs: reference/cluster-config#Envoy-Configuration + - type: change + title: Helm chart `dnsConfig` moved to `client.routing`. + body: >- + The Helm chart dnsConfig was deprecated but retained for backward compatibility. The fields alsoProxySubnets + and neverProxySubnets can now be found under routing in the client struct. + docs: reference/cluster-config#Routing + - type: change + title: Helm chart `agentInjector.agentImage` moved to `agent.image`. + body: >- + The Helm chart agentInjector.agentImage was moved to agent.image. The old value is deprecated but + retained for backward compatibility. 
+ docs: reference/cluster-config#Image-Configuration + - type: change + title: Helm chart `agentInjector.appProtocolStrategy` moved to `agent.appProtocolStrategy`. + body: >- + The Helm chart agentInjector.appProtocolStrategy was moved to agent.appProtocolStrategy. The old + value is deprecated but retained for backward compatibility. + docs: reference/cluster-config#Application-Protocol-Selection + - type: change + title: Helm chart `dnsServiceName`, `dnsServiceNamespace`, and `dnsServiceIP` removed. + body: >- + The Helm chart dnsServiceName, dnsServiceNamespace, and dnsServiceIP have been removed, because + they are no longer needed. The TUN-device will use the traffic-manager pod-IP on platforms where it needs to + dedicate an IP for its local resolver. + - type: change + title: Quit daemons with `telepresence quit -s` + body: >- + The former options `-u` and `-r` for `telepresence quit` have been deprecated and replaced with one option `-s`, which will + quit both the root daemon and the user daemon. + - type: bugfix + title: Environment variable interpolation in pods now works. + body: >- + Environment variable interpolation now works for all definitions that are copied from pod containers + into the injected traffic-agent container. + - type: bugfix + title: Early detection of namespace conflict + body: >- + An attempt to create simultaneous intercepts that span multiple namespaces on the same workstation + is detected early and prohibited instead of resulting in failing DNS lookups later on. + - type: bugfix + title: Annoying log message removed + body: >- + Spurious and incorrect "!! SRV xxx" messages will no longer appear in the logs when the reason + is normal context cancellation. + - type: bugfix + title: Single name DNS resolution in Docker on Linux host + body: >- + Single-label names now resolve correctly when using Telepresence in Docker on a Linux host. + - type: bugfix + title: Misnomer `appPortStrategy` in Helm chart renamed to `appProtocolStrategy`. + body: >- + The Helm chart value appProtocolStrategy is now correctly named (used to be appPortStrategy). + - version: 2.7.6 + date: "2022-09-16" + notes: + - type: feature + title: Helm chart resource entries for injected agents + body: >- + The resources for the traffic-agent container and the optional init container can be + specified in the Helm chart using the resources and initResource fields + of the agentInjector.agentImage. + - type: feature + title: Cluster event propagation when injection fails + body: >- + When the traffic-manager fails to inject a traffic-agent, the cause for the failure is + detected by reading the cluster events, and propagated to the user. + - type: feature + title: FTP-client instead of sshfs for remote mounts + body: >- + Telepresence can now use an embedded FTP client and load an existing FUSE library instead of running + an external sshfs or sshfs-win binary. This feature is experimental in 2.7.x + and enabled by setting intercept.useFtp to true in the config.yml. + - type: change + title: Upgrade of winfsp + body: >- + Telepresence on Windows upgraded winfsp from version 1.10 to 1.11. + - type: bugfix + title: Removal of invalid warning messages + body: >- + Running CLI commands on Apple M1 machines will no longer throw warnings about /proc/cpuinfo + and /proc/self/auxv. 
+ - version: 2.7.5 + date: "2022-09-14" + notes: + - type: change + title: Rollback of release 2.7.4 + body: >- + This release is a rollback of the changes in 2.7.4, so essentially the same as 2.7.3. + - version: 2.7.4 + date: "2022-09-14" + notes: + - type: change + body: >- + This release was broken on some platforms. Use 2.7.6 instead. + - version: 2.7.3 + date: "2022-09-07" + notes: + - type: bugfix + title: PTY for CLI commands + body: >- + CLI commands that are executed by the user daemon now use a pseudo TTY. This enables + docker run -it to allocate a TTY and will also give other commands like bash read the + same behavior as when executed directly in a terminal. + docs: https://github.com/telepresenceio/telepresence/issues/2724 + - type: bugfix + title: Traffic Manager useless warning silenced + body: >- + The traffic-manager will no longer log numerous warnings saying Issuing a + systema request without ApiKey or InstallID may result in an error. + - type: bugfix + title: Traffic Manager useless error silenced + body: >- + The traffic-manager will no longer log an error saying Unable to derive subnets + from nodes when the podCIDRStrategy is auto and it chooses to instead derive the + subnets from the pod IPs. + - version: 2.7.2 + date: "2022-08-25" + notes: + - type: feature + title: Autocompletion scripts + body: >- + Autocompletion scripts can now be generated with telepresence completion SHELL where SHELL can be bash, zsh, fish or powershell. + - type: feature + title: Connectivity check timeout + body: >- + The timeout for the initial connectivity check that Telepresence performs + in order to determine if the cluster's subnets are proxied or not can now be configured + in the config.yml file using timeouts.connectivityCheck. The default timeout was + changed from 5 seconds to 500 milliseconds to speed up the actual connect. + docs: reference/config#timeouts + - type: change + title: gather-traces feedback + body: >- + The command telepresence gather-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: upload-traces feedback + body: >- + The command telepresence upload-traces now prints out a message on success. + docs: troubleshooting#distributed-tracing + - type: change + title: gather-traces tracing + body: >- + The command telepresence gather-traces now traces itself and reports errors with trace gathering. + docs: troubleshooting#distributed-tracing + - type: change + title: CLI log level + body: >- + The cli.log log is now logged at the same level as the connector.log. + docs: reference/config#log-levels + - type: bugfix + title: Telepresence --help fixed + body: >- + telepresence --help now works once more even if there's no user daemon running. + docs: https://github.com/telepresenceio/telepresence/issues/2735 + - type: bugfix + title: Stream cancellation when no process intercepts + body: >- + Streams created between the traffic-agent and the workstation are now properly closed + when no interceptor process has been started on the workstation. This fixes a potential problem where + a large number of attempts to connect to a non-existing interceptor would cause stream congestion + and an unresponsive intercept. + - type: bugfix + title: List command excludes the traffic-manager + body: >- + The telepresence list command no longer includes the traffic-manager deployment. 
+ - version: 2.7.1 + date: "2022-08-10" + notes: + - type: change + title: Reinstate telepresence uninstall + body: >- + Reinstate telepresence uninstall with --everything deprecated. + - type: change + title: Reduce telepresence helm uninstall + body: >- + telepresence helm uninstall will only uninstall the traffic-manager helm chart and no longer accepts the --everything, --agent, or --all-agents flags. + - type: bugfix + title: Auto-connect for telepresence intercept + body: >- + telepresence intercept will attempt to connect to the traffic manager before creating an intercept. + - version: 2.7.0 + date: "2022-08-07" + notes: + - type: feature + title: Saved Intercepts + body: >- + Create telepresence intercepts based on existing Saved Intercepts configurations with telepresence intercept --use-saved-intercept $SAVED_INTERCEPT_NAME + docs: reference/intercepts#sharing-intercepts-with-teammates + - type: feature + title: Distributed Tracing + body: >- + The Telepresence components now collect OpenTelemetry traces. + Up to 10MB of trace data are available at any given time for collection from + components. telepresence gather-traces is a new command that will collect + all that data and place it into a gzip file, and telepresence upload-traces is + a new command that will push the gzipped data into an OTLP collector. + docs: troubleshooting#distributed-tracing + - type: feature + title: Helm install + body: >- + A new telepresence helm command was added to provide an easy way to install, upgrade, or uninstall the telepresence traffic-manager. + docs: install/manager + - type: feature + title: Ignore Volume Mounts + body: >- + The agent injector now supports a new annotation, telepresence.getambassador.io/inject-ignore-volume-mounts, that can be used to make the injector ignore specified volume mounts denoted by a comma-separated string. + - type: feature + title: telepresence pod-daemon + body: >- + The Docker image now contains a new program in addition to + the existing traffic-manager and traffic-agent: the pod-daemon. The + pod-daemon is a trimmed-down version of the user-daemon that is + designed to run as a sidecar in a Pod, enabling CI systems to create + preview deploys. + - type: feature + title: Prometheus support for traffic manager + body: >- + Added Prometheus support to the traffic manager. + - type: change + title: No install on telepresence connect + body: >- + The traffic manager is no longer automatically installed into the cluster. Connecting or creating an intercept in a cluster without a traffic manager will return an error. + docs: install/manager + - type: change + title: Helm Uninstall + body: >- + The command telepresence uninstall has been moved to telepresence helm uninstall. + docs: install/manager + - type: bugfix + title: readOnlyRootFileSystem mounts work + body: >- + Add an emptyDir volume and volume mount under /tmp on the agent sidecar so it works with `readOnlyRootFileSystem: true`. + docs: https://github.com/telepresenceio/telepresence/pull/2666 + - version: 2.6.8 + date: "2022-06-23" + notes: + - type: feature + title: Specify Your DNS + body: >- + The name and namespace for the DNS Service that the traffic-manager uses in DNS auto-detection can now be specified. + - type: feature + title: Specify a Fallback DNS + body: >- + Should the DNS auto-detection logic in the traffic-manager fail, users can now specify a fallback IP to use. 
+ - type: feature + title: Intercept UDP Ports + body: >- + It is now possible to intercept UDP ports with Telepresence and also use --to-pod to forward UDP traffic from ports on localhost. + - type: change + title: Additional Helm Values + body: >- + The Helm chart will now add the nodeSelector, affinity and tolerations values to the traffic-manager's post-upgrade-hook and pre-delete-hook jobs. + - type: bugfix + title: Agent Injection Bugfix + body: >- + Telepresence no longer fails to inject the traffic agent into the pod generated for workloads that have no volumes and `automountServiceAccountToken: false`. + - version: 2.6.7 + date: "2022-06-22" + notes: + - type: bugfix + title: Persistent Sessions + body: >- + The Telepresence client will remember and reuse the traffic-manager session after a network failure or other reason that caused an unclean disconnect. + - type: bugfix + title: DNS Requests + body: >- + Telepresence will no longer forward DNS requests for "wpad" to the cluster. + - type: bugfix + title: Graceful Shutdown + body: >- + The traffic-agent will properly shut down if one of its goroutines errors. + - version: 2.6.6 + date: "2022-06-09" + notes: + - type: bugfix + title: Env Var `TELEPRESENCE_API_PORT` + body: >- + The propagation of the TELEPRESENCE_API_PORT environment variable now works correctly. + - type: bugfix + title: Double Printing `--output json` + body: >- + The --output json global flag no longer outputs multiple objects. + - version: 2.6.5 + date: "2022-06-03" + notes: + - type: feature + title: Helm Option -- `reinvocationPolicy` + body: >- + The reinvocationPolicy of the traffic-agent injector webhook can now be configured using the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Proxy Certificate + body: >- + The traffic manager now accepts a root CA for a proxy, allowing it to connect to Ambassador Cloud from behind an HTTPS proxy. This can be configured through the Helm chart. + docs: install/helm + - type: feature + title: Helm Option -- Agent Injection + body: >- + A policy that controls when the mutating webhook injects the traffic-agent was added, and can be configured in the Helm chart. + docs: install/helm + - type: change + title: Windows Tunnel Version Upgrade + body: >- + Telepresence on Windows upgraded wintun.dll from version 0.12 to version 0.14.1. + - type: change + title: Helm Version Upgrade + body: >- + Telepresence upgraded its embedded Helm from version 3.8.1 to 3.9. + - type: change + title: Kubernetes API Version Upgrade + body: >- + Telepresence upgraded its embedded Kubernetes API from version 0.23.4 to 0.24.1. + - type: feature + title: Flag `--watch` Added to `list` Command + body: >- + Added a --watch flag to telepresence list that can be used to watch interceptable workloads in a namespace. + - type: change + title: Deprecated `images.webhookAgentImage` + body: >- + The Telepresence configuration setting for `images.webhookAgentImage` is now deprecated. Use `images.agentImage` instead. + - type: bugfix + title: Default `reinvocationPolicy` Set to Never + body: >- + The reinvocationPolicy of the traffic-agent injector webhook now defaults to Never instead of IfNeeded so that LimitRanges on namespaces can inject a missing resources element into the injected traffic-agent container. + - type: bugfix + title: UDP + body: >- + UDP-based communication with services in the cluster now works as expected. 
+   - type: bugfix
+     title: Telepresence `--help`
+     body: >-
+       The command help will only show Kubernetes flags on the commands that support them.
+   - type: change
+     title: Error Count
+     body: >-
+       Only the errors from the last session will be considered when counting the number of errors in the log after a command failure.
+ - version: 2.6.4
+   date: "2022-05-23"
+   notes:
+   - type: bugfix
+     title: Upgrade RBAC Permissions
+     body: >-
+       The traffic-manager RBAC grants permissions to update services, deployments, replicasets, and statefulsets. Those permissions are needed when the traffic-manager upgrades from versions < 2.6.0 and can be revoked after the upgrade.
+ - version: 2.6.3
+   date: "2022-05-20"
+   notes:
+   - type: bugfix
+     title: Relative Mount Paths
+     body: >-
+       The --mount intercept flag now handles relative mount points correctly on non-Windows platforms. Windows still requires the argument to be a drive letter followed by a colon.
+   - type: bugfix
+     title: Traffic Agent Config
+     body: >-
+       The traffic-agent's configuration updates automatically when services are added, updated, or deleted.
+   - type: bugfix
+     title: Container Injection for Numeric Ports
+     body: >-
+       Telepresence will now always inject an initContainer when the service's targetPort is numeric.
+   - type: bugfix
+     title: Matching Services
+     body: >-
+       Workloads that have several matching services pointing to the same target port are now handled correctly.
+   - type: bugfix
+     title: Unexpected Panic
+     body: >-
+       A potential race condition causing a panic when closing a DNS connection is now handled correctly.
+   - type: bugfix
+     title: Mount Volume Cleanup
+     body: >-
+       A container start would sometimes fail because an old directory remained in a mounted temp volume.
+ - version: 2.6.2
+   date: "2022-05-17"
+   notes:
+   - type: bugfix
+     title: Argo Injection
+     body: >-
+       Workloads controlled by other workloads, such as Argo Rollout, are now injected correctly.
+   - type: bugfix
+     title: Agent Port Mapping
+     body: >-
+       Multiple services pointing to the same container port no longer result in duplicated ports in an injected pod.
+   - type: bugfix
+     title: GRPC Max Message Size
+     body: >-
+       The telepresence list command no longer errors out with "grpc: received message larger than max" when listing namespaces with a large number of workloads.
+ - version: 2.6.1
+   date: "2022-05-16"
+   notes:
+   - type: bugfix
+     title: KUBECONFIG environment variable
+     body: >-
+       Telepresence will now handle multiple path entries in the KUBECONFIG environment variable correctly.
+   - type: bugfix
+     title: Don't Panic
+     body: >-
+       Telepresence will no longer panic when using preview URLs with traffic-managers < 2.6.0.
+ - version: 2.6.0
+   date: "2022-05-13"
+   notes:
+   - type: feature
+     title: Intercept multiple containers in a pod, and multiple ports per container
+     body: >-
+       Telepresence can now intercept multiple services and/or service-ports that connect to the same pod.
+     docs: new-in-2.6#intercept-multiple-containers-and-ports
+   - type: feature
+     title: The Traffic Agent sidecar is always injected by the Traffic Manager's mutating webhook
+     body: >-
+       The client will no longer modify deployments, replicasets, or statefulsets in order to
+       inject a Traffic Agent into an intercepted pod. Instead, all injection is now performed by a mutating webhook. As a result,
+       the client now needs fewer permissions in the cluster.
+     docs: install/upgrade#important-note-about-upgrading-to-2.6.0
+   - type: change
+     title: Automatic upgrade of Traffic Agents
+     body: >-
+       When upgrading, all workloads with injected agents will have their agent "uninstalled" automatically.
+       The mutating webhook will then ensure that their pods will receive an updated Traffic Agent.
+     docs: new-in-2.6#no-more-workload-modifications
+   - type: change
+     title: No default image in the Helm chart
+     body: >-
+       The Helm chart no longer has a default set for the agentInjector.image.name, and unless it's set, the
+       traffic-manager will ask Ambassador Cloud for the preferred image.
+     docs: new-in-2.6#smarter-agent
+   - type: change
+     title: Upgrade to Helm version 3.8.1
+     body: The Telepresence client now uses Helm version 3.8.1 when auto-installing the Traffic Manager.
+   - type: bugfix
+     title: Remote mounts will now function correctly with custom securityContext
+     body: >-
+       The bug causing permission problems when the Traffic Agent is in a Pod with a custom securityContext has been fixed.
+   - type: bugfix
+     title: Improved presentation of flags in CLI help
+     body: The help for commands that accept Kubernetes flags will now display those flags in a separate group.
+   - type: bugfix
+     title: Better termination of processes parented by intercepts
+     body: >-
+       Occasionally an intercept will spawn a command using -- on the command line, often in another console.
+       When you use telepresence leave or telepresence quit while the intercept with the spawned command is still active,
+       Telepresence will now terminate that command, because it's considered to be parented by the intercept that is being removed.
+ - version: 2.5.8
+   date: "2022-04-27"
+   notes:
+   - type: bugfix
+     title: Folder creation on `telepresence login`
+     body: >-
+       Fixed a bug where the telepresence config folder would not be created if the user ran telepresence login before other commands.
+ - version: 2.5.7
+   date: "2022-04-25"
+   notes:
+   - type: change
+     title: RBAC requirements
+     body: >-
+       A namespaced traffic-manager will no longer require cluster-wide RBAC. Only Roles and RoleBindings are now used.
+   - type: bugfix
+     title: Windows DNS
+     body: >-
+       The DNS recursion detector didn't work correctly on Windows, resulting in sporadic failures to resolve names that were resolved correctly at other times.
+   - type: bugfix
+     title: Session TTL and Reconnect
+     body: >-
+       A telepresence session will now last for 24 hours after the user's last connectivity. If a session expires, the connector will automatically try to reconnect.
+ - version: 2.5.6
+   date: "2022-04-18"
+   notes:
+   - type: change
+     title: Fewer Watchers
+     body: >-
+       The Telepresence agent watcher will now only watch namespaces that the user has accessed since the last connect.
+   - type: bugfix
+     title: More Efficient `gather-logs`
+     body: >-
+       The gather-logs command will no longer send any logs through gRPC.
+ - version: 2.5.5
+   date: "2022-04-08"
+   notes:
+   - type: change
+     title: Traffic Manager Permissions
+     body: >-
+       The traffic-manager now requires permissions to read pods across namespaces, even if installed with limited permissions.
+   - type: bugfix
+     title: Linux DNS Cache
+     body: >-
+       The DNS resolver used on Linux with systemd-resolved now flushes the cache when the search path changes.
+   - type: bugfix
+     title: Automatic Connect Sync
+     body: >-
+       The telepresence list command will produce a correct listing even when not preceded by a telepresence connect.
+   - type: bugfix
+     title: Disconnect Reconnect Stability
+     body: >-
+       The root daemon will no longer get into a bad state when a disconnect is rapidly followed by a new connect.
+   - type: bugfix
+     title: Limit Watched Namespaces
+     body: >-
+       The client will now only watch agents from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+   - type: bugfix
+     title: Limit Namespaces used in `gather-logs`
+     body: >-
+       The gather-logs command will only gather traffic-agent logs from accessible namespaces, and is also constrained to namespaces explicitly mapped using the connect command's --mapped-namespaces flag.
+ - version: 2.5.4
+   date: "2022-03-29"
+   notes:
+   - type: bugfix
+     title: Linux DNS Concurrency
+     body: >-
+       The DNS fallback resolver on Linux now correctly handles concurrent requests without timing them out.
+   - type: bugfix
+     title: Non-Functional Flag
+     body: >-
+       The --ingress-l5 flag will no longer be forcefully set to equal the --ingress-host flag.
+   - type: bugfix
+     title: Automatically Remove Failed Intercepts
+     body: >-
+       Intercepts that fail to create are now consistently removed to prevent non-working dangling intercepts from sticking around.
+   - type: bugfix
+     title: Agent UID
+     body: >-
+       The agent container is no longer sensitive to a random UID or a UID imposed by a SecurityContext.
+   - type: bugfix
+     title: Gather-Logs Output Filepath
+     body: >-
+       Removed a bad concatenation that corrupted the output path of telepresence gather-logs.
+   - type: change
+     title: Remove Unnecessary Error Advice
+     body: >-
+       The advice to "see logs for details" is no longer printed when the argument count is incorrect in a CLI command.
+   - type: bugfix
+     title: Garbage Collection
+     body: >-
+       Client and agent sessions no longer leave dangling waiters in the traffic-manager when they depart.
+   - type: bugfix
+     title: Limit Gathered Logs
+     body: >-
+       The client's gather-logs command and agent watcher will now respect the configured grpc.maxReceiveSize.
+   - type: change
+     title: In-Cluster Checks
+     body: >-
+       The TUN device will no longer route pod or service subnets if it is running on a machine that's already connected to the cluster.
+   - type: change
+     title: Expanded Status Command
+     body: >-
+       The status command includes the install id, user id, account id, and user email in its result, and can print output as JSON.
+   - type: change
+     title: List Command Shows All Intercepts
+     body: >-
+       The list command, when used with the --intercepts flag, will list the user's intercepts from all namespaces.
+ - version: 2.5.3
+   date: "2022-02-25"
+   notes:
+   - type: bugfix
+     title: TCP Connectivity
+     body: >-
+       Fixed a bug in the TCP stack causing timeouts after repeated connects to the same address.
+   - type: feature
+     title: Linux Binaries
+     body: >-
+       Client-side binaries for the arm64 architecture are now available for Linux.
+ - version: 2.5.2
+   date: "2022-02-23"
+   notes:
+   - type: bugfix
+     title: DNS server bugfix
+     body: >-
+       Fixed a bug where Telepresence would use the last server in resolv.conf.
+ - version: 2.5.1
+   date: "2022-02-19"
+   notes:
+   - type: bugfix
+     title: Fix GKE auth issue
+     body: >-
+       Fixed a bug where using a GKE cluster would error with: No Auth Provider found for name "gcp".
+ - version: 2.5.0
+   date: "2022-02-18"
+   notes:
+   - type: feature
+     title: Intercept specific endpoints
+     body: >-
+       The flags --http-path-equal, --http-path-prefix, and --http-path-regex can be used in
+       addition to the --http-match flag to filter personal intercepts by the request URL path.
+     docs: concepts/intercepts#intercepting-a-specific-endpoint
+   - type: feature
+     title: Intercept metadata
+     body: >-
+       The flag --http-meta can be used to declare metadata key value pairs that will be returned by the Telepresence REST
+       API endpoint /intercept-info.
+     docs: reference/restapi#intercept-info
+   - type: change
+     title: Client RBAC watch
+     body: >-
+       The verb "watch" was added to the set of required verbs when accessing services and workloads for the client RBAC
+       ClusterRole.
+     docs: reference/rbac
+   - type: change
+     title: Dropped backward compatibility with versions <=2.4.4
+     body: >-
+       Telepresence is no longer backward compatible with versions 2.4.4 or older because the deprecated multiplexing tunnel
+       functionality was removed.
+   - type: change
+     title: No global networking flags
+     body: >-
+       The global networking flags are no longer used, and using them will render a deprecation warning unless they are supported by the
+       command. The subcommands that support networking flags are connect, current-cluster-id,
+       and genyaml.
+   - type: bugfix
+     title: Output of status command
+     body: >-
+       The also-proxy and never-proxy subnets are now displayed correctly when using the
+       telepresence status command.
+   - type: bugfix
+     title: SETENV sudo privilege no longer needed
+     body: >-
+       Telepresence no longer requires SETENV privileges when starting the root daemon.
+   - type: bugfix
+     title: Network device names containing dash
+     body: >-
+       Telepresence will now parse device names containing dashes correctly when determining routes that it should never block.
+   - type: bugfix
+     title: Linux uses cluster.local as domain instead of search
+     body: >-
+       The cluster domain (typically "cluster.local") is no longer added to the DNS search on Linux using
+       systemd-resolved.
+       Instead, it is added as a domain so that names ending with it are routed
+       to the DNS server.
+ - version: 2.4.11
+   date: "2022-02-10"
+   notes:
+   - type: change
+     title: Add additional logging to troubleshoot intermittent issues with intercepts
+     body: >-
+       We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+       with enhanced logging to help debug and fix the issue.
+ - version: 2.4.10
+   date: "2022-01-13"
+   notes:
+   - type: feature
+     title: Application Protocol Strategy
+     body: >-
+       The strategy used when selecting the application protocol for personal intercepts can now be configured using
+       the intercept.appProtocolStrategy in the config.yml file.
+     docs: reference/config/#intercept
+     image: telepresence-2.4.10-intercept-config.png
+   - type: feature
+     title: Helm value for the Application Protocol Strategy
+     body: >-
+       The strategy used when selecting the application protocol for personal intercepts in agents injected by the
+       mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+     docs: install/helm
+   - type: feature
+     title: New --http-plaintext option
+     body: >-
+       The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+       communicating with the workstation process.
+     docs: reference/intercepts/#tls
+   - type: feature
+     title: Configure the default intercept port
+     body: >-
+       The port used by default in the telepresence intercept command (8080) can now be changed by setting
+       the intercept.defaultPort in the config.yml file.
+     docs: reference/config/#intercept
+   - type: change
+     title: Telepresence CI now uses GitHub Actions
+     body: >-
+       Telepresence now uses GitHub Actions for doing unit and integration testing. It is
+       now easier for contributors to run tests on PRs since maintainers can add an
+       "ok to test" label to PRs (including from forks) to run integration tests.
+     docs: https://github.com/telepresenceio/telepresence/actions
+     image: telepresence-2.4.10-actions.png
+   - type: bugfix
+     title: Check conditions before asking questions
+     body: >-
+       Users will not be asked to log in or add ingress information when creating an intercept until a check has been
+       made that the intercept is possible.
+     docs: reference/intercepts/
+   - type: bugfix
+     title: Fix invalid log statement
+     body: >-
+       Telepresence will no longer log invalid "unhandled connection control message: code DIAL_OK" errors.
+   - type: bugfix
+     title: Log errors from sshfs/sftp
+     body: >-
+       Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+       is now properly logged as errors.
+   - type: bugfix
+     title: Don't use Windows path separators in workload pod template
+     body: >-
+       The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+       traffic-agent container spec when running on Windows.
+ - version: 2.4.9
+   date: "2021-12-09"
+   notes:
+   - type: bugfix
+     title: Helm upgrade nil pointer error
+     body: >-
+       A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+       telepresenceAPI value.
+     docs: install/helm#upgrading-the-traffic-manager
+ - version: 2.4.8
+   date: "2021-12-03"
+   notes:
+   - type: feature
+     title: VPN diagnostics tool
+     body: >-
+       There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+       See the VPN docs for more information on how to use it.
+     docs: reference/vpn
+     image: telepresence-2.4.8-vpn.png
+
+   - type: feature
+     title: RESTful API service
+     body: >-
+       A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+       help determine whether messages with a given set of headers should be consumed from a message queue where the
+       intercept headers are added to the messages.
+     docs: reference/restapi
+     image: telepresence-2.4.8-health-check.png
+
+   - type: change
+     title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+     body: >-
+       You could previously configure this value, but there was no reason to change it, so it
+       has been removed.
+
+   - type: bugfix
+     title: Tunneled network connections behave more like ordinary TCP connections.
+     body: >-
+       When using Telepresence with an external cloud provider for extensions, those tunneled
+       connections now behave more like TCP connections, especially when it comes to timeouts.
+       We've also added increased testing around these types of connections.
+ - version: 2.4.7
+   date: "2021-11-24"
+   notes:
+   - type: feature
+     title: Injector service-name annotation
+     body: >-
+       The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+       This helps disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+     docs: reference/cluster-config#service-name-annotation
+   - type: feature
+     title: Skip the Ingress Dialogue
+     body: >-
+       You can now skip the ingress dialogue by providing the ingress parameters via the corresponding flags.
+     docs: reference/intercepts#skipping-the-ingress-dialogue
+   - type: feature
+     title: Never proxy subnets
+     body: >-
+       The kubeconfig extensions now support a never-proxy argument,
+       analogous to also-proxy, that defines a set of subnets that
+       will never be proxied via telepresence.
+     docs: reference/config#neverproxy
+   - type: change
+     title: Daemon versions check
+     body: >-
+       Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+   - type: change
+     title: No explicit DNS flushes
+     body: >-
+       Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches.
+     docs: reference/routing#dns-caching
+   - type: bugfix
+     title: Legacy flags now work with global flags
+     body: >-
+       Legacy flags such as --swap-deployment can now be used together with global flags.
+   - type: bugfix
+     title: Outbound connection closing
+     body: >-
+       Outbound connections are now properly closed when the peer closes.
+   - type: bugfix
+     title: Prevent DNS recursion
+     body: >-
+       The DNS-resolver will trap recursive resolution attempts (which may happen when the cluster runs in a Docker container on the client).
+     docs: reference/routing#dns-recursion
+   - type: bugfix
+     title: Prevent network recursion
+     body: >-
+       The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (which may happen when the
+       cluster runs in a Docker container on the client).
+     docs: reference/routing#connect-recursion
+   - type: bugfix
+     title: Traffic Manager deadlock fix
+     body: >-
+       The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic Agent arrives.
+   - type: bugfix
+     title: webhookRegistry config propagation
+     body: >-
+       The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set.
+     docs: reference/config#images
+   - type: bugfix
+     title: Login refreshes expired tokens
+     body: >-
+       When a user's token has expired, telepresence login
+       will prompt the user to log in again to get a new token. Previously,
+       the user had to telepresence quit and telepresence logout
+       to get a new token.
+     docs: https://github.com/telepresenceio/telepresence/issues/2062
+ - version: 2.4.6
+   date: "2021-11-02"
+   notes:
+   - type: feature
+     title: Manually injecting Traffic Agent
+     body: >-
+       Telepresence now supports manually injecting the traffic-agent YAML into workload manifests.
+       Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them.
+     docs: reference/intercepts/manual-agent
+
+   - type: feature
+     title: Telepresence CLI released for Apple silicon
+     body: >-
+       Telepresence is now built and released for Apple silicon.
+     docs: install/?os=macos
+
+   - type: change
+     title: Telepresence help text now links to telepresence.io
+     body: >-
+       We now include a link to our documentation when you run telepresence --help. This will make it easier
+       for users to find this page whether they acquire Telepresence through Brew or some other mechanism.
+     image: telepresence-2.4.6-help-text.png
+
+   - type: bugfix
+     title: Fixed bug when API server is inside CIDR range of pods/services
+     body: >-
+       If the API server for your Kubernetes cluster had an IP that fell within the
+       subnet generated from pods/services in the cluster, Telepresence would proxy traffic
+       to the API server, which would result in hanging or a failed connection. We now ensure
+       that the API server is explicitly not proxied.
+ - version: 2.4.5
+   date: "2021-10-15"
+   notes:
+   - type: feature
+     title: Get pod yaml with gather-logs command
+     body: >-
+       Adding the flag --get-pod-yaml to your request will get the
+       pod yaml manifest for all Kubernetes components you are getting logs for
+       (traffic-manager and/or pods containing a
+       traffic-agent container). This flag is set to false
+       by default.
+     docs: reference/client
+     image: telepresence-2.4.5-pod-yaml.png
+
+   - type: feature
+     title: Anonymize pod name + namespace when using gather-logs command
+     body: >-
+       Adding the flag --anonymize to your command will
+       anonymize your pod names + namespaces in the output file. We replace the
+       sensitive names with simple names (e.g. pod-1, namespace-2) to maintain
+       relationships between the objects without exposing the real names of your
+       objects. This flag is set to false by default.
+     docs: reference/client
+     image: telepresence-2.4.5-logs-anonymize.png
+
+   - type: feature
+     title: Added context and defaults to ingress questions when creating a preview URL
+     body: >-
+       Previously, we referred to OSI model layers when asking these questions, but this
+       terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example.
+     docs: howtos/preview-urls
+     image: telepresence-2.4.5-preview-url-questions.png
+
+   - type: feature
+     title: Support for intercepting headless services
+     body: >-
+       Intercepting headless services is now officially supported. You can request a
+       headless service on whatever port it exposes and get a response from the
+       intercept.
+       This leverages the same approach as intercepting numeric ports when
+       using the mutating webhook injector, and mainly requires the initContainer
+       to have NET_ADMIN capabilities.
+     docs: reference/intercepts/#intercepting-headless-services
+
+   - type: change
+     title: Use one tunnel per connection instead of multiplexing into one tunnel
+     body: >-
+       We have changed Telepresence so that it uses one tunnel per connection instead
+       of multiplexing all connections into one tunnel. This will provide substantial
+       performance improvements. Clients will still be backward compatible with older
+       managers that only support multiplexing.
+
+   - type: bugfix
+     title: Added checks for Telepresence Kubernetes compatibility
+     body: >-
+       Telepresence currently works with Kubernetes server versions 1.17.0
+       and higher. We have added logs in the connector and traffic-manager
+       to let users know when they are using Telepresence with a cluster it doesn't support.
+     docs: reference/cluster-config
+
+   - type: bugfix
+     title: Traffic Agent security context is now only added when necessary
+     body: >-
+       When creating an intercept, Telepresence will now only set the traffic agent's GID
+       when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+       an issue on OpenShift clusters where the traffic agent can fail to be created due to
+       OpenShift's security policies banning arbitrary GIDs.
+
+ - version: 2.4.4
+   date: "2021-09-27"
+   notes:
+   - type: feature
+     title: Numeric ports in agent injector
+     body: >-
+       The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+     docs: reference/cluster-config/#note-on-numeric-ports
+
+   - type: feature
+     title: New subcommand to gather logs and export into zip file
+     body: >-
+       Telepresence has logs for various components (the
+       traffic-manager, traffic-agents, the root and
+       user daemons), which are integral for understanding and debugging
+       Telepresence behavior. We have added the telepresence
+       gather-logs command to make it simple to compile logs for
+       all Telepresence components and export them in a zip file that can
+       be shared with others and/or included in a GitHub issue. For more
+       information on usage, run telepresence gather-logs --help.
+     docs: reference/client
+     image: telepresence-2.4.4-gather-logs.png
+
+   - type: feature
+     title: Pod CIDR strategy is configurable in Helm chart
+     body: >-
+       Telepresence now enables you to directly configure how to get
+       pod CIDRs when deploying Telepresence with the Helm chart.
+       The default behavior remains the same. We've also introduced
+       the ability to explicitly set what the pod CIDRs should be.
+     docs: install/helm
+
+   - type: bugfix
+     title: Compute pod CIDRs more efficiently
+     body: >-
+       When computing subnets using the pod CIDRs, the traffic-manager
+       now uses fewer CPU cycles.
+     docs: reference/routing/#subnets
+
+   - type: bugfix
+     title: Prevent busy loop in traffic-manager
+     body: >-
+       In some circumstances, the traffic-manager's CPU
+       would max out and get pinned at its limit. This required a
+       shutdown or pod restart to fix. We've added some fixes
+       to prevent the traffic-manager from getting into this state.
+
+   - type: bugfix
+     title: Added a fixed buffer size to TUN-device
+     body: >-
+       The TUN-device now has a max buffer size of 64K. This prevents the
+       buffer from growing limitlessly until it receives a PSH, which could
+       be a blocking operation when receiving lots of TCP-packets.
+     docs: reference/tun-device
+
+   - type: bugfix
+     title: Fix hanging user daemon
+     body: >-
+       When Telepresence encountered an issue connecting to the cluster or
+       the root daemon, it could hang indefinitely. It will now error correctly
+       when it encounters that situation.
+
+   - type: bugfix
+     title: Improved proprietary agent connectivity
+     body: >-
+       To determine whether the cluster's environment is air-gapped, the
+       proprietary agent attempts to connect to the cloud during startup.
+       To deal with a possible initial failure, the agent backs off
+       and retries the connection with an increasing backoff duration.
+
+   - type: bugfix
+     title: Telepresence correctly reports intercept port conflict
+     body: >-
+       When creating a second intercept targeting the same local port,
+       Telepresence now gives the user an informative error message. Additionally,
+       it tells them which intercept is currently using that port to make
+       it easier to remedy.
+
+ - version: 2.4.3
+   date: "2021-09-15"
+   notes:
+   - type: feature
+     title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+     body: >-
+       When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+       variable in the environment.
+     docs: reference/environment/#telepresence-environment-variables
+
+   - type: bugfix
+     title: Improved daemon stability
+     body: >-
+       Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+   - type: bugfix
+     title: Complete logs for Windows
+     body: >-
+       Crash stack traces and other errors were incorrectly not written to log files. This has
+       been fixed, so logs for Windows should be at parity with the ones on macOS and Linux.
+
+   - type: bugfix
+     title: Log rotation fix for Linux kernel 4.11+
+     body: >-
+       On Linux kernel 4.11 and above, the log file rotation now properly reads the
+       birth-time of the log file. Older kernels continue to use the old behavior
+       of using the change-time in place of the birth-time.
+
+   - type: bugfix
+     title: Improved error messaging
+     body: >-
+       When Telepresence encounters an error, it tells the user where they should look for
+       logs related to the error. We have refined this so that it only tells users to look
+       for errors in the daemon logs for issues that are logged there.
+
+   - type: bugfix
+     title: Stop resolving localhost
+     body: >-
+       When using the overriding DNS resolver, it will no longer apply search paths when
+       resolving localhost, since that should be resolved on the user's machine
+       instead of the cluster.
+     docs: reference/routing#linux-systemd-resolved-resolver
+
+   - type: bugfix
+     title: Variable cluster domain
+     body: >-
+       Previously, the cluster domain was hardcoded to cluster.local. While this
+       is true for many Kubernetes clusters, it is not for all of them. Now this value is
+       retrieved from the traffic-manager.
+
+   - type: bugfix
+     title: Improved cleanup of traffic-agents
+     body: >-
+       Telepresence now uninstalls traffic-agents installed via the mutating webhook
+       when using telepresence uninstall --everything.
+
+   - type: bugfix
+     title: More large file transfer fixes
+     body: >-
+       Downloading large files during an intercept will no longer cause timeouts and hanging
+       traffic-agents.
+
+   - type: bugfix
+     title: Setting --mount to false when intercepting works as expected
+     body: >-
+       When using --mount=false while performing an intercept, the file system
+       was still mounted. This has been remedied so the intercept behavior respects the
+       flag.
+     docs: reference/volume
+
+   - type: bugfix
+     title: Traffic-manager establishes outbound connections in parallel
+     body: >-
+       Previously, the traffic-manager established outbound connections
+       sequentially. This meant that slow (and failing) Dial calls would
+       block all outbound traffic from the workstation (for up to 30 seconds). We now
+       establish these connections in parallel so that won't occur.
+     docs: reference/routing/#outbound
+
+   - type: bugfix
+     title: Status command reports correct DNS settings
+     body: >-
+       Telepresence status now correctly reports DNS settings for all operating
+       systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+ - version: 2.4.2
+   date: "2021-09-01"
+   notes:
+   - type: feature
+     title: New subcommand to temporarily change log-level
+     body: >-
+       We have added a new telepresence loglevel subcommand that enables users
+       to temporarily change the log-level for the local daemons, the traffic-manager, and
+       the traffic-agents. While the logLevels settings from the config will
+       still be used by default, this can be helpful if you are currently experiencing an issue and
+       want to have higher fidelity logs, without doing a telepresence quit and
+       telepresence connect. You can use telepresence loglevel --help to get
+       more information on options for the command.
+     docs: reference/config
+
+   - type: change
+     title: All components have info as the default log-level
+     body: >-
+       All components of Telepresence (traffic-agent,
+       traffic-manager, local daemons) now use info as the default log-level.
+
+   - type: bugfix
+     title: Updating RBAC in helm chart to fix cluster-id regression
+     body: >-
+       In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+       of the default namespace. The helm chart was not updated to give the traffic-manager
+       those permissions, which has since been fixed. This impacted users who use licensed features of
+       the Telepresence extension in an air-gapped environment.
+     docs: reference/cluster-config/#air-gapped-cluster
+
+   - type: bugfix
+     title: Timeouts for Helm actions are now respected
+     body: >-
+       The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+       indefinitely when failing to install the traffic-manager.
+     docs: reference/config#timeouts
+
+ - version: 2.4.1
+   date: "2021-08-30"
+   notes:
+   - type: feature
+     title: External cloud variables are now configurable
+     body: >-
+       We now support configuring the host and port for the cloud in your config.yml. These
+       are used when logging in to utilize features provided by an extension, and are also passed
+       along as environment variables when installing the traffic-manager. Additionally, we
+       now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+       is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+       environment variables are no longer used.
+     image: telepresence-2.4.1-systema-vars.png
+     docs: reference/config/#cloud
+
+   - type: feature
+     title: Helm chart can now regenerate certificate used for mutating webhook on-demand.
+     body: >-
+       You can now set agentInjector.certificate.regenerate when deploying Telepresence
+       with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+     docs: install/helm
+
+   - type: change
+     title: Traffic Manager installed via helm
+     body: >-
+       The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+       This change is transparent to the user.
+       A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+     docs: reference/config#timeouts
+
+   - type: change
+     title: traffic-manager gets cluster ID itself instead of via environment variable
+     body: >-
+       The traffic-manager used to get the cluster ID as an environment variable when running
+       telepresence connect or via adding the value in the helm chart. This was
+       clunky, so now the traffic-manager gets the value itself as long as it has permissions
+       to "get" and "list" namespaces (this has been updated in the helm chart).
+     docs: install/helm
+
+   - type: bugfix
+     title: Telepresence now mounts all directories from /var/run/secrets
+     body: >-
+       In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+       We now mount *all* directories in /var/run/secrets, which, for example, includes
+       directories like eks.amazonaws.com used for IRSA tokens.
+     docs: reference/volume
+
+   - type: bugfix
+     title: Max gRPC receive size correctly propagates to all grpc servers
+     body: >-
+       This fixes a bug where the max gRPC receive size was only propagated to some of the
+       grpc servers, causing failures when the message size was over the default.
+     docs: reference/config/#grpc
+
+   - type: bugfix
+     title: Updated our Homebrew packaging to run manually
+     body: >-
+       We made some updates to our script that packages Telepresence for Homebrew so that it
+       can be run manually. This will enable maintainers of Telepresence to run the script manually
+       should we ever need to roll back a release and have latest point to an older version.
+     docs: install/
+
+   - type: bugfix
+     title: Telepresence uses namespace from kubeconfig context on each call
+     body: >-
+       In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+       for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+       when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+       Telepresence will now do that and use the namespace designated by the context on each call.
+
+   - type: bugfix
+     title: Idle outbound TCP connections timeout increased to 7200 seconds
+     body: >-
+       Some users were noticing that their intercepts would start failing after 60 seconds.
+       This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+       now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+   - type: bugfix
+     title: Telepresence will automatically remove a socket upon ungraceful termination
+     body: >-
+       When a Telepresence process terminated ungracefully, it would inform users that "this usually means
+       that the process has terminated ungracefully" and imply that they should remove the socket. We've
+       now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+   - type: bugfix
+     title: Fixed user daemon deadlock
+     body: >-
+       Remedied a situation where the user daemon could hang when a user was logged in.
+
+   - type: bugfix
+     title: Fixed agentImage config setting
+     body: >-
+       The config setting images.agentImage is no longer required to contain the repository, and it
+       will use the value at images.repository.
+     docs: reference/config/#images
+
+ - version: 2.4.0
+   date: "2021-08-04"
+   notes:
+   - type: feature
+     title: Windows Client Developer Preview
+     body: >-
+       There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+       All the same features supported by the macOS and Linux clients are available on Windows.
+     image: telepresence-2.4.0-windows.png
+     docs: install
+
+   - type: feature
+     title: CLI raises helpful messages from Ambassador Cloud
+     body: >-
+       Telepresence can now receive messages from Ambassador Cloud and raise
+       them to the user when they perform certain commands. This enables us
+       to send you messages that may enhance your Telepresence experience when
+       using certain commands. Frequency of messages can be configured in your
+       config.yml.
+     image: telepresence-2.4.0-cloud-messages.png
+     docs: reference/config#cloud
+
+   - type: bugfix
+     title: Improved stability of systemd-resolved-based DNS
+     body: >-
+       When initializing the systemd-resolved-based DNS, the routing domain
+       is set to improve stability in non-standard configurations. This also enables the
+       overriding resolver to do a proper takeover once the DNS service ends.
+     docs: reference/routing#linux-systemd-resolved-resolver
+
+   - type: bugfix
+     title: Fixed an edge case when intercepting a container with multiple ports
+     body: >-
+       When specifying a port of a container to intercept, if there was a container in the
+       pod without ports, it was automatically selected. This has been fixed so we'll only
+       choose the container with "no ports" if there's no container that explicitly matches
+       the port used in your intercept.
+     docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+   - type: bugfix
+     title: $(NAME) references in agent's environments are now interpolated correctly.
+     body: >-
+       If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+       would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+   - type: bugfix
+     title: Telepresence no longer prints INFO message when there is no config.yml
+     body: >-
+       Fixed a regression that printed an INFO message to the terminal when there wasn't a
+       config.yml present. The config is optional, so this message has been
+       removed.
+     docs: reference/config
+
+   - type: bugfix
+     title: Telepresence no longer panics when using --http-match
+     body: >-
+       Fixed a bug where Telepresence would panic if the value passed to --http-match
+       didn't contain an equal sign. The correct syntax is shown in the --help
+       string and looks like --http-match=HTTP2_HEADER=REGEX.
+     docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+   - type: bugfix
+     title: Improved subnet updates
+     body: >-
+       The traffic-manager used to update subnets whenever the Nodes or Pods changed, even if
+       the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+       client and the traffic-manager. This has been fixed so we only send updates when the subnets
+       themselves actually change.
+     docs: reference/routing/#subnets
+
+ - version: 2.3.7
+   date: "2021-07-23"
+   notes:
+   - type: feature
+     title: Also-proxy in telepresence status
+     body: >-
+       An also-proxy entry in the Kubernetes cluster config will
+       show up in the output of the telepresence status command.
+     docs: reference/config
+
+   - type: feature
+     title: Non-interactive telepresence login
+     body: >-
+       telepresence login now has an
+       --apikey=KEY flag that allows for
+       non-interactive logins. This is useful for headless
+       environments where launching a web-browser is impossible,
+       such as cloud shells, Docker containers, or CI.
+     image: telepresence-2.3.7-newkey.png
+     docs: reference/client/login/
+
+   - type: bugfix
+     title: Mutating webhook injector correctly hides named ports for probes.
+     body: >-
+       The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes.
+     docs: reference/cluster-config
+
+   - type: bugfix
+     title: telepresence current-cluster-id crash fixed
+     body: >-
+       Fixed a regression introduced in 2.3.5 that caused telepresence current-cluster-id
+       to crash.
+     docs: reference/cluster-config
+
+   - type: bugfix
+     title: Better UX around intercepts with no local process running
+     body: >-
+       Requests would hang indefinitely when initiating an intercept before you
+       had a local process running. This has been fixed and will result in an
+       Empty reply from server until you start a local process.
+     docs: reference/intercepts
+
+   - type: bugfix
+     title: API keys no longer show as "no description"
+     body: >-
+       New API keys generated internally for communication with
+       Ambassador Cloud no longer show up as "no description" in
+       the Ambassador Cloud web UI. Existing API keys generated by
+       older versions of Telepresence will still show up this way.
+     image: telepresence-2.3.7-keydesc.png
+
+   - type: bugfix
+     title: Fix corruption of user-info.json
+     body: >-
+       Fixed a race condition where logging in and logging out
+       rapidly could cause memory corruption or corruption of the
+       user-info.json cache file used when
+       authenticating with Ambassador Cloud.
+
+   - type: bugfix
+     title: Improved DNS resolver for systemd-resolved
+     body:
+       Telepresence's systemd-resolved-based DNS resolver is now more
+       stable, and in case it fails to initialize, the overriding resolver
+       will no longer cause general DNS lookup failures when Telepresence defaults to
+       using it.
+     docs: reference/routing#linux-systemd-resolved-resolver
+
+   - type: bugfix
+     title: Faster telepresence list command
+     body:
+       The performance of telepresence list has been increased
+       significantly by reducing the number of calls the command makes to the cluster.
+     docs: reference/client
+
+ - version: 2.3.6
+   date: "2021-07-20"
+   notes:
+   - type: bugfix
+     title: Fix preview URLs
+     body: >-
+       Fixed a regression introduced in 2.3.5 that caused preview
+       URLs to not work.
+
+   - type: bugfix
+     title: Fix subnet discovery
+     body: >-
+       Fixed a regression introduced in 2.3.5 where the Traffic
+       Manager's RoleBinding did not correctly appoint
+       the traffic-manager Role, causing
+       subnet discovery to not work correctly.
+     docs: reference/rbac/
+
+   - type: bugfix
+     title: Fix root-user configuration loading
+     body: >-
+       Fixed a regression introduced in 2.3.5 where the root daemon
+       did not correctly read the configuration file, ignoring the
+       user's configured log levels and timeouts.
+     docs: reference/config/
+
+   - type: bugfix
+     title: Fix a user daemon crash
+     body: >-
+       Fixed an issue that could cause the user daemon to crash
+       during shutdown, as during shutdown it unconditionally
+       attempted to close a channel even though the channel might
+       already be closed.
+
+ - version: 2.3.5
+   date: "2021-07-15"
+   notes:
+   - type: feature
+     title: traffic-manager in multiple namespaces
+     body: >-
+       We now support installing multiple traffic managers in the same cluster.
+       This will allow operators to install deployments of telepresence that are
+       limited to certain namespaces.
+     image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+     docs: install/helm
+   - type: feature
+     title: No more dependence on kubectl
+     body: >-
+       Telepresence no longer depends on having an external
+       kubectl binary, which might not be present for
+       OpenShift users (who have oc instead of
+       kubectl).
+   - type: feature
+     title: Agent image now configurable
+     body: >-
+       We now support configuring which agent image and registry to use in the
+       config. This enables users whose laptop is in an air-gapped environment to
+       create personal intercepts without requiring a login. It also makes it easier
+       for those who are developing on Telepresence to specify which agent image should
+       be used. Env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+       used.
+     image: ./telepresence-2.3.5-agent-config.png
+     docs: reference/config/#images
+   - type: feature
+     title: Max gRPC receive size now configurable
+     body: >-
+       The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+     image: ./telepresence-2.3.5-grpc-max-receive-size.png
+     docs: reference/config/#grpc
+   - type: feature
+     title: CLI can be used in air-gapped environments
+     body: >-
+       While Telepresence will auto-detect whether your cluster is in an air-gapped environment,
+       we've added an option users can add to their config.yml to ensure the CLI acts like it
+       is in an air-gapped environment. Air-gapped environments require a manually installed
+       license.
+     docs: reference/cluster-config/#air-gapped-cluster
+     image: ./telepresence-2.3.5-skipLogin.png
+ - version: 2.3.4
+   date: "2021-07-09"
+   notes:
+   - type: bugfix
+     title: Improved IP log statements
+     body: >-
+       Some log statements were printing incorrect characters when they should have been printing IP addresses.
+       This has been resolved to include more accurate and useful logging.
+     docs: reference/config/#log-levels
+     image: ./telepresence-2.3.4-ip-error.png
+   - type: bugfix
+     title: Improved messaging when multiple services match a workload
+     body: >-
+       If multiple services matched a workload when performing an intercept, Telepresence would crash.
+       It now gives the correct error message, instructing the user on how to specify which
+       service the intercept should use.
+     image: ./telepresence-2.3.4-improved-error.png
+     docs: reference/intercepts
+   - type: bugfix
+     title: Traffic-manager creates services in its own namespace to determine subnet
+     body: >-
+       Telepresence will now determine the service subnet by creating a dummy service in its own
+       namespace, instead of the default namespace, which was causing RBAC permissions issues in
+       some clusters.
+     docs: reference/routing/#subnets
+   - type: bugfix
+     title: Telepresence connect respects pre-existing clusterrole
+     body: >-
+       When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+       cluster, Telepresence will no longer try to update the clusterrole.
+     docs: reference/rbac
+   - type: bugfix
+     title: Helm Chart fixed for clientRbac.namespaced
+     body: >-
+       The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+     docs: install/helm
+ - version: 2.3.3
+   date: "2021-07-07"
+   notes:
+   - type: feature
+     title: Traffic Manager Helm Chart
+     body: >-
+       Telepresence now supports installing the Traffic Manager via Helm.
+       This will make it easy for operators to install and configure the
+       server-side components of Telepresence separately from the CLI (which
+       in turn allows for better separation of permissions).
+     image: ./telepresence-2.3.3-helm.png
+     docs: install/helm/
+   - type: feature
+     title: Traffic-manager in custom namespace
+     body: >-
+       As the traffic-manager can now be installed in any
+       namespace via Helm, Telepresence can now be configured to look for the
+       Traffic Manager in a namespace other than ambassador.
+       This can be configured on a per-cluster basis.
+     image: ./telepresence-2.3.3-namespace-config.png
+     docs: reference/config
+   - type: feature
+     title: Intercept --to-pod
+     body: >-
+       telepresence intercept now supports a
+       --to-pod flag that can be used to port-forward sidecars'
+       ports from an intercepted pod.
+     image: ./telepresence-2.3.3-to-pod.png
+     docs: reference/intercepts
+   - type: change
+     title: Change in migration from edgectl
+     body: >-
+       Telepresence no longer automatically shuts down the old
+       api_version=1 edgectl daemon. If migrating
+       from such an old version of edgectl, you must now manually
+       shut down the edgectl daemon before running Telepresence.
+       This was already the case when migrating from the newer
+       api_version=2 edgectl.
+   - type: bugfix
+     title: Fixed error during shutdown
+     body: >-
+       The root daemon no longer terminates when the user daemon disconnects
+       from its gRPC streams, and instead waits to be terminated by the CLI.
+       Previously, this could cause problems with things not being cleaned up correctly.
+   - type: bugfix
+     title: Intercepts will survive deletion of intercepted pod
+     body: >-
+       An intercept will survive deletion of the intercepted pod, provided
+       that another pod is created (or already exists) that can take over.
+ - version: 2.3.2
+   date: "2021-06-18"
+   notes:
+   # Headliners
+   - type: feature
+     title: Service Port Annotation
+     body: >-
+       The mutator webhook for injecting traffic-agents now
+       recognizes a
+       telepresence.getambassador.io/inject-service-port
+       annotation to specify which port to intercept, bringing the
+       functionality of the --port flag to users who
+       use the mutator webhook in order to control Telepresence via
+       GitOps.
+     image: ./telepresence-2.3.2-svcport-annotation.png
+     docs: reference/cluster-config#service-port-annotation
+   - type: feature
+     title: Outbound Connections
+     body: >-
+       Outbound connections are now routed through the intercepted
+       Pods, which means that the connections originate from that
+       Pod from the cluster's perspective. This allows service
+       meshes to correctly identify the traffic.
+     docs: reference/routing/#outbound
+   - type: change
+     title: Inbound Connections
+     body: >-
+       Inbound connections from an intercepted agent are now
+       tunneled to the manager over the existing gRPC connection,
+       instead of establishing a new connection to the manager for
+       each inbound connection. This avoids interference from
+       certain service mesh configurations.
+     docs: reference/routing/#inbound
+
+   # RBAC changes
+   - type: change
+     title: Traffic Manager needs new RBAC permissions
+     body: >-
+       The Traffic Manager requires RBAC
+       permissions to list Nodes and Pods, and to create a dummy
+       Service in the manager's namespace.
+     docs: reference/routing/#subnets
+   - type: change
+     title: Reduced developer RBAC requirements
+     body: >-
+       The on-laptop client no longer requires RBAC permissions to list the Nodes
+       in the cluster or to create Services, as that functionality
+       has been moved to the Traffic Manager.
+
+   # Bugfixes
+   - type: bugfix
+     title: Able to detect subnets
+     body: >-
+       Telepresence will now detect the Pod CIDR ranges even if
+       they are not listed in the Nodes.
+     image: ./telepresence-2.3.2-subnets.png
+     docs: reference/routing/#subnets
+   - type: bugfix
+     title: Dynamic IP ranges
+     body: >-
+       The list of cluster subnets that the virtual network
+       interface will route is now configured dynamically and will
+       follow changes in the cluster.
+   - type: bugfix
+     title: No duplicate subnets
+     body: >-
+       Subnets fully covered by other subnets are now pruned
+       internally and thus never superfluously added to the
+       laptop's routing table.
+     docs: reference/routing/#subnets
+   - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+     title: Change in default timeout
+     body: >-
+       The trafficManagerAPI timeout default has
+       changed from 5 seconds to 15 seconds, in order to facilitate
+       the extended time it takes for the traffic-manager to do its
+       initial discovery of cluster info as a result of the above
+       bugfixes.
+   - type: bugfix
+     title: Removal of DNS config files on macOS
+     body: >-
+       On macOS, files generated under
+       /etc/resolver/ as the result of using
+       include-suffixes in the cluster config are now
+       properly removed on quit.
+     docs: reference/routing/#macos-resolver
+
+   - type: bugfix
+     title: Large file transfers
+     body: >-
+       Telepresence no longer erroneously terminates connections
+       early when sending a large HTTP response from an intercepted
+       service.
+   - type: bugfix
+     title: Race condition in shutdown
+     body: >-
+       When shutting down the user-daemon or root-daemon on the
+       laptop, telepresence quit and related commands
+       no longer return early before everything is fully shut down.
+       Now it can be counted on that, by the time the command has
+       returned, all of the side effects on the laptop have
+       been cleaned up.
+ - version: 2.3.1
+   date: "2021-06-14"
+   notes:
+   - title: DNS Resolver Configuration
+     body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which will enable users to determine which local and remote resolvers to use and which suffixes should be ignored and included. These can be configured on a per-cluster basis."
+     image: ./telepresence-2.3.1-dns.png
+     docs: reference/config
+     type: feature
+   - title: AlsoProxy Configuration
+     body: "Telepresence now supports also proxying user-specified subnets so that they can access external services only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis and each subnet is added to the TUN device so that requests are routed to the cluster for IPs that fall within that subnet."
+     image: ./telepresence-2.3.1-alsoProxy.png
+     docs: reference/config
+     type: feature
+   - title: Mutating Webhook for Injecting Traffic Agents
+     body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+     image: ./telepresence-2.3.1-inject.png
+     docs: reference/rbac
+     type: feature
+   - title: Traffic Manager Connect Timeout
+     body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to facilitate the extended time it takes to apply everything needed for the mutator webhook."
+     image: ./telepresence-2.3.1-trafficmanagerconnect.png
+     docs: reference/config
+     type: change
+   - title: Fix for large file transfers
+     body: "Fixed a tun-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+     image: ./telepresence-2.3.1-large-file-transfer.png
+     docs: reference/tun-device
+     type: bugfix
+   - title: Brew Formula Changed
+     body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via Brew like so: brew install datawire/blackbird/telepresence."
+     image: ./telepresence-2.3.1-brew.png
+     docs: install/
+     type: change
+ - version: 2.3.0
+   date: "2021-06-01"
+   notes:
+   - title: Brew install Telepresence
+     body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up-to-date with the latest telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+     image: ./telepresence-2.3.0-homebrew.png
+     docs: install/
+     type: feature
+   - title: TCP and UDP routing via Virtual Network Interface
+     body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN-device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and will also route DNS requests to the cluster and forward them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence would use firewall rules and were only capable of routing TCP."
+     image: ./tunnel.jpg
+     docs: reference/tun-device
+     type: feature
+   - title: SSH is no longer used
+     body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the sftp protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+     image: ./no-ssh.png
+     docs: reference/tun-device/#no-ssh-required
+     type: change
+   - title: Running in a Docker container
+     body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+     image: ./run-tp-in-docker.png
+     docs: reference/inside-container
+     type: feature
+   - title: Configurable Log Levels
+     body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+     image: ./telepresence-2.3.0-loglevels.png
+     docs: reference/config/#log-levels
+     type: feature
+ - version: 2.2.2
+   date: "2021-05-17"
+   notes:
+   - title: Legacy Telepresence subcommands
+     body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+     image: ./telepresence-2.2.png
+     docs: install/migrate-from-legacy/
+     type: feature
diff --git a/docs/telepresence/latest/troubleshooting/index.md b/docs/telepresence/latest/troubleshooting/index.md
new file mode 100644
index 000000000..5a477f20a
--- /dev/null
+++ b/docs/telepresence/latest/troubleshooting/index.md
@@ -0,0 +1,331 @@
+---
+title: "Telepresence Troubleshooting"
+description: "Learn how to troubleshoot common issues related to Telepresence, including intercept issues, cluster connection issues, and errors related to Ambassador Cloud."
+---
+# Troubleshooting
+
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
+
+## Error on accessing preview URL: `First record does not look like a TLS handshake`
+
+The service you are intercepting is likely not using TLS; however, when configuring the intercept you indicated that it does use TLS. Remove the intercept with `telepresence leave [deployment name]` and recreate it, setting `TLS` to `n`. Telepresence tries to intelligently determine these settings for you when creating an intercept and offers them as defaults, but odd service configurations might cause it to suggest the wrong settings.
+
+## Error on accessing preview URL: Detected a 301 Redirect Loop
+
+If your ingress is set to redirect HTTP requests to HTTPS and your web app uses HTTPS, but you configure the intercept to not use TLS, you will get this error when opening the preview URL. Remove the intercept with `telepresence leave [deployment name]` and recreate it, selecting the correct port and setting `TLS` to `y` when prompted.
+
+## Connecting to a cluster via VPN doesn't work
+
+There are a few different issues that could arise when working with a VPN. Please see the [dedicated page](../reference/vpn) on Telepresence and VPNs to learn more on how to fix these.
+
+## Connecting to a cluster hosted in a VM on the workstation doesn't work
+
+The cluster probably has access to the host's network and gets confused when it is mapped by Telepresence.
+Please check the [cluster in hosted VM](../howtos/cluster-in-vm) page for more details.
+
+## Your GitHub organization isn't listed
+
+Ambassador Cloud needs access granted to your GitHub organization as a
+third-party OAuth app.
+If an organization isn't listed during login, then the correct access has not been granted.
+
+The quickest way to resolve this is to go to the **GitHub menu** →
+**Settings** → **Applications** → **Authorized OAuth Apps** →
+**Ambassador Labs**. An organization owner will have a **Grant**
+button; anyone who is not an owner will have a **Request** button,
+which sends an email to the owner. If an access request has been
+denied in the past, the user will not see the **Request** button and
+will have to reach out to the owner directly.
+
+Once access is granted, log out of Ambassador Cloud and log back in;
+you should see the GitHub organization listed.
+
+The organization owner can go to the **GitHub menu** → **Your
+organizations** → **[org name]** → **Settings** → **Third-party
+access** to see if Ambassador Labs has access already or authorize a
+request for access (only owners will see **Settings** on the
+organization page). Clicking the pencil icon will show the
+permissions that were granted.
+
+GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization).
+
+### Granting or requesting access on initial login
+
+When using GitHub as your identity provider, the first time you log in
+to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to
+access your organizations and certain user data.
+
+Authorize Ambassador Labs form
+
+Any listed organization with a green check has already granted access
+to Ambassador Labs (you still need to authorize to allow Ambassador
+Labs to read your user data and organization membership).
+
+Any organization with a red "X" requires access to be granted to
+Ambassador Labs. Owners of the organization will see a **Grant**
+button. Anyone who is not an owner will see a **Request** button.
+This will send an email to the organization owner requesting approval
+to access the organization. If an access request has been denied in
+the past, the user will not see the **Request** button and will have
+to reach out to the owner directly.
+
+Once approval is granted, you will have to log out of Ambassador Cloud
+then back in to select the organization.
+
+## Volume mounts are not working on macOS
+
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there have been some issues using `brew install sshfs` on a macOS workstation, because the required component `osxfuse` (now named `macfuse`) isn't open source and is hence no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps:
+
+1. Remove old sshfs, macfuse, osxfuse using `brew uninstall`
+2. `brew install --cask macfuse`
+3. `brew install gromgit/fuse/sshfs-mac`
+4. `brew link --overwrite sshfs-mac`
+
+Now `sshfs -V` shows the correct version, e.g.:
+```
+$ sshfs -V
+SSHFS version 2.10
+FUSE library version: 2.9.9
+fuse: no mount point
+```
+
+5. Next, try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences).
+6. Approve the needed permission
+7. Reboot your computer.
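+
+Once the machine is back up, one way to confirm that mounts work again is to run an intercept that performs a mount and list the remote filesystem. This is only a sketch; the service name, port, and mount path below are placeholders for your own values:
+
+```console
+$ telepresence intercept example-service --port 8080 --mount=/tmp/example-mount
+$ ls /tmp/example-mount
+var/
+```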
+
+## Volume mounts are not working on Linux
+It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts.
+
+After you've installed `sshfs`, if mounts still aren't working:
+1. Uncomment `user_allow_other` in `/etc/fuse.conf`
+2. Add your user to the "fuse" group with: `sudo usermod -a -G fuse <your username>`
+3. Restart your computer
+
+
+## Authorization for preview URLs
+Services that require authentication may not function correctly with preview URLs. When accessing a preview URL, it is necessary to configure your intercept to use custom authentication headers for the preview URL. If you don't, you may receive an unauthorized response or be redirected to the login page for Ambassador Cloud.
+
+You can accomplish this by using a browser extension such as the ModHeader extension for [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+or [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/).
+
+It is important to note that Ambassador Cloud does not support OAuth browser flows when accessing a preview URL, but other auth schemes such as Basic access authentication and session cookies will work.
+
+## Distributed tracing
+
+Telepresence is a complex piece of software with components running locally on your laptop and remotely in a distributed Kubernetes environment.
+As such, troubleshooting investigations require tools that can give users, cluster admins, and maintainers a broad view of what these distributed components are doing.
+In order to facilitate such investigations, telepresence >= 2.7.0 includes distributed tracing functionality via [OpenTelemetry](https://opentelemetry.io/).
+Tracing is controlled via a `grpcPort` flag under the `tracing` configuration of your `values.yaml`. It is enabled by default and can be disabled by setting `grpcPort` to `0`, or `tracing` to an empty object:
+
+```yaml
+tracing: {}
+```
+
+If tracing is configured, the traffic manager and traffic agents will open a gRPC server on the given port, from which telepresence clients will be able to gather trace data.
+To collect trace data, ensure you're connected to the cluster, perform whatever operation you'd like to debug, and then run `gather-traces` immediately after:
+
+```console
+$ telepresence gather-traces
+```
+
+This command will gather traces from both the cloud and local components of telepresence and output them into a file called `traces.gz` in your current working directory:
+
+```console
+$ file traces.gz
+  traces.gz: gzip compressed data, original size modulo 2^32 158255
+```
+
+Please do not try to open or uncompress this file, as it contains binary trace data.
+Instead, you can use the `upload-traces` command built into telepresence to send it to an [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) for ingestion:
+
+```console
+$ telepresence upload-traces traces.gz $OTLP_GRPC_ENDPOINT
+```
+
+Once that's been done, the traces will be visible via whatever means your usual collector allows. For example, this is what they look like when loaded into Jaeger's [OTLP API](https://www.jaegertracing.io/docs/1.36/apis/#opentelemetry-protocol-stable):
+
+![Jaeger Interface](../images/tracing.png)
+
+**Note:** The host and port provided for the `OTLP_GRPC_ENDPOINT` must accept OTLP-formatted spans (instead of, e.g., Jaeger- or Zipkin-specific spans) via a gRPC API (instead of the HTTP API that is also available in some collectors).
+
+**Note:** Since traces are not automatically shipped to the backend by telepresence, they are stored in memory. Hence, to avoid running telepresence components out of memory, only the last 10MB of trace data are available for export.
+
+## No Sidecar Injected in GKE private clusters
+
+An attempt to `telepresence intercept` results in a timeout, and upon examination of the pods (`kubectl get pods`) it's discovered that the intercept command did not inject a sidecar into the workload's pods:
+
+```bash
+$ kubectl get pod
+NAME                         READY   STATUS    RESTARTS   AGE
+echo-easy-7f6d54cff8-rz44k   1/1     Running   0          5m5s
+
+$ telepresence intercept echo-easy -p 8080
+telepresence: error: connector.CreateIntercept: request timed out while waiting for agent echo-easy.default to arrive
+$ kubectl get pod
+NAME                        READY   STATUS    RESTARTS   AGE
+echo-easy-d8dc4cc7c-27567   1/1     Running   0          2m9s
+
+# Notice how 1/1 containers are ready.
+```
+
+If this is occurring in a GKE cluster with private networking enabled, it is likely due to firewall rules blocking the
+Traffic Manager's webhook injector from the API server.
+To fix this, add a firewall rule allowing your cluster's master nodes to access TCP port `443` in your cluster's pods,
+or change the port number that Telepresence is using for the agent injector by providing the number of an allowed port
+using the Helm chart value `agentInjector.webhook.port`.
+Please refer to the [telepresence install instructions](../install/cloud#gke) or the [GCP docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) for more information on how to resolve this.
+
+## Injected init-container doesn't function properly
+
+The init-container is injected to insert `iptables` rules that redirect port numbers from the app container to the
+traffic-agent sidecar. This is necessary when the service's `targetPort` is numeric. It requires elevated privileges
+(`NET_ADMIN` capabilities), and the inserted rules may get overridden by `iptables` rules inserted by other vendors,
+such as Istio or Linkerd.
+
+Injection of the init-container can often be avoided by using a `targetPort` _name_ instead of a number, and ensuring
+that the corresponding container's `containerPort` is also named. This example uses the name "http", but any valid
+name will do:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  ...
+spec:
+  ...
+  containers:
+    - ...
+      ports:
+        - name: http
+          containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+  ...
+spec:
+  ...
+  ports:
+    - port: 80
+      targetPort: http
+```
+
+Telepresence's mutating webhook will refrain from injecting an init-container when the `targetPort` is a name. Instead,
+it will do the following during the injection of the traffic-agent:
+
+1. Rename the designated container's port by prefixing it (i.e., `containerPort: http` becomes `containerPort: tm-http`).
+2. Let the container port of our injected traffic-agent use the original name (i.e., `containerPort: http`).
+
+Kubernetes takes care of the rest and will now associate the service's `targetPort` with our traffic-agent's
+`containerPort`. A sketch of the resulting pod spec follows this section.
+
+### Important note
+If the service is "headless" (using `ClusterIP: None`), then using named ports won't help, because the `targetPort` will
+not get remapped. A headless service will always require the init-container.
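+
+As a sketch of the renaming described above, here is roughly what the relevant parts of the pod spec look like after the traffic-agent has been injected for the named-port example. The container names, image tag, and agent port below are illustrative, not authoritative:
+
+```yaml
+spec:
+  containers:
+    - name: app                  # the original application container
+      ports:
+        - name: tm-http          # renamed from "http" by the webhook
+          containerPort: 8080
+    - name: traffic-agent        # the injected sidecar
+      image: docker.io/datawire/tel2:<version>
+      ports:
+        - name: http             # takes over the original port name
+          containerPort: 9900
+```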
+
+## Error connecting to GKE or EKS cluster
+
+GKE and EKS require a plugin that utilizes their respective IAM providers.
+You will need to install the [gke](../install/cloud#gke-authentication-plugin) or [eks](../install/cloud#eks-authentication-plugin) plugins
+for Telepresence to connect to your cluster.
+
+## `too many files open` error when running `telepresence connect` on Linux
+
+If `telepresence connect` on Linux fails with a `too many files open` message in the logs, then check if `fs.inotify.max_user_instances` is set too low. Check the current setting with `sysctl fs.inotify.max_user_instances` and increase it temporarily with `sudo sysctl -w fs.inotify.max_user_instances=512`. For more information about permanently increasing it, see [Kernel inotify watch limit reached](https://unix.stackexchange.com/a/13757/514457).
+
+## Connected to cluster via VPN but IPs don't resolve
+
+If `telepresence connect` succeeds, but you find yourself unable to reach services on your cluster, a routing conflict may be to blame. This frequently happens when connecting to a VPN at the same time as Telepresence,
+since VPN clients often add routes that conflict with those added by Telepresence. To debug this, pick an IP address in the cluster and get its route information. In this case, we'll get the route for `100.64.2.3`, and discover
+that it's running through a `tailscale` device.
+
+```console
+$ route -n get 100.64.2.3
+   route to: 100.64.2.3
+destination: 100.64.0.0
+       mask: 255.192.0.0
+  interface: utun4
+      flags:
+ recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
+       0         0         0         0         0         0      1280         0
+```
+
+Note that on macOS it's difficult to determine which software a virtual interface's name corresponds to -- `utun4` doesn't indicate that it was created by Tailscale.
+One option is to look at the output of `ifconfig` before and after connecting to your VPN to see if the interface in question is being added upon connection.
+
+```console
+$ ip route get 100.64.2.3
+100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0
+```
+
+```console
+$ Find-NetRoute -RemoteIPAddress 100.64.2.3
+
+IPAddress         : 100.102.111.26
+InterfaceIndex    : 29
+InterfaceAlias    : Tailscale
+AddressFamily     : IPv4
+Type              : Unicast
+PrefixLength      : 32
+PrefixOrigin      : Manual
+SuffixOrigin      : Manual
+AddressState      : Preferred
+ValidLifetime     : Infinite ([TimeSpan]::MaxValue)
+PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
+SkipAsSource      : False
+PolicyStore       : ActiveStore
+
+
+Caption            :
+Description        :
+ElementName        :
+InstanceID         : ;::8;;;8
+```
+
+This will tell you which device the traffic is being routed through. As a rule, if the traffic is not being routed by the Telepresence device,
+your VPN may need to be reconfigured, as its routing configuration is conflicting with Telepresence. One way to determine if this is the case
+is to run `telepresence quit -s`, check the route for an IP in the cluster (see the commands above), run `telepresence connect`, and re-run the commands to see if the output changes (see the sketch below).
+If it doesn't change, that means Telepresence is unable to override your VPN routes, and your VPN may need to be reconfigured. Talk to your network admins
+to configure it such that clients do not add routes that conflict with the pod and service CIDRs of the clusters. How this is done will
+vary depending on the VPN provider.
+Future versions of Telepresence will be smarter about informing you of such conflicts upon connection.
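+
+As a sketch of that before-and-after check on Linux (the cluster IP and interface names are examples taken from the output above; `tel0` is the name Telepresence typically gives its TUN device):
+
+```console
+$ telepresence quit -s
+$ ip route get 100.64.2.3
+100.64.2.3 dev tailscale0 table 52 src 100.111.250.89 uid 0
+$ telepresence connect
+$ ip route get 100.64.2.3
+100.64.2.3 dev tel0 src 100.111.250.89 uid 0
+```
+
+If the route still shows the VPN device after connecting, Telepresence could not take over that subnet and the VPN configuration needs to change.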
diff --git a/docs/telepresence/latest/versions.yml b/docs/telepresence/latest/versions.yml
new file mode 100644
index 000000000..3ce9080fe
--- /dev/null
+++ b/docs/telepresence/latest/versions.yml
@@ -0,0 +1,5 @@
+version: "2.15.1"
+dlVersion: "latest"
+docsVersion: "2.15"
+branch: release/v2
+productName: "Telepresence"
diff --git a/docs/telepresence/pre-release b/docs/telepresence/pre-release
deleted file mode 120000
index 81b41bff0..000000000
--- a/docs/telepresence/pre-release
+++ /dev/null
@@ -1 +0,0 @@
-../../../docs/telepresence/v2
\ No newline at end of file
diff --git a/docs/telepresence/pre-release/community.md b/docs/telepresence/pre-release/community.md
new file mode 100644
index 000000000..922457c9d
--- /dev/null
+++ b/docs/telepresence/pre-release/community.md
@@ -0,0 +1,12 @@
+# Community
+
+## Contributor's guide
+Please review our [contributor's guide](https://github.com/telepresenceio/telepresence/blob/release/v2/DEVELOPING.md)
+on GitHub to learn how you can help make Telepresence better.
+
+## Changelog
+Our [changelog](https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md)
+describes new features, bug fixes, and updates to each version of Telepresence.
+
+## Meetings
+Check out our community [meeting schedule](https://github.com/telepresenceio/telepresence/blob/release/v2/MEETING_SCHEDULE.md) for opportunities to interact with Telepresence developers.
diff --git a/docs/telepresence/pre-release/concepts/context-prop.md b/docs/telepresence/pre-release/concepts/context-prop.md
new file mode 100644
index 000000000..dc9ee18f3
--- /dev/null
+++ b/docs/telepresence/pre-release/concepts/context-prop.md
@@ -0,0 +1,36 @@
+# Context propagation
+
+**Context propagation** is the transfer of request metadata across the services and remote processes of a distributed system. Telepresence uses context propagation to intelligently route requests to the appropriate destination.
+
+This metadata is the context that is transferred across system services. It commonly takes the form of HTTP headers, so context propagation is usually referred to as header propagation. A component of the system (like a proxy or performance monitoring tool) injects the headers into requests as it relays them.
+
+Propagation means that no service or other middleware in the request path strips the headers away; each one passes the injected context along, letting it flow to all downstream services and processes.
+
+
+## What is distributed tracing?
+
+Distributed tracing is a technique for troubleshooting and profiling distributed microservices applications and is a common application of context propagation. It is becoming a key component for debugging.
+
+In a microservices architecture, a single request may trigger additional requests to other services. The originating service may not cause the failure or slow request directly; a downstream dependent service may instead be to blame.
+
+An application like Datadog or New Relic will use agents running on services throughout the system to inject traffic with HTTP headers (the context). They will track the request’s entire path from origin to destination to reply, gathering data on the routes the requests follow and on their performance. The injected headers follow the [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) (or another header format, such as [B3 headers](https://github.com/openzipkin/b3-propagation)), which helps ensure the headers are maintained through every service without being stripped (the propagation).
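+
+As an illustration of what propagation asks of each service, here is a minimal Go sketch of a handler that copies the W3C `traceparent` header from an incoming request onto the outbound request it makes; the upstream URL is a placeholder:
+
+```go
+package main
+
+import (
+	"io"
+	"net/http"
+)
+
+// relayHandler forwards each request to an upstream service, copying the
+// W3C trace-context header along so the propagation chain is not broken.
+func relayHandler(w http.ResponseWriter, r *http.Request) {
+	req, err := http.NewRequest(http.MethodGet, "http://upstream.example/api", nil)
+	if err != nil {
+		http.Error(w, err.Error(), http.StatusInternalServerError)
+		return
+	}
+	// Propagate the injected context instead of stripping it away.
+	if tp := r.Header.Get("traceparent"); tp != "" {
+		req.Header.Set("traceparent", tp)
+	}
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		http.Error(w, err.Error(), http.StatusBadGateway)
+		return
+	}
+	defer resp.Body.Close()
+	io.Copy(w, resp.Body)
+}
+
+func main() {
+	http.HandleFunc("/", relayHandler)
+	http.ListenAndServe(":8080", nil)
+}
+```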
+
+
+## What are intercepts and preview URLs?
+
+[Intercepts](../../reference/intercepts) and [preview
+URLs](../../howtos/preview-urls/) are functions of Telepresence that
+enable easy local development from a remote Kubernetes cluster and
+offer a preview environment for sharing and real-time collaboration.
+
+Telepresence uses custom HTTP headers and header propagation to
+identify which traffic to intercept, both for plain personal intercepts
+and for personal intercepts with preview URLs. These techniques are
+more commonly used for distributed tracing, so this is a somewhat
+unorthodox application, but the mechanisms are already widely deployed
+because of the prevalence of tracing. The headers facilitate the smart
+routing of requests either to live services in the cluster or to
+services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to [Ambassador Cloud](https://app.getambassador.io) with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
diff --git a/docs/telepresence/pre-release/concepts/devloop.md b/docs/telepresence/pre-release/concepts/devloop.md
new file mode 100644
index 000000000..8b1fbf354
--- /dev/null
+++ b/docs/telepresence/pre-release/concepts/devloop.md
@@ -0,0 +1,50 @@
+# The developer experience and the inner dev loop
+
+## How is the developer experience changing?
+
+The developer experience is the workflow a developer uses to develop, test, deploy, and release software.
+
+Typically this experience has consisted of both an inner dev loop and an outer dev loop. The inner dev loop is where the individual developer codes and tests, and once the developer pushes their code to version control, the outer dev loop is triggered.
+
+The outer dev loop is _everything else_ that happens leading up to release. This includes code merge, automated code review, test execution, deployment, [controlled (canary) release](https://www.getambassador.io/docs/argo/latest/concepts/canary/), and observation of results. The modern outer dev loop might include, for example, an automated CI/CD pipeline as part of a [GitOps workflow](https://www.getambassador.io/docs/argo/latest/concepts/gitops/#what-is-gitops) and a progressive delivery strategy relying on automated canaries, i.e. to make the outer loop as fast, efficient and automated as possible.
+
+Cloud-native technologies have fundamentally altered the developer experience in two ways: one, developers now have to take extra steps in the inner dev loop; two, developers need to be concerned with the outer dev loop as part of their workflow, even if most of their time is spent in the inner dev loop.
+
+Engineers now must design and build distributed service-based applications _and_ also assume responsibility for the full development life cycle. The new developer experience means that developers can no longer rely on monolithic application development best practices, such as checking out the entire codebase and coding locally with a rapid “live-reload” inner development loop.
+Now developers have to manage external dependencies, build containers, and implement orchestration configuration (e.g. Kubernetes YAML). This may appear trivial at first glance, but it adds development time to the equation.
+
+## What is the inner dev loop?
+
+The inner dev loop is the single developer workflow. A single developer should be able to set up and use an inner dev loop to code and test changes quickly.
+
+Even within the Kubernetes space, developers will find much of the inner dev loop familiar. That is, code can still be written locally at a level that a developer controls and committed to version control.
+
+In a traditional inner dev loop, if a typical developer codes for 360 minutes (6 hours) a day, with a traditional local iterative development loop of 5 minutes — 3 coding, 1 building, i.e. compiling/deploying/reloading, 1 testing/inspecting, and 10-20 seconds for committing code — they can expect to make ~70 iterations of their code per day. Any one of these iterations could be a release candidate. The only “developer tax” being paid here is for the commit process, which is negligible.
+
+![traditional inner dev loop](../../images/trad-inner-dev-loop.png)
+
+## In search of lost time: How does containerization change the inner dev loop?
+
+The inner dev loop is where writing and testing code happens, and time is critical for maximum developer productivity and getting features in front of end users. The faster the feedback loop, the faster developers can refactor and test again.
+
+Changes to the inner dev loop process, i.e., containerization, threaten to slow this development workflow down. Coding stays the same in the new inner dev loop, but code has to be containerized. The _containerized_ inner dev loop requires a number of new steps:
+
+* packaging code in containers
+* writing a manifest to specify how Kubernetes should run the application (e.g., YAML-based configuration information, such as how much memory should be given to a container)
+* pushing the container to the registry
+* deploying containers in Kubernetes
+
+Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time increases to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme, that’s a ~40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive.
+
+
+![container inner dev loop](../../images/container-inner-dev-loop.png)
+
+## Tackling the slow inner dev loop
+
+A slow inner dev loop can negatively impact frontend and backend teams, delaying work on individual and team levels and slowing releases into production overall.
+
+For example:
+
+* Frontend developers have to wait for previews of backend changes on a shared dev/staging environment (for example, until CI/CD deploys a new version) and/or rely on mocks/stubs/virtual services when coding their application locally. These changes are only verifiable by going through the CI/CD process to build and deploy within a target environment.
+* Backend developers have to wait for CI/CD to build and deploy their app to a target environment to verify that their code works correctly with cluster or cloud-based dependencies as well as to share their work to get feedback.
+
+New technologies and tools can facilitate cloud-native, containerized development.
+And in the case of a sluggish inner dev loop, developers can accelerate productivity with tools that help speed the loop up again.
diff --git a/docs/telepresence/pre-release/concepts/devworkflow.md b/docs/telepresence/pre-release/concepts/devworkflow.md
new file mode 100644
index 000000000..fa24fc2bd
--- /dev/null
+++ b/docs/telepresence/pre-release/concepts/devworkflow.md
@@ -0,0 +1,7 @@
+# The changing development workflow
+
+A changing workflow is one of the main challenges for developers adopting Kubernetes. Software development itself isn’t the challenge. Developers can continue to [code using the languages and tools with which they are most productive and comfortable](https://www.getambassador.io/resources/kubernetes-local-dev-toolkit/). That’s the beauty of containerized development.
+
+However, the cloud-native, Kubernetes-based approach to development means adopting a new development workflow and development environment. Beyond the basics, such as figuring out how to containerize software, [how to run containers in Kubernetes](https://www.getambassador.io/docs/kubernetes/latest/concepts/appdev/), and how to deploy changes into containers, Kubernetes adds complexity before it delivers efficiency. The promise of a “quicker way to develop software” applies at least within the traditional aspects of the inner dev loop, where the single developer codes, builds and tests their software. But both within the inner dev loop and once code is pushed into version control to trigger the outer dev loop, the developer experience changes considerably from what many developers are used to.
+
+In this new paradigm, new steps are added to the inner dev loop, and more broadly, the developer begins to share responsibility for the full life cycle of their software. Inevitably this means taking new workflows and tools on board to ensure that the full life cycle continues full speed ahead.
diff --git a/docs/telepresence/pre-release/concepts/faster.md b/docs/telepresence/pre-release/concepts/faster.md
new file mode 100644
index 000000000..b649e4153
--- /dev/null
+++ b/docs/telepresence/pre-release/concepts/faster.md
@@ -0,0 +1,25 @@
+# Making the remote local: Faster feedback, collaboration and debugging
+
+With the goal of achieving [fast, efficient development](https://www.getambassador.io/use-case/local-kubernetes-development/), developers need a set of approaches to bridge the gap between remote Kubernetes clusters and local development, and reduce time to feedback and debugging.
+
+## How should I set up a Kubernetes development environment?
+
+[Setting up a development environment](https://www.getambassador.io/resources/development-environments-microservices/) for Kubernetes can be much more complex than the setup for traditional web applications. Creating and maintaining a Kubernetes development environment relies on a number of external dependencies, such as databases or authentication.
+
+While there are several ways to set up a Kubernetes development environment, most introduce complexities and impediments to speed. The dev environment should be set up to easily code and test in conditions where a service can access the resources it depends on.
+
+A good way to meet the goals of faster feedback, possibilities for collaboration, and scale in a realistic production environment is the "single service local, all other remote" environment. Developing in a fully remote environment offers some benefits, but for developers, it offers the slowest possible feedback loop.
+With local development in a remote environment, the developer retains considerable control while using tools like [Telepresence](../../quick-start/) to facilitate fast feedback, debugging and collaboration.
+
+## What is Telepresence?
+
+Telepresence is an open source tool that lets developers [code and test microservices locally against a remote Kubernetes cluster](../../quick-start/). Telepresence facilitates more efficient development workflows while relieving the need to worry about other service dependencies.
+
+## How can I get fast, efficient local development?
+
+The dev loop can be jump-started with the right development environment and Kubernetes development tools to support speed, efficiency and collaboration. Telepresence is designed to let Kubernetes developers code as though their laptop is in their Kubernetes cluster, enabling the service to run locally and be proxied into the remote cluster. Telepresence runs code locally and forwards requests to and from the remote Kubernetes cluster, bypassing the much slower process of waiting for a container to build, pushing it to a registry, and deploying to production.
+
+A rapid and continuous feedback loop is essential for productivity and speed; Telepresence enables the fast, efficient feedback loop to ensure that developers can access the rapid local development loop they rely on without disrupting their own or other developers' workflows. Telepresence safely intercepts traffic from the production cluster and enables near-instant testing of code, local debugging in production, and [preview URL](../../howtos/preview-urls/) functionality to share dev environments with others for multi-user collaboration.
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This pod proxies data from the Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+The intercept proxy works thanks to context propagation, which is most frequently associated with distributed tracing but also plays a key role in controllable intercepts and preview URLs.
diff --git a/docs/telepresence/pre-release/concepts/intercepts.md b/docs/telepresence/pre-release/concepts/intercepts.md new file mode 100644 index 000000000..dd7c14d23 --- /dev/null +++ b/docs/telepresence/pre-release/concepts/intercepts.md @@ -0,0 +1,166 @@ +--- +title: "Types of intercepts" +description: "Short demonstration of personal vs global intercepts" +--- + +import React from 'react'; + +import Alert from '@material-ui/lab/Alert'; +import AppBar from '@material-ui/core/AppBar'; +import Paper from '@material-ui/core/Paper'; +import Tab from '@material-ui/core/Tab'; +import TabContext from '@material-ui/lab/TabContext'; +import TabList from '@material-ui/lab/TabList'; +import TabPanel from '@material-ui/lab/TabPanel'; +import Animation from '@src/components/InterceptAnimation'; + +export function TabsContainer({ children, ...props }) { + const [state, setState] = React.useState({curTab: "personal"}); + React.useEffect(() => { + const query = new URLSearchParams(window.location.search); + var interceptType = query.get('intercept') || "personal"; + if (state.curTab != interceptType) { + setState({curTab: interceptType}); + } + }, [state, setState]) + var setURL = function(newTab) { + history.replaceState(null,null, + `?intercept=${newTab}${window.location.hash}`, + ); + }; + return ( +
+    <div className="TabGroup">
+      <TabContext value={state.curTab}>
+        <AppBar className="TabBar" elevation={0} position="static">
+          <TabList onChange={(ev, newTab) => {setState({curTab: newTab}); setURL(newTab)}} aria-label="intercept types">
+            <Tab className="TabHead" value="regular" label="No intercept"/>
+            <Tab className="TabHead" value="global" label="Global intercept"/>
+            <Tab className="TabHead" value="personal" label="Personal intercept"/>
+          </TabList>
+        </AppBar>
+        {children}
+      </TabContext>
+    </div>
+  );
+};
+
+# Types of intercepts
+
+
+
+
+# No intercept
+
+
+
+
+This is the normal operation of your cluster without Telepresence.
+
+
+
+
+
+# Global intercept
+
+
+
+
+**Global intercepts** replace the Kubernetes "Orders" service with the
+Orders service running on your laptop. The users see no change, but
+with all the traffic coming to your laptop, you can observe and debug
+with all your dev tools.
+
+
+
+### Creating and using global intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-match=all
+    ```
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service:
+
+    All requests will be sent to the version of your service that is
+    running in the local development environment.
+
+
+
+
+# Personal intercept
+
+**Personal intercepts** allow you to be selective and intercept only
+some of the traffic to a service while not interfering with the rest
+of the traffic. This allows you to share a cluster with others on your
+team without interfering with their work.
+
+
+
+
+In the illustration above, **orange** requests are being made by
+Developer 2 on their laptop and the **green** requests are made by a
+teammate, Developer 1, on a different laptop.
+
+Each developer can intercept the Orders service for their requests only,
+while sharing the rest of the development environment.
+
+
+
+### Creating and using personal intercepts
+
+ 1. Creating the intercept: Intercept your service from your CLI:
+
+    ```shell
+    telepresence intercept SERVICENAME --http-match=Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+    We're using
+    `Personal-Intercept=126a72c7-be8b-4329-af64-768e207a184b` as the
+    header for the sake of the example, but you can use any
+    `key=value` pair you want, or `--http-match=auto` to have it
+    choose something automatically.
+
+
+
+    Make sure your current kubectl context points to the target
+    cluster. If your service is running in a different namespace than
+    your current active context, use or change the `--namespace` flag.
+
+
+
+ 2. Using the intercept: Send requests to your service by passing the
+    HTTP header:
+
+    ```http
+    Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b
+    ```
+
+
+
+    Need a browser extension to modify or remove HTTP request headers?
+
+    [Chrome](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj)
+    {' '}
+    [Firefox](https://addons.mozilla.org/en-CA/firefox/addon/modheader-firefox/)
+
+
+
+ 3. Using the intercept: Send requests to your service without the
+    HTTP header:
+
+    Requests without the header will be sent to the version of your
+    service that is running in the cluster. This enables you to share
+    the cluster with a team! A quick sketch of the difference follows
+    this list.
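+
+For example, assuming the intercepted service is reachable through an ingress at `example.cluster.test` (a placeholder hostname), the difference between steps 2 and 3 looks like this:
+
+```console
+$ # Routed to the local copy on your laptop (header matches the intercept):
+$ curl -H 'Personal-Intercept: 126a72c7-be8b-4329-af64-768e207a184b' http://example.cluster.test/orders
+
+$ # Routed to the version running in the cluster (no header):
+$ curl http://example.cluster.test/orders
+```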
+
+
+
diff --git a/docs/telepresence/pre-release/doc-links.yml b/docs/telepresence/pre-release/doc-links.yml
new file mode 100644
index 000000000..fec17daf4
--- /dev/null
+++ b/docs/telepresence/pre-release/doc-links.yml
@@ -0,0 +1,84 @@
+  - title: Quick start
+    link: quick-start
+  - title: Install Telepresence
+    items:
+      - title: Install
+        link: install/
+      - title: Upgrade
+        link: install/upgrade/
+      - title: Install Traffic Manager with Helm
+        link: install/helm/
+      - title: Migrate from legacy Telepresence
+        link: install/migrate-from-legacy/
+  - title: Core concepts
+    items:
+      - title: The changing development workflow
+        link: concepts/devworkflow
+      - title: The developer experience and the inner dev loop
+        link: concepts/devloop
+      - title: 'Making the remote local: Faster feedback, collaboration and debugging'
+        link: concepts/faster
+      - title: Context propagation
+        link: concepts/context-prop
+      - title: Types of intercepts
+        link: concepts/intercepts
+  - title: How do I...
+    items:
+      - title: Intercept a service in your own environment
+        link: howtos/intercepts
+      - title: Share dev environments with preview URLs
+        link: howtos/preview-urls
+      - title: Proxy outbound traffic to my cluster
+        link: howtos/outbound
+      - title: Send requests to an intercepted service
+        link: howtos/request
+  - title: Technical reference
+    items:
+      - title: Architecture
+        link: reference/architecture
+      - title: Client reference
+        link: reference/client
+        items:
+          - title: login
+            link: reference/client/login
+      - title: Laptop-side configuration
+        link: reference/config
+      - title: Cluster-side configuration
+        link: reference/cluster-config
+      - title: Using Docker for intercepts
+        link: reference/docker-run
+      - title: Running Telepresence in a Docker container
+        link: reference/inside-container
+      - title: Environment variables
+        link: reference/environment
+      - title: Intercepts
+        link: reference/intercepts/
+        items:
+          - title: Manually injecting the Traffic Agent
+            link: reference/intercepts/manual-agent
+      - title: Volume mounts
+        link: reference/volume
+      - title: RESTful API service
+        link: reference/restapi
+      - title: DNS resolution
+        link: reference/dns
+      - title: RBAC
+        link: reference/rbac
+      - title: Telepresence and VPNs
+        link: reference/vpn
+      - title: Networking through Virtual Network Interface
+        link: reference/tun-device
+      - title: Connection Routing
+        link: reference/routing
+      - title: Using Telepresence with Linkerd
+        link: reference/linkerd
+  - title: FAQs
+    link: faqs
+  - title: Troubleshooting
+    link: troubleshooting
+  - title: Community
+    link: community
+  - title: Release Notes
+    link: release-notes
+  - title: Licenses
+    link: licenses
diff --git a/docs/telepresence/pre-release/faqs.md b/docs/telepresence/pre-release/faqs.md
new file mode 100644
index 000000000..fed7f066c
--- /dev/null
+++ b/docs/telepresence/pre-release/faqs.md
@@ -0,0 +1,122 @@
+---
+description: "Learn how Telepresence helps with fast development and debugging in your Kubernetes cluster."
+---
+
+# FAQs
+
+**Why Telepresence?**
+
+Modern microservices-based applications that are deployed into Kubernetes often consist of tens or hundreds of services. The resource constraints and the number of these services mean that it is often difficult or even impossible to run all of this on a local development machine, which makes fast development and debugging very challenging. The fast [inner development loop](../concepts/devloop/) from previous software projects is often a distant memory for cloud developers.
+
+Telepresence enables you to connect your local development machine seamlessly to the cluster via a two-way proxying mechanism. This enables you to code locally and run the majority of your services within a remote Kubernetes cluster -- which in the cloud means you have access to effectively unlimited resources.
+
+Ultimately, this empowers you to develop services locally and still test integrations with dependent services or data stores running in the remote cluster.
+
+You can “intercept” any requests made to a target Kubernetes workload, and code and debug your associated service locally using your favourite local IDE and in-process debugger. You can test your integrations by making requests against the remote cluster’s ingress and watching how the resulting internal traffic is handled by your service running locally.
+
+By using the preview URL functionality you can share access to the application with additional developers or stakeholders via an entry point associated with your intercept and locally developed service. You can make changes that are visible in near real-time to all of the participants authenticated and viewing the preview URL. All other viewers of the application entrypoint will not see the results of your changes.
+
+**What operating systems does Telepresence work on?**
+
+Telepresence currently works natively on macOS (Intel and Apple silicon), Linux, and WSL 2. Starting with v2.4.0, we are also releasing a native Windows version of Telepresence that we are considering a Developer Preview.
+
+**What protocols can be intercepted by Telepresence?**
+
+All HTTP/1.1 and HTTP/2 protocols can be intercepted. This includes:
+
+- REST
+- JSON/XML over HTTP
+- gRPC
+- GraphQL
+
+If you need another protocol supported, please [drop us a line](https://www.getambassador.io/feedback/) to request it.
+
+**When using Telepresence to intercept a pod, are the Kubernetes cluster environment variables proxied to my local machine?**
+
+Yes, you can either set the pod's environment variables on your machine or write the variables to a file to use with Docker or another build process. Please see [the environment variable reference doc](../reference/environment) for more information.
+
+**When using Telepresence to intercept a pod, can the associated pod volume mounts also be mounted by my local machine?**
+
+Yes, please see [the volume mounts reference doc](../reference/volume/) for more information.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cluster-based services via their DNS name?**
+
+Yes. After you have successfully connected to your cluster via `telepresence connect` you will be able to access any service in your cluster via its namespace-qualified DNS name.
+
+This means you can curl endpoints directly, e.g. `curl <service name>.<namespace>:8080/mypath`.
+
+If you create an intercept for a service in a namespace, you will be able to use the service name directly.
+
+This means if you `telepresence intercept <service name> -n <namespace>`, you will be able to resolve just the `<service name>` DNS record.
+
+You can connect to databases or middleware running in the cluster, such as MySQL, PostgreSQL and RabbitMQ, via their service name.
+
+**When connected to a Kubernetes cluster via Telepresence, can I access cloud-based services and data stores via their DNS name?**
+
+You can connect to cloud-based data stores and services that are directly addressable within the cluster (e.g. when using an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) Service type), such as AWS RDS, Google pub-sub, or Azure SQL Database.
+
+**What types of ingress does Telepresence support for the preview URL functionality?**
+
+The preview URL functionality should work with most ingress configurations, including straightforward load balancer setups.
+
+During first use, Telepresence will make its best guess at this ingress configuration and prompt you to confirm or update it.
+
+**Why are my intercepts still reporting as active when they've been disconnected?**
+
+In certain cases, Telepresence might not have been able to communicate back with Ambassador Cloud to update the intercept's status. Worry not, they will be garbage collected after a period of time.
+
+**Why is my intercept associated with an "Unreported" cluster?**
+
+Intercepts tagged with "Unreported" clusters simply mean Ambassador Cloud was unable to associate a service instance with a known detailed service from an Edge Stack or API Gateway cluster. [Connecting your cluster to the Service Catalog](/docs/telepresence/latest/quick-start/) will properly match your services from multiple data sources.
+
+**Will Telepresence be able to intercept workloads running on a private cluster or cluster running within a virtual private cloud (VPC)?**
+
+Yes. The cluster has to have outbound access to the internet for the preview URLs to function correctly, but it doesn’t need to have a publicly accessible IP address.
+
+The cluster must also have access to an external registry in order to be able to download the traffic-manager and traffic-agent images that are deployed when connecting with Telepresence.
+
+**Why does running Telepresence require sudo access for the local daemon?**
+
+Telepresence creates and manages a virtual network device (a TUN network) to route outbound traffic to the cluster and to perform DNS resolution. That requires elevated access.
+
+**What components get installed in the cluster when running Telepresence?**
+
+A single `traffic-manager` service is deployed in the `ambassador` namespace within your cluster, and this manages resilient intercepts and connections between your local machine and the cluster.
+
+A Traffic Agent container is injected per pod that is being intercepted. The first time a workload is intercepted, all pods associated with this workload will be restarted with the Traffic Agent automatically injected.
+
+**How can I remove all of the Telepresence components installed within my cluster?**
+
+You can run the command `telepresence uninstall --everything` to remove the `traffic-manager` service installed in the cluster and the `traffic-agent` containers injected into each pod being intercepted.
+
+Running this command will also stop the local daemon.
+
+**What language is Telepresence written in?**
+
+All components of the Telepresence application, including the cluster-side components, are written in Go.
+
+**How does Telepresence connect and tunnel into the Kubernetes cluster?**
+
+The connection between your laptop and cluster is established by using
+the `kubectl port-forward` machinery (though without actually spawning
+a separate program) to establish a TCP connection to the Telepresence
+Traffic Manager in the cluster, and running Telepresence's custom VPN
+protocol over that TCP connection.
+
+
+
+**What identity providers are supported for authenticating to view a preview URL?**
+
+* GitHub
+* GitLab
+* Google
+
+More authentication mechanisms and identity provider support will be added soon. Please [let us know](https://www.getambassador.io/feedback/) which providers are the most important to you and your team in order for us to prioritize those.
+
+**Is Telepresence open source?**
+
+Yes, it is! You can find its source code on [GitHub](https://github.com/telepresenceio/telepresence).
+
+**How do I share my feedback on Telepresence?**
+
+Your feedback is always appreciated and helps us build a product that provides as much value as possible for our community. You can chat with us directly on our [feedback page](https://www.getambassador.io/feedback/), or you can [join our Slack channel](http://a8r.io/slack) to share your thoughts.
diff --git a/docs/telepresence/pre-release/howtos/intercepts.md b/docs/telepresence/pre-release/howtos/intercepts.md
new file mode 100644
index 000000000..87bd9f92b
--- /dev/null
+++ b/docs/telepresence/pre-release/howtos/intercepts.md
@@ -0,0 +1,108 @@
+---
+description: "Start using Telepresence in your own environment. Follow these steps to intercept your service in your cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import Platform from '@src/components/Platform';
+import QSCards from '../quick-start/qs-cards'
+
+# Intercept a service in your own environment
+
+Telepresence enables you to create intercepts to a target Kubernetes workload. Once you have created an intercept, you can code and debug your associated service locally.
+
+For a detailed walk-through on creating intercepts using our sample app, follow the [quick start guide](../../quick-start/demo-node/).
+
+
+## Prerequisites
+
+Before you begin, you need to have [Telepresence installed](../../install/), and either the Kubernetes command-line tool, [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/), or the OpenShift Container Platform command-line interface, [`oc`](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html#cli-installing-cli_cli-developer-commands). This document uses kubectl in all example commands. OpenShift users can substitute [oc commands](https://docs.openshift.com/container-platform/4.1/cli_reference/developer-cli-commands.html) instead.
+
+This guide assumes you have a Kubernetes deployment and service accessible publicly by an ingress controller, and that you can run a copy of that service on your laptop.
+
+
+## Intercept your service with a global intercept
+
+With Telepresence, you can create [global intercepts](../../concepts/intercepts/?intercept=global) that intercept all traffic going to a service in your cluster and route it to your local environment instead.
+
+1. Connect to your cluster with `telepresence connect` and connect to the Kubernetes API server:
+
+   ```console
+   $ curl -ik https://kubernetes.default
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+
+   The 401 response is expected when you first connect.
+
+
+   You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
+
+   If you have difficulties connecting, make sure you are using Telepresence 2.0.3 or a later version. Check your version by entering `telepresence version` and [upgrade if needed](../../install/upgrade/).
+
+2. Enter `telepresence list` and make sure the service you want to intercept is listed. For example:
+
+   ```console
+   $ telepresence list
+   ...
+   example-service: ready to intercept (traffic-agent not yet installed)
+   ...
+   ```
+
+3. Get the name of the port you want to intercept on your service:
+   `kubectl get service <service name> --output yaml`.
+
+   For example:
+
+   ```console
+   $ kubectl get service example-service --output yaml
+   ...
+     ports:
+     - name: http
+       port: 80
+       protocol: TCP
+       targetPort: http
+   ...
+   ```
+
+4. Intercept all traffic going to the service in your cluster:
+   `telepresence intercept <service name> --port [<local port>][:<remote port>] --env-file <path to env file>`.
+   * For `--port`: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * For `--env-file`: specify a file path for Telepresence to write the environment variables that are set in the pod.
+   The example below shows Telepresence intercepting traffic going to service `example-service`. Requests that reach the service on port `http` in the cluster now get routed to port `8080` on the workstation, and the environment variables of the service are written to `~/example-service-intercept.env`.
+   ```console
+   $ telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
+   Using Deployment example-service
+   intercepted
+       Intercept name: example-service
+       State         : ACTIVE
+       Workload kind : Deployment
+       Destination   : 127.0.0.1:8080
+       Intercepting  : all TCP connections
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   The following are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Query the environment in which you intercepted a service and verify that your local instance is being invoked.
+   All the traffic previously routed to your Kubernetes Service is now routed to your local environment.
+
+You can now:
+- Make changes on the fly and see them reflected when interacting with
+  your Kubernetes environment.
+- Query services only exposed in your cluster's network.
+- Set breakpoints in your IDE to investigate bugs.
+
+
+
+  **Didn't work?** Make sure the port you're listening on matches the one you specified when you created your intercept.
+
+
diff --git a/docs/telepresence/pre-release/howtos/outbound.md b/docs/telepresence/pre-release/howtos/outbound.md
new file mode 100644
index 000000000..bd3c2b4c7
--- /dev/null
+++ b/docs/telepresence/pre-release/howtos/outbound.md
@@ -0,0 +1,98 @@
+---
+description: "Telepresence can connect to your Kubernetes cluster, letting you access cluster services as if your laptop was another pod in the cluster."
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Proxy outbound traffic to my cluster
+
+While preview URLs are a powerful feature, Telepresence offers other options for proxying traffic between your laptop and the cluster.
+This section describes how to proxy outbound traffic and control outbound connectivity to your cluster.
+
+ This guide assumes that you have the quick start sample web app running in your cluster to test accessing the web-app service. You can substitute this service for any other service you are running.
+
+## Proxying outbound traffic
+
+Connecting to the cluster instead of running an intercept allows you to access cluster workloads as if your laptop was another pod in the cluster. This enables you to access other Kubernetes services using `<service name>.<namespace>`. A service running on your laptop can interact with other services on the cluster by name.
+
+When you connect to your cluster, the background daemon on your machine runs and installs the [Traffic Manager deployment](../../reference/architecture/) into the cluster of your current `kubectl` context. The Traffic Manager handles the service proxying.
+
+1. Run `telepresence connect` and enter your password to run the daemon.
+
+   ```console
+   $ telepresence connect
+   Launching Telepresence Daemon v2.4.10 (api v3)
+   Need root privileges to run "/usr/local/bin/telepresence daemon-foreground /home/<user>/.cache/telepresence/logs '' ''"
+   [sudo] password:
+   Launching Telepresence Root Daemon
+   Launching Telepresence User Daemon
+   Connected to context default (https://<cluster public IP>)
+   ```
+
+Check this [FAQ entry](../../troubleshooting#daemon-service-did-not-start) in case the daemon does not start.
+
+2. Run `telepresence status` to confirm connection to your cluster and that it is proxying traffic.
+
+   ```console
+   $ telepresence status
+   Root Daemon: Running
+     Version     : v2.4.10 (api 3)
+     DNS         :
+       Remote IP       : <cluster IP>
+       Exclude suffixes: [.arpa .com .io .net .org .ru]
+       Include suffixes: []
+       Timeout         : 4s
+     Also Proxy : (0 subnets)
+     Never Proxy: (1 subnets)
+   User Daemon: Running
+     Version           : v2.4.10 (api 3)
+     Ambassador Cloud  : Logged out
+     Status            : Connected
+     Kubernetes server : <kubernetes server address>
+     Kubernetes context: default
+     Telepresence proxy: ON (networking to the cluster is enabled)
+     Intercepts        : 0 total
+   ```
+
+3. Access your service by name with `curl web-app.emojivoto:80`. Telepresence routes the request to the cluster, as if your laptop is actually running in the cluster.
+
+   ```console
+   $ curl web-app.emojivoto:80
+   <!DOCTYPE html>
+   <html>
+   <head>
+   <meta charset="UTF-8">
+   <title>Emoji Vote</title>
+   ...
+   ```
+
+If you terminate the client with `telepresence quit` and try to access the service again, it will fail because traffic is no longer proxied from your laptop.
+
+   ```console
+   $ telepresence quit
+   Telepresence Root Daemon quitting... done
+   Telepresence User Daemon quitting... done
+   ```
+
+When using Telepresence in this way, you need to access services with the namespace-qualified DNS name (<service name>.<namespace>) before you start an intercept. After you start an intercept, only <service name> is required. Read more about these differences in the DNS resolution reference guide.
+
+## Controlling outbound connectivity
+
+By default, Telepresence provides access to all Services found in all namespaces in the connected cluster. This can lead to problems if the user does not have RBAC access permissions to all namespaces. You can use the `--mapped-namespaces <comma separated list of namespaces>` flag to control which namespaces are accessible.
+
+When you use the `--mapped-namespaces` flag, you need to include all namespaces containing services you want to access, as well as all namespaces that contain services related to the intercept, as in the sketch below.
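+
+For example, to make only two namespaces accessible (the namespace names here are placeholders for your own), you might connect with:
+
+```console
+$ telepresence connect --mapped-namespaces dev,staging
+```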
+
+### Using local-only intercepts
+
+When you develop on isolated apps or in a virtualized container, you don't need an outbound connection. However, when developing a service that isn't yet deployed to the cluster, you may need outbound connectivity to the namespace where that service will be deployed, so that it can access other services in that namespace without using qualified names. A local-only intercept will not cause outbound connections to originate from the intercepted namespace. The reason for this is to establish a correct origin: the connection must be routed to a `traffic-agent` of an intercepted pod. For local-only intercepts, the outbound connections originate from the `traffic-manager`.
+
+To control outbound connectivity to specific namespaces, add the `--local-only` flag:
+
+   ```console
+   $ telepresence intercept <name> --namespace <namespace> --local-only
+   ```
+The resources in the given namespace can now be accessed using unqualified names as long as the intercept is active.
+You can deactivate the intercept with `telepresence leave <name>`. This removes unqualified name access.
+
+### Proxy outbound connectivity for laptops
+
+To specify additional hosts or subnets that should be resolved inside of the cluster, see [AlsoProxy](../../reference/config/#alsoproxy) for more details.
\ No newline at end of file
diff --git a/docs/telepresence/pre-release/howtos/preview-urls.md b/docs/telepresence/pre-release/howtos/preview-urls.md
new file mode 100644
index 000000000..49c43ebf2
--- /dev/null
+++ b/docs/telepresence/pre-release/howtos/preview-urls.md
@@ -0,0 +1,127 @@
+---
+description: "Telepresence uses Preview URLs to help you collaborate on developing Kubernetes services with teammates."
+indexable: false
+---
+
+import Alert from '@material-ui/lab/Alert';
+
+# Share development environments with preview URLs
+
+Telepresence can generate sharable preview URLs. This enables you to work on a copy of your service locally, and share that environment with a teammate for pair programming. While using preview URLs, Telepresence will route only the requests coming from that preview URL to your local environment. Requests to the ingress are routed to your cluster as usual.
+
+Preview URLs are protected behind authentication through Ambassador Cloud, and access to the URL is only available to users in your organization. You can also make the URL publicly accessible for sharing with outside collaborators.
+
+## Creating a preview URL
+
+1. Connect to Telepresence and enter the `telepresence list` command in your CLI to verify the service is listed.
+Telepresence only supports Deployments, ReplicaSets, and StatefulSet workloads with a label that matches a Service.
+
+2. Enter `telepresence login` to launch Ambassador Cloud in your browser.
+
+   If you are in an environment where Telepresence cannot launch a local browser, pass the [`--apikey` flag to `telepresence login`](../../reference/client/login/).
+
+3. Start the intercept with `telepresence intercept <service name> --port <port> --env-file <path to env file>` and adjust the flags as follows:
+   * **port:** specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
+   * **env-file:** specify a file path for Telepresence to write the environment variables that are set in the pod.
+
+4. Answer the question prompts.
+   * **What's your ingress' IP address?**: the IP address or DNS name at which your ingress can be reached; this is usually a `service.namespace` DNS name.
+   * **What's your ingress' TCP port number?**: the port your ingress controller is listening on. This is often 443 for TLS ports, and 80 for non-TLS ports.
+   * **Does that TCP port on your ingress use TLS (as opposed to cleartext)?**: whether the ingress controller is expecting TLS communication on the specified port.
+   * **If required by your ingress, specify a different hostname (TLS-SNI, HTTP "Host" header) to be used in requests.**: if your ingress controller routes traffic based on a domain name (often using the `Host` HTTP header), enter that value here.
+
+   The example below shows a preview URL for `example-service`, which listens on port 8080. The preview URL for ingress will use the `ambassador` service in the `ambassador` namespace on port `443` using TLS encryption and the hostname `dev-environment.edgestack.me`:
+
+   ```console
+$ telepresence intercept example-service --port 8080 --env-file ~/ex-svc.env
+
+   To create a preview URL, telepresence needs to know how cluster
+   ingress works for this service. Please Confirm the ingress to use.
+
+   1/4: What's your ingress' IP address?
+        You may use an IP address or a DNS name (this is usually a
+        "service.namespace" DNS name).
+
+          [default: -]: ambassador.ambassador
+
+   2/4: What's your ingress' TCP port number?
+
+          [default: -]: 443
+
+   3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)?
+
+          [default: n]: y
+
+   4/4: If required by your ingress, specify a different hostname
+        (TLS-SNI, HTTP "Host" header) to be used in requests.
+
+          [default: ambassador.ambassador]: dev-environment.edgestack.me
+
+   Using deployment example-service
+   intercepted
+       Intercept name         : example-service
+       State                  : ACTIVE
+       Destination            : 127.0.0.1:8080
+       Service Port Identifier: http
+       Intercepting           : HTTP requests that match all of:
+         header("x-telepresence-intercept-id") ~= regexp(":example-service")
+       Preview URL            : https://.preview.edgestack.me
+       Layer 5 Hostname       : dev-environment.edgestack.me
+   ```
+
+5. Start your local environment using the environment variables retrieved in the previous step.
+
+   Here are some examples of how to pass the environment variables to your local process:
+   * **Docker:** enter `docker run` and provide the path to the file using the `--env-file` argument. For more information about Docker run commands, see the [Docker command-line reference documentation](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).
+   * **Visual Studio Code:** specify the path to the environment variables file in the `envFile` field of your configuration.
+   * **JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.):** use the [EnvFile plugin](https://plugins.jetbrains.com/plugin/7861-envfile).
+
+6. Go to the Preview URL generated from the intercept.
+Traffic is now intercepted from your preview URL without impacting other traffic from your Ingress.
+
+   
+   Didn't work? It might be because you have services in between your ingress controller and the service you are intercepting that do not propagate the x-telepresence-intercept-id HTTP Header. Read more on context propagation.
+   
+
+7. Make a request on the URL you would usually query for that environment. The request should not be routed to your laptop.
+
+   Normal traffic coming into the cluster through the Ingress (i.e. not coming from the preview URL) is routed to services in the cluster as usual.
+
+8. Share with a teammate.
+
+   You can collaborate with teammates by sending your preview URL to them. Once your teammate logs in, they must select the same identity provider and org that you are using. This authorizes their access to the preview URL. When they visit the preview URL, they see the intercepted service running on your laptop.
+   You can now collaborate with a teammate to debug the service on the shared intercept URL without impacting the production environment.
+
+## Change access restrictions
+
+To collaborate with someone outside of your identity provider's organization, you must make your preview URL publicly accessible.
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Select the service you want to share and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Make Publicly Accessible**.
+
+Now anyone with the link will have access to the preview URL. When they visit the preview URL, they will see the intercepted service running on your local environment.
+
+To disable sharing the preview URL publicly, click **Require Authentication** in the dashboard.
+
+## Remove a preview URL from an Intercept
+
+To delete a preview URL and remove all access to the intercepted service:
+
+1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+2. Click on the service whose preview URL you want to remove and open the service details page.
+3. Click the **Intercepts** tab and expand the preview URL details.
+4. Click **Remove Preview**.
+
+Alternatively, you can remove a preview URL with the following command:
+`telepresence preview remove `
diff --git a/docs/telepresence/pre-release/howtos/request.md b/docs/telepresence/pre-release/howtos/request.md
new file mode 100644
index 000000000..1109c68df
--- /dev/null
+++ b/docs/telepresence/pre-release/howtos/request.md
@@ -0,0 +1,12 @@
+import Alert from '@material-ui/lab/Alert';
+
+# Send requests to an intercepted service
+
+Ambassador Cloud can inform you about the required request parameters to reach an intercepted service.
+
+ 1. Go to [Ambassador Cloud](https://app.getambassador.io/cloud/).
+ 2. Navigate to the Intercepts page of the desired service.
+ 3. Click the **Query** button to open the pop-up menu.
+ 4. Toggle between **CURL**, **Headers**, and **Browse**.
+
+The pre-built queries and header information will help you get started querying the intercepted service and managing header propagation.
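+
+As a rough illustration, the pre-built **CURL** query typically amounts to an ordinary request that carries the intercept header so it is routed to your intercepted copy (a hypothetical sketch — the service name, namespace, port, and header value below are placeholders; copy the exact command shown in Ambassador Cloud):
+
+```console
+$ curl -H 'x-telepresence-intercept-id: <intercept-id>:example-service' http://example-service.<namespace>:8080/
+```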
diff --git a/docs/telepresence/pre-release/images/container-inner-dev-loop.png b/docs/telepresence/pre-release/images/container-inner-dev-loop.png new file mode 100644 index 000000000..06586cd6e Binary files /dev/null and b/docs/telepresence/pre-release/images/container-inner-dev-loop.png differ diff --git a/docs/telepresence/pre-release/images/github-login.png b/docs/telepresence/pre-release/images/github-login.png new file mode 100644 index 000000000..cfd4d4bf1 Binary files /dev/null and b/docs/telepresence/pre-release/images/github-login.png differ diff --git a/docs/telepresence/pre-release/images/logo.png b/docs/telepresence/pre-release/images/logo.png new file mode 100644 index 000000000..701f63ba8 Binary files /dev/null and b/docs/telepresence/pre-release/images/logo.png differ diff --git a/docs/telepresence/pre-release/images/split-tunnel.png b/docs/telepresence/pre-release/images/split-tunnel.png new file mode 100644 index 000000000..5bf30378e Binary files /dev/null and b/docs/telepresence/pre-release/images/split-tunnel.png differ diff --git a/docs/telepresence/pre-release/images/trad-inner-dev-loop.png b/docs/telepresence/pre-release/images/trad-inner-dev-loop.png new file mode 100644 index 000000000..618b674f8 Binary files /dev/null and b/docs/telepresence/pre-release/images/trad-inner-dev-loop.png differ diff --git a/docs/telepresence/pre-release/images/tunnelblick.png b/docs/telepresence/pre-release/images/tunnelblick.png new file mode 100644 index 000000000..8944d445a Binary files /dev/null and b/docs/telepresence/pre-release/images/tunnelblick.png differ diff --git a/docs/telepresence/pre-release/images/vpn-dns.png b/docs/telepresence/pre-release/images/vpn-dns.png new file mode 100644 index 000000000..eed535c45 Binary files /dev/null and b/docs/telepresence/pre-release/images/vpn-dns.png differ diff --git a/docs/telepresence/pre-release/install/helm.md b/docs/telepresence/pre-release/install/helm.md new file mode 100644 index 000000000..688d2f20a --- /dev/null +++ b/docs/telepresence/pre-release/install/helm.md @@ -0,0 +1,181 @@ +# Install with Helm + +[Helm](https://helm.sh) is a package manager for Kubernetes that automates the release and management of software on Kubernetes. The Telepresence Traffic Manager can be installed via a Helm chart with a few simple steps. + +**Note** that installing the Traffic Manager through Helm will prevent `telepresence connect` from ever upgrading it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see the steps [below](#upgrading-the-traffic-manager) + +For more details on what the Helm chart installs and what can be configured, see the Helm chart [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). + +## Before you begin + +The Telepresence Helm chart is hosted by Ambassador Labs and published at `https://app.getambassador.io`. + +Start by adding this repo to your Helm client with the following command: + +```shell +helm repo add datawire https://app.getambassador.io +helm repo update +``` + +## Install with Helm + +When you run the Helm chart, it installs all the components required for the Telepresence Traffic Manager. + +1. If you are installing the Telepresence Traffic Manager **for the first time on your cluster**, create the `ambassador` namespace in your cluster: + + ```shell + kubectl create namespace ambassador + ``` + +2. 
Install the Telepresence Traffic Manager with the following command:
+
+   ```shell
+   helm install traffic-manager --namespace ambassador datawire/telepresence
+   ```
+
+### Install into custom namespace
+
+The Helm chart supports being installed into any namespace, not necessarily `ambassador`. Simply pass a different `--namespace` argument to `helm install`.
+For example, if you wanted to deploy the Traffic Manager to the `staging` namespace:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence
+```
+
+Note that users of Telepresence will need to configure their kubeconfig to find this installation of the Traffic Manager:
+
+```yaml
+apiVersion: v1
+clusters:
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+    - name: telepresence.io
+      extension:
+        manager:
+          namespace: staging
+  name: example-cluster
+```
+
+See [the kubeconfig documentation](../../reference/config#manager) for more information.
+
+### Upgrading the Traffic Manager
+
+Versions of the Traffic Manager Helm chart are coupled to the versions of the Telepresence CLI that they are intended for.
+Thus, for example, if you wish to use Telepresence `v2.4.0`, you'll need to install version `v2.4.0` of the Traffic Manager Helm chart.
+
+Upgrading the Traffic Manager is the same as upgrading any other Helm chart; for example, if you installed the release into the `ambassador` namespace and you just wish to upgrade it to the latest version without changing any configuration values:
+
+```shell
+helm repo up
+helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador
+```
+
+If you want to upgrade the Traffic Manager to a specific version, add a `--version` flag with the version number to the upgrade command, for example: `helm upgrade traffic-manager datawire/telepresence --reuse-values --namespace ambassador --version v2.4.1`.
+
+## RBAC
+
+### Installing a namespace-scoped traffic manager
+
+You might not want the Traffic Manager to have permissions across the entire Kubernetes cluster, or you might want to be able to install multiple traffic managers per cluster (for example, to separate them by environment).
+In these cases, the Traffic Manager supports being installed with a namespace scope, allowing cluster administrators to limit the reach of a traffic manager's permissions.
+
+For example, suppose you want a Traffic Manager that only works on namespaces `dev` and `staging`.
+To do this, create a `values.yaml` like the following:
+
+```yaml
+managerRbac:
+  create: true
+  namespaced: true
+  namespaces:
+  - dev
+  - staging
+```
+
+This can then be installed via:
+
+```bash
+helm install traffic-manager --namespace staging datawire/telepresence -f ./values.yaml
+```
+
+**NOTE** Do not install namespace-scoped Traffic Managers and a global Traffic Manager in the same cluster, as this could have unexpected effects.
+
+#### Namespace collision detection
+
+The Telepresence Helm chart will try to prevent namespace-scoped Traffic Managers from managing the same namespaces.
+It does this by creating a ConfigMap, called `traffic-manager-claim`, in each namespace that a given install manages.
+
+So, for example, suppose you install one Traffic Manager to manage namespaces `dev` and `staging`, as:
+
+```bash
+helm install traffic-manager --namespace dev datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={dev,staging}'
+```
+
+You might then attempt to install another Traffic Manager to manage namespaces `staging` and `prod`:
+
+```bash
+helm install traffic-manager --namespace prod datawire/telepresence --set 'managerRbac.namespaced=true' --set 'managerRbac.namespaces={staging,prod}'
+```
+
+This would fail with an error:
+
+```
+Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "traffic-manager-claim" in namespace "staging" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "prod": current value is "dev"
+```
+
+To resolve this error, remove the overlap by dropping `staging` from either the first install or the second.
+
+#### Namespace-scoped user permissions
+
+Optionally, you can also configure user RBAC to be scoped to the same namespaces as the manager itself.
+You might want to do this if you don't give your users permissions throughout the cluster and want to make sure they only have the minimum set required to run Telepresence commands on certain namespaces.
+
+Continuing with the `dev` and `staging` example from the previous section, simply add the following to `values.yaml` (make sure you set the `subjects`!):
+
+```yaml
+clientRbac:
+  create: true
+
+  # These are the users or groups to which the user RBAC will be bound.
+  # This MUST be set.
+  subjects: {}
+  # - kind: User
+  #   name: jane
+  #   apiGroup: rbac.authorization.k8s.io
+
+  namespaced: true
+
+  namespaces:
+  - dev
+  - staging
+```
+
+#### Namespace-scoped webhook
+
+If you wish to use the traffic-manager's [mutating webhook](../../reference/cluster-config#mutating-webhook) with a namespace-scoped traffic manager, you will have to ensure that each namespace has an `app.kubernetes.io/name` label that is identical to its name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: staging
+  labels:
+    app.kubernetes.io/name: staging
+```
+
+You can also use `kubectl label` to add the label to an existing namespace, e.g.:
+
+```shell
+kubectl label namespace staging app.kubernetes.io/name=staging
+```
+
+This is required because the mutating webhook will use the name label to find namespaces to operate on.
+
+**NOTE** This labelling happens automatically in Kubernetes >= 1.21.
+
+### Installing RBAC only
+
+The Telepresence Traffic Manager does require some [RBAC](../../reference/rbac/) for the traffic-manager deployment itself, as well as for users.
+To make it easier for operators to introspect and manage RBAC separately, you can use `rbac.only=true` to
+only create the RBAC-related objects.
+Additionally, you can use `clientRbac.create=true` and `managerRbac.create=true` to toggle which subset(s) of RBAC objects you wish to create.
diff --git a/docs/telepresence/pre-release/install/index.md b/docs/telepresence/pre-release/install/index.md
new file mode 100644
index 000000000..e103afa86
--- /dev/null
+++ b/docs/telepresence/pre-release/install/index.md
@@ -0,0 +1,153 @@
+import Platform from '@src/components/Platform';
+
+# Install
+
+Install Telepresence by running the commands below for your OS.
If you are not the administrator of your cluster, you will need [administrative RBAC permissions](../reference/rbac#administrating-telepresence) to install and use Telepresence in your cluster. + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## What's Next? + +Follow one of our [quick start guides](../quick-start/) to start using Telepresence, either with our sample app or in your own environment. + +## Installing nightly versions of Telepresence + +We build and publish the contents of the default branch, [release/v2](https://github.com/telepresenceio/telepresence), of Telepresence +nightly, Monday through Friday, for macOS (Intel and Apple silicon), Linux, and Windows. + +The tags are formatted like so: `vX.Y.Z-nightly-$gitShortHash`. + +`vX.Y.Z` is the most recent release of Telepresence with the patch version (Z) bumped one higher. +For example, if our last release was 2.3.4, nightly builds would start with v2.3.5, until a new +version of Telepresence is released. + +`$gitShortHash` will be the short hash of the git commit of the build. + +Use these URLs to download the most recent nightly build. 
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/nightly/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/nightly/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/nightly/telepresence.zip
+```
+
+
+
+## Installing older versions of Telepresence
+
+Use these URLs to download an older version for your OS (including older nightly builds), replacing `x.y.z` with the version you want.
+
+
+
+```shell
+# Intel Macs
+https://app.getambassador.io/download/tel2/darwin/amd64/x.y.z/telepresence
+
+# Apple silicon Macs
+https://app.getambassador.io/download/tel2/darwin/arm64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/linux/amd64/x.y.z/telepresence
+```
+
+
+
+```
+https://app.getambassador.io/download/tel2/windows/amd64/x.y.z/telepresence
+```
+
+
diff --git a/docs/telepresence/pre-release/install/migrate-from-legacy.md b/docs/telepresence/pre-release/install/migrate-from-legacy.md
new file mode 100644
index 000000000..61701c9a9
--- /dev/null
+++ b/docs/telepresence/pre-release/install/migrate-from-legacy.md
@@ -0,0 +1,109 @@
+# Migrate from legacy Telepresence
+
+Telepresence (the current major version, formerly referred to as Telepresence 2) has different mechanics and requires a different mental model from [legacy Telepresence 1](https://www.telepresence.io/docs/v1/) when working with local instances of your services.
+
+In legacy Telepresence, a pod running a service was swapped with a pod running the Telepresence proxy. This proxy received traffic intended for the service and sent the traffic onward to the target workstation or laptop. We called this mechanism "swap-deployment".
+
+In practice, this mechanism, while simple in concept, had some challenges. Losing the connection to the cluster would leave the deployment in an inconsistent state, and swapping the pods took time.
+
+Telepresence 2 introduces a [new
+architecture](../../reference/architecture/) built around "intercepts"
+that addresses these problems. With the new Telepresence, a sidecar
+proxy ("traffic agent") is injected onto the pod. The proxy then
+intercepts traffic intended for the pod and routes it to the
+workstation/laptop. The advantage of this approach is that the
+service is running at all times, and no swapping is used. By using
+the proxy approach, we can also do personal intercepts, where rather
+than re-routing all traffic to the laptop/workstation, only the
+traffic designated as belonging to that user is re-routed, so that
+multiple developers can intercept the same service at the same time
+without disrupting normal operation or each other.
+
+Please see [the Telepresence quick start](../../quick-start/) for an introduction to running intercepts and [the intercept reference doc](../../reference/intercepts/) for a deep dive into intercepts.
+
+## Using legacy Telepresence commands
+
+First, please ensure you've [installed Telepresence](../).
+
+Telepresence is able to translate common legacy Telepresence commands into native Telepresence commands.
+So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used
+to with the Telepresence binary.
+
+For example, say you have a deployment (`myserver`) that you want to swap (the equivalent of an intercept in
+Telepresence) with a Python server. You could run the following command:
+
+```
+$ telepresence --swap-deployment myserver --expose 9090 --run python3 -m http.server 9090
+< help text >
+
+Legacy telepresence command used
+Command roughly translates to the following in Telepresence:
+telepresence intercept myserver --port 9090 -- python3 -m http.server 9090
+running...
+Connecting to traffic manager...
+Connected to context 
+Using Deployment myserver
+intercepted
+    Intercept name : myserver
+    State          : ACTIVE
+    Workload kind  : Deployment
+    Destination    : 127.0.0.1:9090
+    Intercepting   : all TCP connections
+Serving HTTP on :: port 9090 (http://[::]:9090/) ...
+```
+
+Telepresence will let you know what the legacy Telepresence command has mapped to and automatically
+run it. So you can get started with Telepresence today using the commands you are already used to,
+and it will help you learn the Telepresence syntax.
+
+### Legacy command mapping
+
+Below is the mapping of legacy Telepresence commands to Telepresence commands (where they exist and
+are supported).
+
+| Legacy Telepresence Command                    | Telepresence Command                      |
+|------------------------------------------------|-------------------------------------------|
+| --swap-deployment $workload                    | intercept $workload                       |
+| --expose localPort[:remotePort]                | intercept --port localPort[:remotePort]   |
+| --swap-deployment $workload --run-shell        | intercept $workload -- bash               |
+| --swap-deployment $workload --run $cmd         | intercept $workload -- $cmd               |
+| --swap-deployment $workload --docker-run $cmd  | intercept $workload --docker-run -- $cmd  |
+| --run-shell                                    | connect -- bash                           |
+| --run $cmd                                     | connect -- $cmd                           |
+| --env-file,--env-json                          | --env-file, --env-json (haven't changed)  |
+| --context,--namespace                          | --context, --namespace (haven't changed)  |
+| --mount,--docker-mount                         | --mount, --docker-mount (haven't changed) |
+
+### Legacy Telepresence command limitations
+
+Some of the commands and flags from legacy Telepresence either don't apply to Telepresence or
+aren't yet supported. For some well-known commands, such as --method,
+Telepresence will include output letting you know that the flag has gone away. For flags that
+Telepresence can't translate yet, it will let you know that the flag is "unsupported".
+
+If Telepresence is missing any flags or functionality that is integral to your usage, please let us know
+by [creating an issue](https://github.com/telepresenceio/telepresence/issues) and/or talking to us on our [Slack channel](http://a8r.io/slack)!
+
+## Telepresence changes
+
+Telepresence installs a Traffic Manager in the cluster and Traffic Agents alongside workloads when performing intercepts (including
+with `--swap-deployment`) and leaves them in place. If you use `--swap-deployment`, the intercept ends once the process
+dies, but the agent will remain.
There's no harm in leaving the agents running alongside your service, but when you
+want to remove them from the cluster, the following Telepresence command will help:
+```
+$ telepresence uninstall --help
+Uninstall telepresence agents and manager
+
+Usage:
+  telepresence uninstall [flags] { --agent  |--all-agents | --everything }
+
+Flags:
+  -d, --agent              uninstall intercept agent on specific deployments
+  -a, --all-agents         uninstall intercept agent on all deployments
+  -e, --everything         uninstall agents and the traffic manager
+  -h, --help               help for uninstall
+  -n, --namespace string   If present, the namespace scope for this CLI request
+```
+
+Since the new architecture deploys a Traffic Manager into the `ambassador` namespace, please take a look at
+our [RBAC guide](../../reference/rbac) if you run into any issues with permissions while upgrading to Telepresence.
diff --git a/docs/telepresence/pre-release/install/upgrade.md b/docs/telepresence/pre-release/install/upgrade.md
new file mode 100644
index 000000000..c0678450d
--- /dev/null
+++ b/docs/telepresence/pre-release/install/upgrade.md
@@ -0,0 +1,81 @@
+---
+description: "How to upgrade your installation of Telepresence and install previous versions."
+---
+
+import Platform from '@src/components/Platform';
+
+# Upgrade Process
+The Telepresence CLI will periodically check for new versions and notify you when an upgrade is available. Running the same commands used for installation will replace your current binary with the latest version.
+
+
+
+```shell
+# Intel Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+
+# Apple silicon Macs
+
+# Upgrade via brew:
+brew upgrade datawire/blackbird/telepresence-arm64
+
+# OR upgrade manually:
+# 1. Download the latest binary (~60 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```shell
+# 1. Download the latest binary (~50 MB):
+sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence
+
+# 2. Make the binary executable:
+sudo chmod a+x /usr/local/bin/telepresence
+```
+
+
+
+```powershell
+# To install Telepresence, run the following commands
+# from PowerShell as Administrator.
+
+# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB):
+Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip
+
+# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file:
+Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence
+Remove-Item 'telepresence.zip'
+cd telepresenceInstaller/telepresence
+
+# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to
+# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path
+powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';"
+
+# 4. Remove the unzipped directory:
+cd ../..
+Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force
+
+# 5. Telepresence is now installed and you can use telepresence commands in PowerShell.
+```
+
+
+
+After upgrading your CLI, you must stop any live Telepresence processes by issuing `telepresence quit`, then upgrade the Traffic Manager by running `telepresence connect`.
+
+**Note** that if the Traffic Manager has been installed via Helm, `telepresence connect` will never upgrade it. If you wish to upgrade a Traffic Manager that was installed via the Helm chart, please see [the Helm documentation](../helm#upgrading-the-traffic-manager).
diff --git a/docs/telepresence/pre-release/licenses.md b/docs/telepresence/pre-release/licenses.md
new file mode 100644
index 000000000..7e4bd7adf
--- /dev/null
+++ b/docs/telepresence/pre-release/licenses.md
@@ -0,0 +1,18 @@
+Telepresence CLI incorporates Free and Open Source software under the following licenses:
+
+* [2-clause BSD license](https://opensource.org/licenses/BSD-2-Clause)
+* [3-clause BSD license](https://opensource.org/licenses/BSD-3-Clause)
+* [Apache License 2.0](https://opensource.org/licenses/Apache-2.0)
+* [ISC license](https://opensource.org/licenses/ISC)
+* [MIT license](https://opensource.org/licenses/MIT)
+* [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0)
+
+
+Smart Agent incorporates Free and Open Source software under the following licenses:
+
+* [2-clause BSD license](https://opensource.org/licenses/BSD-2-Clause)
+* [3-clause BSD license](https://opensource.org/licenses/BSD-3-Clause)
+* [Apache License 2.0](https://opensource.org/licenses/Apache-2.0)
+* [ISC license](https://opensource.org/licenses/ISC)
+* [MIT license](https://opensource.org/licenses/MIT)
+* [Mozilla Public License 2.0](https://opensource.org/licenses/MPL-2.0)
diff --git a/docs/telepresence/pre-release/quick-start/TelepresenceQuickStartLanding.js b/docs/telepresence/pre-release/quick-start/TelepresenceQuickStartLanding.js
new file mode 100644
index 000000000..3e87c3ad6
--- /dev/null
+++ b/docs/telepresence/pre-release/quick-start/TelepresenceQuickStartLanding.js
@@ -0,0 +1,129 @@
+import React from 'react';
+
+import Embed from '../../../../src/components/Embed';
+import Icon from '../../../../src/components/Icon';
+
+import './telepresence-quickstart-landing.less';
+
+/** @type React.FC> */
+const RightArrow = (props) => (
+  
+  
+);
+
+/** @type React.FC<{color: 'green'|'blue', withConnector: boolean}> */
+const Box = ({ children, color = 'blue', withConnector = false }) => (
+  <>
+    {withConnector && (
+      
+ +
+ )} +
{children}
+ +); + +const TelepresenceQuickStartLanding = () => ( +
+

+ Telepresence +

+

+ Explore the use cases of Telepresence with a free remote Kubernetes + cluster, or dive right in using your own. +

+ +
+
+
+

+ Use Our Free Demo Cluster +

+

+ See how Telepresence works without having to mess with your + production environments. +

+
+ +

6 minutes

+

Integration Testing

+

+ See how changes to a single service impact your entire application + without having to run your entire app locally. +

+ + GET STARTED{' '} + + +
+ +

5 minutes

+

Fast code changes

+

+ Make changes to your service locally and see the results instantly, + without waiting for containers to build. +

+ + GET STARTED{' '} + + +
+
+
+
+

+ Use Your Cluster +

+

+ Understand how Telepresence fits in to your Kubernetes development + workflow. +

+
+ +

10 minutes

+

Intercept your service in your cluster

+

+ Query services only exposed in your cluster's network. Make changes + and see them instantly in your K8s environment. +

+ + GET STARTED{' '} + + +
+
+
+ +
+

Watch the Demo

+
+
+

+ See Telepresence in action in our 3-minute demo + video that you can share with your teammates. +

+
    +
  • Instant feedback loops
  • +
  • Infinite-scale development environments
  • +
  • Access to your favorite local tools
  • +
  • Easy collaborative development with teammates
  • +
+
+
+ +
+
+
+
+); + +export default TelepresenceQuickStartLanding; diff --git a/docs/telepresence/pre-release/quick-start/demo-node.md b/docs/telepresence/pre-release/quick-start/demo-node.md new file mode 100644 index 000000000..288bbb0ca --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/demo-node.md @@ -0,0 +1,156 @@ +--- +description: "Claim a remote demo cluster and learn to use Telepresence to intercept services running in a Kubernetes Cluster, speeding up local development and debugging." +--- + +import {DemoClusterMetadata, ExpirationDate} from '../../../../../src/components/DemoClusterMetadata'; +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + +# Telepresence Quick Start + +
+

Contents

+ +* [1. Get a free remote cluster](#1-get-a-free-remote-cluster) +* [2. Try the Emojivoto application](#2-try-the-emojivoto-application) +* [3. Set up your local development environment](#3-set-up-your-local-development-environment) +* [4. Testing our fix](#4-testing-our-fix) +* [5. Preview URLs](#5-preview-urls) +* [6. How/Why does this all work](#6-howwhy-does-this-all-work) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +In this guide, we'll give you a hands-on tutorial with Telepresence. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js and Golang. We have a version in React if you prefer. + + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, we'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + + +
+ +## 2. Try the Emojivoto application + +The remote cluster is running the Emojivoto application, which consists of four services. Test out the application: + +1. Go to the and vote for some emojis. + + If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening. + + +2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work. We're going to use Telepresence shortly to fix this bug, as everyone should be able to vote for 🍩! + + + Congratulations! You've successfully accessed the Emojivoto application on your remote cluster. + + +## 3. Set up your local development environment + +We'll set up a development environment locally on your workstation. We'll then use Telepresence to connect this local development environment to the remote Kubernetes cluster. To save time, the development environment we'll use is pre-packaged as a Docker container. + +1. Run the Docker container locally, by running this command inside your local terminal: + + + + + + + + + + + + + + + + + + + + + +Make sure that ports 8080 and 8083 are free.
+If the Docker engine is not running, the command will fail and you will see docker: unknown server OS in your terminal. +
+
+2. The Docker container includes a copy of the Emojivoto application that fixes the bug. Visit the [leaderboard](http://localhost:8083/leaderboard) and notice how it is different from the leaderboard in your Kubernetes cluster.
+
+3. Vote for 🍩 on your local leaderboard, and you can see that the bug is fixed!
+
+
+ Congratulations! You have successfully set up a local development environment, and tested the fix locally.
+
+
+## 4. Testing our fix
+
+A common use case for Telepresence is to connect your local development environment to a remote cluster. This way, if your application is too big or complex to run locally, you can still develop locally. In this Quick Start, we're also going to show how Telepresence can be used for integration testing, by testing our fix against the services in the remote cluster.
+
+1. From your Docker container, create an intercept, which will tell Telepresence to send traffic to the service in your container instead of the service in the cluster:
+   `telepresence intercept web --port 8080`
+
+   When prompted for ingress configuration, all default values should be correct as displayed below.
+
+   
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works in the remote environment!
+
+
+## 5. Preview URLs
+
+Preview URLs enable you to safely share your development environment with anyone. For example, you may want your UX designer to take a quick look at what you're developing before you commit the code. Preview URLs enable this easy collaboration.
+
+1. If you access the Emojivoto application on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+2. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
+Now you're able to share the fix in your local environment with your team!
+
+
+ To get more information regarding Preview URLs and intercepts, visit the Developer Control Plane .
+
+
+
+## 6. How/Why does this all work?
+
+Telepresence works by deploying a two-way network proxy in a pod running in a Kubernetes cluster. This proxy can intercept traffic meant for the service and reroute it to a local copy, which is ready for further (local) development.
+
+Intercepts and preview URLs are functions of Telepresence that enable easy local development from a remote Kubernetes cluster and offer a preview environment for sharing and real-time collaboration.
+
+Telepresence also uses custom headers and header propagation for controllable intercepts and preview URLs. The headers facilitate the smart routing of requests either to live services in the cluster or to services running locally on a developer’s machine.
+
+Preview URLs, when created, generate an ingress request containing a custom header with a token (the context). Telepresence sends this token to Ambassador Cloud with other information about the preview. Visiting the preview URL directs the user to Ambassador Cloud, which proxies the user to the cluster ingress with the token header injected into the request. The request carrying the header is routed in the cluster to the appropriate pod (the propagation). The Traffic Agent on the service pod sees the header and intercepts the request, redirecting it to the local developer machine that ran the intercept.
+
+## What's Next?
+
+
+You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
diff --git a/docs/telepresence/pre-release/quick-start/demo-react.md b/docs/telepresence/pre-release/quick-start/demo-react.md
new file mode 100644
index 000000000..c887e6e59
--- /dev/null
+++ b/docs/telepresence/pre-release/quick-start/demo-react.md
@@ -0,0 +1,259 @@
+---
+description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging."
+---
+
+import Alert from '@material-ui/lab/Alert';
+import QSCards from './qs-cards';
+import { DownloadDemo } from '../../../../../src/components/Docs/DownloadDemo';
+import { UserInterceptCommand } from '../../../../../src/components/Docs/Telepresence';
+
+# Telepresence Quick Start - React
+
+
+

Contents

+ +* [1. Download the demo cluster archive](#1-download-the-demo-cluster-archive) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Set up the sample application](#3-set-up-the-sample-application) +* [4. Test app](#4-test-app) +* [5. Run a service on your laptop](#5-run-a-service-on-your-laptop) +* [6. Make a code change](#6-make-a-code-change) +* [7. Intercept all traffic to the service](#7-intercept-all-traffic-to-the-service) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+
+In this guide, we'll give you **everything you need in a preconfigured demo cluster:** the Telepresence CLI, a config file for connecting to your demo cluster, and code to run a cluster service locally.
+
+
+ While Telepresence works with any language, this guide uses a sample app with a frontend written in React. We have a version with a Node.js backend if you prefer.
+
+
+
+## 1. Download the demo cluster archive
+
+1. 
+
+2. Extract the archive file, open the `ambassador-demo-cluster` folder, and run the installer script (the commands below might vary based on where your browser saves downloaded files).
+
+   
+   This step will also install some dependency packages onto your laptop using npm; you can see those packages at ambassador-demo-cluster/edgey-corp-nodejs/DataProcessingService/package.json.
+   
+
+   ```
+   cd ~/Downloads
+   unzip ambassador-demo-cluster.zip -d ambassador-demo-cluster
+   cd ambassador-demo-cluster
+   ./install.sh
+   # type y to install the npm dependencies when asked
+   ```
+
+3. Confirm that your `kubectl` is configured to use the demo cluster by getting the status of the cluster nodes; you should see a single node named `tpdemo-prod-...`:
+   `kubectl get nodes`
+
+   ```
+   $ kubectl get nodes
+
+   NAME               STATUS   ROLES                  AGE     VERSION
+   tpdemo-prod-1234   Ready    control-plane,master   5d10h   v1.20.2+k3s1
+   ```
+
+4. Confirm that the Telepresence CLI is now installed (we expect to see that the daemons are not running yet):
+`telepresence status`
+
+   ```
+   $ telepresence status
+
+   Root Daemon: Not running
+   User Daemon: Not running
+   ```
+
+   
+   macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open System Preferences → Security & Privacy → General. Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence status command.
+   
+
+   
+   You now have Telepresence installed on your workstation and a Kubernetes cluster configured in your terminal!
+   
+
+## 2. Test Telepresence
+
+Telepresence connects your local workstation to a remote Kubernetes cluster.
+
+1. Connect to the cluster (this requires **root** privileges and will ask for your password):
+`telepresence connect`
+
+   ```
+   $ telepresence connect
+
+   Launching Telepresence Daemon
+   ...
+   Connected to context default (https://)
+   ```
+
+Check this [FAQ entry](../../troubleshooting#daemon-service-did-not-start) in case the daemon does not start.
+
+2. Test that Telepresence is working properly by connecting to the Kubernetes API server:
+`curl -ik https://kubernetes.default`
+
+   ```
+   $ curl -ik https://kubernetes.default
+
+   HTTP/1.1 401 Unauthorized
+   Cache-Control: no-cache, private
+   Content-Type: application/json
+   ...
+
+   ```
+
+   
+   The 401 response is expected. What's important is that you were able to contact the API.
+   
+
+   
+   Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster.
+   
+
+## 3. Set up the sample application
+
+Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation.
+
+
+
+1. Clone the emojivoto app:
+`git clone https://github.com/datawire/emojivoto.git`
+
+1. Deploy the app to your cluster:
+`kubectl apply -k emojivoto/kustomize/deployment`
+
+1. 
Change the kubectl namespace: +`kubectl config set-context --current --namespace=emojivoto` + +1. List the Services: +`kubectl get svc` + + ``` + $ kubectl get svc + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + emoji-svc ClusterIP 10.43.162.236 8080/TCP,8801/TCP 29s + voting-svc ClusterIP 10.43.51.201 8080/TCP,8801/TCP 29s + web-app ClusterIP 10.43.242.240 80/TCP 29s + web-svc ClusterIP 10.43.182.119 8080/TCP 29s + ``` + +1. Since you’ve already connected Telepresence to your cluster, you can access the frontend service in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). This is the namespace qualified DNS name in the form of `service.namespace`. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Test app + +1. Vote for some emojis and see how the [leaderboard](http://web-app.emojivoto/leaderboard) changes. + +1. There is one emoji that causes an error when you vote for it. Vote for 🍩 and the leaderboard does not actually update. Also an error is shown on the browser dev console: +`GET http://web-svc.emojivoto:8080/api/vote?choice=:doughnut: 500 (Internal Server Error)` + + + Open the dev console in Chrome or Firefox with Option + ⌘ + J (macOS) or Shift + CTRL + J (Windows/Linux).
+ Open the dev console in Safari with Option + ⌘ + C. +
+
+The error is on a backend service, so **we can add an error page to notify the user** while the bug is fixed.
+
+## 5. Run a service on your laptop
+
+Now start up the `web-app` service on your laptop. We'll then make a code change and intercept this service so that we can see the immediate results of our change to the service.
+
+1. **In a new terminal window**, change into the repo directory and build the application:
+
+   `cd /emojivoto`
+   `make web-app-local`
+
+   ```
+   $ make web-app-local
+
+   ...
+   webpack 5.34.0 compiled successfully in 4326 ms
+   ✨ Done in 5.38s.
+   ```
+
+2. Change into the service's code directory and start the server:
+
+   `cd emojivoto-web-app`
+   `yarn webpack serve`
+
+   ```
+   $ yarn webpack serve
+
+   ...
+   ℹ 「wds」: Project is running at http://localhost:8080/
+   ...
+   ℹ 「wdm」: Compiled successfully.
+   ```
+
+3. Access the application at [http://localhost:8080](http://localhost:8080) and see how voting for the 🍩 generates the same error as the application deployed in the cluster.
+
+   
+   Victory, your local React server is running a-ok!
+   
+
+## 6. Make a code change
+We’ve now set up a local development environment for the app. Next we'll make and locally test a code change to the app to improve the issue with voting for 🍩.
+
+1. In the terminal running webpack, stop the server with `Ctrl+c`.
+
+1. In your preferred editor, open the file `emojivoto/emojivoto-web-app/js/components/Vote.jsx` and replace the `render()` function (lines 83 to the end) with [this highlighted code snippet](https://github.com/datawire/emojivoto/blob/main/assets/Vote-fixed.jsx#L83-L149).
+
+1. Run webpack to fully recompile the code, then start the server again:
+
+   `yarn webpack`
+   `yarn webpack serve`
+
+1. Reload the browser tab showing [http://localhost:8080](http://localhost:8080) and vote for 🍩. Notice how you see an error message instead, improving the user experience.
+
+## 7. Intercept all traffic to the service
+Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the app to the version running locally instead.
+
+
+ This command must be run in the terminal window where you ran the script because the script set environment variables to access the demo cluster. Those variables will only apply to that terminal session.
+
+
+1. Start the intercept with the `intercept` command, setting the workload name (a Deployment in this case), namespace, and port:
+`telepresence intercept web-app --namespace emojivoto --port 8080`
+
+   ```
+   $ telepresence intercept web-app --namespace emojivoto --port 8080
+
+   Using deployment web-app
+   intercepted
+       Intercept name: web-app-emojivoto
+       State         : ACTIVE
+       ...
+   ```
+
+2. Go to the frontend service again in your browser at [http://web-app.emojivoto](http://web-app.emojivoto). Voting for 🍩 should now show an error message to the user.
+
+   
+   The web-app Deployment is being intercepted and rerouted to the server on your laptop!
+   
+
+ 
+ We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. 
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## What's Next? + + diff --git a/docs/telepresence/pre-release/quick-start/go.md b/docs/telepresence/pre-release/quick-start/go.md new file mode 100644 index 000000000..2c6c3e5dc --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/go.md @@ -0,0 +1,191 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import { +EmojivotoServicesList, +DCPLink, +Login, +LoginCommand, +DockerCommand, +PreviewUrl, +ExternalIp +} from '../../../../../src/components/Docs/Telepresence'; +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import { UserInterceptCommand, DemoClusterWarning } from '../../../../../src/components/Docs/Telepresence'; + + +# Telepresence Quick Start - **Go** + +This guide provides you with a hands-on tutorial with Telepresence and Golang. To go through this tutorial, the only thing you'll need is a computer that runs Docker Desktop >=20.10.7. We'll give you a pre-configured remote Kubernetes cluster and a Docker container to run locally. + +If you don't have Docker Desktop already installed, go to the [Docker download page](https://www.docker.com/get-started) and install Docker. + +## 1. Get a free remote cluster + +Telepresence connects your local workstation with a remote Kubernetes cluster. In this tutorial, you'll start with a pre-configured, remote cluster. + +1. +2. Go to the Service Catalog to see all the services deployed on your cluster. + + The Service Catalog gives you a consolidated view of all your services across development, staging, and production. After exploring the Service Catalog, continue with this tutorial to test the application in your demo cluster. + + + + +
+
+## 2. Try the Emojivoto application
+
+The remote cluster is running the Emojivoto application, which consists of four services.
+Test out the application:
+
+1. Go to the Emojivoto webapp and vote for some emojis.
+
+   If the link to the remote demo cluster doesn't work, make sure you don't have an ad blocker preventing it from opening.
+
+
+2. Now, click on the 🍩 emoji. You'll see that a bug is present, and voting 🍩 doesn't work.
+
+## 3. Run the Docker container
+
+The bug is present in the `voting-svc` service, so you'll run that service locally. To save time, we've prepared a Docker container with this service running and everything you'll need to fix the bug.
+
+1. Run the Docker container locally by running this command inside your local terminal:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+2. The application is failing due to a little bug inside this service, which uses gRPC to communicate with the other services. We can use `grpcurl` to test the gRPC endpoint and see the error by running:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   (empty)
+
+   Response trailers received:
+   content-type: application/grpc
+   Sent 0 requests and received 0 responses
+   ERROR:
+     Code: Unknown
+     Message: ERROR
+   ```
+
+3. To fix the bug, use the Docker container's embedded IDE. Go to http://localhost:8083 and open `api/api.go`. Remove the `"fmt"` package by deleting line 5.
+
+   ```go
+   3  import (
+   4   "context"
+   5   "fmt" // DELETE THIS LINE
+   6
+   7   pb "github.com/buoyantio/emojivoto/emojivoto-voting-svc/gen/proto"
+   ```
+
+   and replace line `21`:
+
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return nil, fmt.Errorf("ERROR")
+   22  }
+   ```
+   with
+   ```go
+   20  func (pS *PollServiceServer) VoteDoughnut(_ context.Context, _ *pb.VoteRequest) (*pb.VoteResponse, error) {
+   21   return pS.vote(":doughnut:")
+   22  }
+   ```
+   Then save the file (`Ctrl+s` for Windows, `Cmd+s` for Mac, or `Menu -> File -> Save`) and verify that the error is fixed:
+
+   ```
+   $ grpcurl -v -plaintext -import-path ./proto -proto Voting.proto localhost:8081 emojivoto.v1.VotingService.VoteDoughnut
+
+   Resolved method descriptor:
+   rpc VoteDoughnut ( .emojivoto.v1.VoteRequest ) returns ( .emojivoto.v1.VoteResponse );
+
+   Request metadata to send:
+   (empty)
+
+   Response headers received:
+   content-type: application/grpc
+
+   Response contents:
+   {
+   }
+
+   Response trailers received:
+   (empty)
+   Sent 0 requests and received 1 response
+   ```
+
+## 4. Telepresence intercept
+
+1. Now that the bug is fixed, you can use Telepresence to intercept *all* the traffic through your local service.
+Run the following command inside the container:
+
+   ```
+   $ telepresence intercept voting --port 8081:8080
+
+   Using Deployment voting
+   intercepted
+       Intercept name         : voting
+       State                  : ACTIVE
+       Workload kind          : Deployment
+       Destination            : 127.0.0.1:8081
+       Service Port Identifier: 8080
+       Volume Mount Point     : /tmp/telfs-XXXXXXXXX
+       Intercepting           : all TCP connections
+   ```
+   Now you can go back to the Emojivoto webapp, and you'll see that voting for 🍩 works as expected.
+
+You have created an intercept to tell Telepresence where to send traffic. 
The `voting-svc` traffic is now routed to the local Dockerized version of the service: the intercept sends *all the traffic* for `voting-svc` to your local copy, which contains the fix.
+
+
+ Congratulations! Traffic to the remote service is now being routed to your local laptop, and you can see how the local fix works on the remote environment!
+
+
+## 5. Telepresence intercept with a preview URL
+
+Preview URLs allow you to safely share your development environment. With this approach, you can test your local service more accurately, because you have total control over which traffic is handled by your service, all thanks to the preview URL.
+
+1. First, leave the current intercept:
+
+   ```
+   $ telepresence leave voting
+   ```
+
+2. Then log in to Telepresence:
+
+   
+
+3. Create an intercept, which will tell Telepresence to send traffic to the service in our container instead of the service in the cluster. When prompted for ingress configuration, all default values should be correct as displayed below.
+
+   
+
+4. If you access the Emojivoto webapp on your remote cluster and vote for the 🍩 emoji, you'll see the bug is still present.
+
+5. Vote for the 🍩 emoji using the Preview URL obtained in the previous step, and you will see that the bug is fixed, since traffic is being routed to the fixed version running locally.
+
+
## What's Next?

You've intercepted a service in one of our demo clusters; now you can use Telepresence to [intercept a service in your own environment](https://www.getambassador.io/docs/telepresence/latest/howtos/intercepts/)!
diff --git a/docs/telepresence/pre-release/quick-start/index.md b/docs/telepresence/pre-release/quick-start/index.md
new file mode 100644
index 000000000..efcb65b52
--- /dev/null
+++ b/docs/telepresence/pre-release/quick-start/index.md
@@ -0,0 +1,7 @@
+---
+ description: Telepresence Quick Start.
+---
+
+import TelepresenceQuickStartLanding from './TelepresenceQuickStartLanding'
+
+
diff --git a/docs/telepresence/pre-release/quick-start/qs-cards.js b/docs/telepresence/pre-release/quick-start/qs-cards.js
new file mode 100644
index 000000000..31582355b
--- /dev/null
+++ b/docs/telepresence/pre-release/quick-start/qs-cards.js
@@ -0,0 +1,70 @@
+import Grid from '@material-ui/core/Grid';
+import Paper from '@material-ui/core/Paper';
+import Typography from '@material-ui/core/Typography';
+import { makeStyles } from '@material-ui/core/styles';
+import React from 'react';
+
+const useStyles = makeStyles((theme) => ({
+  root: {
+    flexGrow: 1,
+    textAlign: 'center',
+    alignItems: 'stretch',
+    padding: 0,
+  },
+  paper: {
+    padding: theme.spacing(1),
+    textAlign: 'center',
+    color: 'black',
+    height: '100%',
+  },
+}));
+
+export default function CenteredGrid() {
+  const classes = useStyles();
+
+  return (
Collaborating

Use preview URLs to collaborate with your colleagues and others
outside of your organization.

Outbound Sessions

While connected to the cluster, your laptop can interact with
services as if it were another pod in the cluster.

FAQs

Learn more about use cases and the technical implementation of
Telepresence.
+ ); +} diff --git a/docs/telepresence/pre-release/quick-start/qs-go.md b/docs/telepresence/pre-release/quick-start/qs-go.md new file mode 100644 index 000000000..f2b6575c3 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/qs-go.md @@ -0,0 +1,396 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Go** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Go application](#3-install-a-sample-go-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Go application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Go. We have versions in Python (Flask), Python (FastAPI), Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-go.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-go.git + + Cloning into 'edgey-corp-go'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-go/DataProcessingService/` + +3. 
You will use [Fresh](https://pkg.go.dev/github.com/pilu/fresh) to auto-reload the Go server whenever you edit code later in this guide. Install it by running:
    `go get github.com/pilu/fresh`
    Then start the Go server:
    `$GOPATH/bin/fresh`

    ```
    $ go get github.com/pilu/fresh

    $ $GOPATH/bin/fresh

    ...
    10:23:41 app | Welcome to the DataProcessingGoService!
    ```

    Install Go from here and set your GOPATH if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```

    Victory, your local Go server is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Go server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-go/DataProcessingService/main.go` in your editor and change `var color string` from `blue` to `orange` (a sketch of the kind of handler this variable feeds appears after the note below). Save the file and the Go server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
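For reference, the handler behind `/color` is conceptually just a few lines. This is an illustrative sketch of that shape, with names of my own choosing rather than the exact code in `edgey-corp-go`:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// color is the kind of variable you edited in step 1; the frontend
// asks this service for it and renders the UI accordingly.
var color = "orange" // previously "blue"

func main() {
	http.HandleFunc("/color", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// Encodes the color as a JSON string, matching the "blue" and
		// "orange" responses you saw from `curl localhost:3000/color`.
		json.NewEncoder(w).Encode(color)
	})
	log.Fatal(http.ListenAndServe(":3000", nil))
}
```

Because Fresh rebuilds and restarts the server on save, and the intercept keeps routing cluster traffic to `localhost:3000`, the change shows up in the cluster immediately.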
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/pre-release/quick-start/qs-java.md b/docs/telepresence/pre-release/quick-start/qs-java.md new file mode 100644 index 000000000..a42558c81 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/qs-java.md @@ -0,0 +1,390 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Java** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Java application](#3-install-a-sample-java-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Java application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Java. We have versions in Python (FastAPI), Python (Flask), Go, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-java/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-java.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-java.git + + Cloning into 'edgey-corp-java'... + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-java/DataProcessingService/` + +3. Start the Maven server. + `mvn spring-boot:run` + + + Install Java and Maven first if needed. 
+ + + ``` + $ mvn spring-boot:run + + ... + g.d.DataProcessingServiceJavaApplication : Started DataProcessingServiceJavaApplication in 1.408 seconds (JVM running for 1.684) + + ``` + +4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Java server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Java server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-java/DataProcessingService/src/main/resources/application.properties` in your editor and change `app.default.color` on line 2 from `blue` to `orange`. Save the file then stop and restart your Java server. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/pre-release/quick-start/qs-node.md b/docs/telepresence/pre-release/quick-start/qs-node.md new file mode 100644 index 000000000..ff37ffa29 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/qs-node.md @@ -0,0 +1,384 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Node.js** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Node.js application](#3-install-a-sample-nodejs-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Node.js application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Node.js. We have versions in Go, Java,Python using Flask, and Python using FastAPI if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-nodejs/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-nodejs.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-nodejs.git + + Cloning into 'edgey-corp-nodejs'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-nodejs/DataProcessingService/` + +3. Install the dependencies and start the Node server: +`npm install && npm start` + + ``` + $ npm install && npm start + + ... + Welcome to the DataProcessingService! + { _: [] } + Server running on port 3000 + ``` + + + Install Node.js from here if needed. + + +4. 
In a **new terminal window**, curl the service running locally to confirm it’s set to blue: +`curl localhost:3000/color` + + ``` + $ curl localhost:3000/color + + "blue" + ``` + + + Victory, your local Node server is running a-ok! + + +## 5. Intercept all traffic to the service +Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead: + +1. Start the intercept with the `intercept` command, setting the service name and port: +`telepresence intercept dataprocessingservice --port 3000` + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + Using Deployment dataprocessingservice + intercepted + Intercept name: dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : all TCP connections + ``` + +2. Go to the frontend service again in your browser. Since the service is now intercepted it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app. + + + See this doc for more information on how Telepresence resolves DNS. + + + + The frontend’s request to DataProcessingService is being intercepted and rerouted to the Node server on your laptop! + + +## 6. Make a code change +We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes. + +1. Open `edgey-corp-nodejs/DataProcessingService/app.js` in your editor and change line 6 from `blue` to `orange`. Save the file and the Node server will auto reload. + +2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application. + + + We’ve just shown how we can edit code locally, and immediately see these changes in the cluster. +
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/pre-release/quick-start/qs-python-fastapi.md b/docs/telepresence/pre-release/quick-start/qs-python-fastapi.md new file mode 100644 index 000000000..3fc049314 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/qs-python-fastapi.md @@ -0,0 +1,381 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (FastAPI)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + ... + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the FastAPI framework. We have versions in Python (Flask), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python-fastapi/main/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python-fastapi.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python-fastapi.git + + Cloning into 'edgey-corp-python-fastapi'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python-fastapi/DataProcessingService/` + +3. Install the dependencies and start the Python server. 
If `python` and `pip` on your system point to Python 3: `pip install fastapi uvicorn requests && python app.py`
Otherwise, call the Python 3 binaries explicitly: `pip3 install fastapi uvicorn requests && python3 app.py`
(FastAPI requires Python 3, so the first form only works where `python` is an alias for Python 3.)

    ```
    $ pip install fastapi uvicorn requests && python app.py

    Collecting fastapi
    ...
    Application startup complete.

    ```

    Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

    ```
    $ curl localhost:3000/color

    "blue"
    ```

    Victory, your local service is running a-ok!

## 5. Intercept all traffic to the service
Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

    ```
    $ telepresence intercept dataprocessingservice --port 3000

    Using Deployment dataprocessingservice
    intercepted
    Intercept name: dataprocessingservice
    State : ACTIVE
    Workload kind : Deployment
    Destination : 127.0.0.1:3000
    Intercepting : all TCP connections
    ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

    The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change
We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python-fastapi/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 17 from `blue` to `orange`. Save the file and the Python server will auto reload.

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

    We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080) and it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? + + diff --git a/docs/telepresence/pre-release/quick-start/qs-python.md b/docs/telepresence/pre-release/quick-start/qs-python.md new file mode 100644 index 000000000..e4c7b4996 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/qs-python.md @@ -0,0 +1,392 @@ +--- +description: "Install Telepresence and learn to use it to intercept services running in your Kubernetes cluster, speeding up local development and debugging." +--- + +import Alert from '@material-ui/lab/Alert'; +import Platform from '@src/components/Platform'; +import QSCards from './qs-cards' + + + +# Telepresence Quick Start - **Python (Flask)** + +
+

Contents

+ +* [Prerequisites](#prerequisites) +* [1. Install the Telepresence CLI](#1-install-the-telepresence-cli) +* [2. Test Telepresence](#2-test-telepresence) +* [3. Install a sample Python application](#3-install-a-sample-python-application) +* [4. Set up a local development environment](#4-set-up-a-local-development-environment) +* [5. Intercept all traffic to the service](#5-intercept-all-traffic-to-the-service) +* [6. Make a code change](#6-make-a-code-change) +* [7. Create a Preview URL](#7-create-a-preview-url) +* [What's next?](#img-classos-logo-srcimageslogopng-whats-next) + +
+ +## Prerequisites + +You’ll need [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) or `oc` installed +and set up +([Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#verify-kubectl-configuration) / + [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#verify-kubectl-configuration) / + [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#verify-kubectl-configuration)) +to use a Kubernetes cluster, preferably an empty test cluster. This +document uses `kubectl` in all example commands, but OpenShift +users should have no problem substituting in the `oc` command instead. + + + Need a cluster? We provide free demo clusters preconfigured to follow this quick start. Switch over to that version of the guide here. + + +If you have used Telepresence previously, please first reset your Telepresence deployment with: +`telepresence uninstall --everything`. + +## 1. Install the Telepresence CLI + + + + +```shell +# Intel Macs + +# Install via brew: +brew install datawire/blackbird/telepresence + +# OR install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence + +# Apple silicon Macs + +# Install via brew: +brew install datawire/blackbird/telepresence-arm64 + +# OR Install manually: +# 1. Download the latest binary (~60 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```shell +# 1. Download the latest binary (~50 MB): +sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/$dlVersion$/telepresence -o /usr/local/bin/telepresence + +# 2. Make the binary executable: +sudo chmod a+x /usr/local/bin/telepresence +``` + + + + +```powershell +# To install Telepresence, run the following commands +# from PowerShell as Administrator. + +# 1. Download the latest windows zip containing telepresence.exe and its dependencies (~50 MB): +Invoke-WebRequest https://app.getambassador.io/download/tel2/windows/amd64/$dlVersion$/telepresence.zip -OutFile telepresence.zip + +# 2. Unzip the telepresence.zip file to the desired directory, then remove the zip file: +Expand-Archive -Path telepresence.zip -DestinationPath telepresenceInstaller/telepresence +Remove-Item 'telepresence.zip' +cd telepresenceInstaller/telepresence + +# 3. Run the install-telepresence.ps1 to install telepresence's dependencies. It will install telepresence to +# C:\telepresence by default, but you can specify a custom path by passing in -Path C:\my\custom\path +powershell.exe -ExecutionPolicy bypass -c " . '.\install-telepresence.ps1';" + +# 4. Remove the unzipped directory: +cd ../.. +Remove-Item telepresenceInstaller -Recurse -Confirm:$false -Force + +# 5. Telepresence is now installed and you can use telepresence commands in PowerShell. +``` + + + + +## 2. Test Telepresence + +Telepresence connects your local workstation to a remote Kubernetes cluster. + +1. Connect to the cluster: +`telepresence connect` + + ``` + $ telepresence connect + + Launching Telepresence Daemon + ... + Connected to context default (https://) + ``` + + + macOS users: If you receive an error when running Telepresence that the developer cannot be verified, open +
+ System Preferences → Security & Privacy → General. +
+ Click Open Anyway at the bottom to bypass the security block. Then retry the telepresence connect command. +
+ +2. Test that Telepresence is working properly by connecting to the Kubernetes API server: +`curl -ik https://kubernetes.default` + + Didn't work? Make sure you are using Telepresence 2.0.3 or greater, check with telepresence version and upgrade here if needed. + + ``` + $ curl -ik https://kubernetes.default + + HTTP/1.1 401 Unauthorized + Cache-Control: no-cache, private + Content-Type: application/json + Www-Authenticate: Basic realm="kubernetes-master" + Date: Tue, 09 Feb 2021 23:21:51 GMT + Content-Length: 165 + + { + "kind": "Status", + "apiVersion": "v1", + "metadata": { + + }, + "status": "Failure", + "message": "Unauthorized", + "reason": "Unauthorized", + "code": 401 + }% + + ``` + + The 401 response is expected. What's important is that you were able to contact the API. + + + + Congratulations! You’ve just accessed your remote Kubernetes API server, as if you were on the same network! With Telepresence, you’re able to use any tool that you have locally to connect to any service in the cluster. + + +## 3. Install a sample Python application + +Your local workstation may not have the compute or memory resources necessary to run all the services in a multi-service application. In this example, we’ll show you how Telepresence can give you a fast development loop, even in this situation. + + + While Telepresence works with any language, this guide uses a sample app written in Python using the Flask framework. We have versions in Python (FastAPI), Go, Java, and NodeJS if you prefer. + + +1. Start by installing a sample application that consists of multiple services: +`kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml` + + ``` + $ kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml + + deployment.apps/dataprocessingservice created + service/dataprocessingservice created + ... + + ``` + +2. Give your cluster a few moments to deploy the sample application. + + Use `kubectl get pods` to check the status of your pods: + + ``` + $ kubectl get pods + + NAME READY STATUS RESTARTS AGE + verylargedatastore-855c8b8789-z8nhs 1/1 Running 0 78s + verylargejavaservice-7dfddbc95c-696br 1/1 Running 0 78s + dataprocessingservice-5f6bfdcf7b-qvd27 1/1 Running 0 79s + ``` + +3. Once all the pods are in a `Running` state, go to the frontend service in your browser at [http://verylargejavaservice.default:8080](http://verylargejavaservice.default:8080). + +4. You should see the EdgyCorp WebApp with a green title and green pod in the diagram. + + + Congratulations, you can now access services running in your cluster by name from your laptop! + + +## 4. Set up a local development environment +You will now download the repo containing the services' code and run the DataProcessingService service locally. This version of the code has the UI color set to blue instead of green. + + + Confirm first that nothing is running locally on port 3000! If curl localhost:3000 returns Connection refused then you should be good to go. + + +1. Clone the web app’s GitHub repo: +`git clone https://github.com/datawire/edgey-corp-python.git` + + ``` + $ git clone https://github.com/datawire/edgey-corp-python.git + + Cloning into 'edgey-corp-python'... + remote: Enumerating objects: 441, done. + ... + ``` + +2. Change into the repo directory, then into DataProcessingService: +`cd edgey-corp-python/DataProcessingService/` + +3. 
Install the dependencies and start the Python server.
Python 2.x: `pip install flask requests && python app.py`
Python 3.x: `pip3 install flask requests && python3 app.py`

   ```
   $ pip install flask requests && python app.py

   Collecting flask
   ...
   Welcome to the DataServiceProcessingPythonService!
   ...
   ```

   Install Python from here if needed.

4. In a **new terminal window**, curl the service running locally to confirm it’s set to blue:
`curl localhost:3000/color`

   ```
   $ curl localhost:3000/color

   "blue"
   ```

   Victory, your local Python server is running a-ok!

## 5. Intercept all traffic to the service

Next, we’ll create an intercept. An intercept is a rule that tells Telepresence where to send traffic. In this example, we will send all traffic destined for the DataProcessingService to the version of the DataProcessingService running locally instead:

1. Start the intercept with the `intercept` command, setting the service name and port:
`telepresence intercept dataprocessingservice --port 3000`

   ```
   $ telepresence intercept dataprocessingservice --port 3000

   Using Deployment dataprocessingservice
   intercepted
   Intercept name: dataprocessingservice
   State         : ACTIVE
   Workload kind : Deployment
   Destination   : 127.0.0.1:3000
   Intercepting  : all TCP connections
   ```

2. Go to the frontend service again in your browser. Since the service is now intercepted, it can be reached directly by its service name at [http://verylargejavaservice:8080](http://verylargejavaservice:8080). You will now see the blue elements in the app.

   The frontend’s request to DataProcessingService is being intercepted and rerouted to the Python server on your laptop!

## 6. Make a code change

We’ve now set up a local development environment for the DataProcessingService, and we’ve created an intercept that sends traffic in the cluster to our local environment. We can now combine these two concepts to show how we can quickly make and test changes.

1. Open `edgey-corp-python/DataProcessingService/app.py` in your editor and change `DEFAULT_COLOR` on line 15 from `blue` to `orange`. Save the file and the Python server will auto-reload. (A sketch of the kind of code you are editing appears after this section.)

2. Now, visit [http://verylargejavaservice:8080](http://verylargejavaservice:8080) again in your browser. You will now see the orange elements in the application.

   We’ve just shown how we can edit code locally, and immediately see these changes in the cluster.
+ Normally, this process would require a container build, push to registry, and deploy. +
+ With Telepresence, these changes happen instantly. +
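For context, here is a rough sketch of the kind of Flask code you just edited. This is a simplified approximation, not the actual contents of the repo's `app.py`; only `DEFAULT_COLOR` and the `/color` route are taken from the steps above, the rest is illustrative:

```python
from flask import Flask

app = Flask(__name__)

# Step 1 of "Make a code change" edits this constant from "blue" to "orange".
DEFAULT_COLOR = "orange"

@app.route("/color")
def get_color():
    # The frontend calls this endpoint to decide how to color the UI.
    # Returning the value wrapped in quotes matches the output of
    # `curl localhost:3000/color` shown earlier ("blue" / "orange").
    return '"{}"'.format(DEFAULT_COLOR)

if __name__ == "__main__":
    # Port 3000 matches the --port flag used for the intercept, and
    # debug=True provides the auto-reload behavior the guide relies on.
    app.run(host="0.0.0.0", port=3000, debug=True)
```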
+ +## 7. Create a Preview URL + +Create a personal intercept with a preview URL; meaning that only +traffic coming from the preview URL will be intercepted, so you can +easily share the services you’re working on with your teammates. + +1. Clean up your previous intercept by removing it: +`telepresence leave dataprocessingservice` + +2. Log in to Ambassador Cloud, a web interface for managing and + sharing preview URLs: + + ```console + $ telepresence login + Launching browser authentication flow... + + Login successful. + ``` + + If you are in an environment where Telepresence cannot launch a + local browser for you to interact with, you will need to pass the + [`--apikey` flag to `telepresence + login`](../../reference/client/login/). + +3. Start the intercept again: +`telepresence intercept dataprocessingservice --port 3000` + You will be asked for your ingress layer 3 address; specify the front end service: `verylargejavaservice.default` + Then when asked for the port, type `8080`, for "use TLS", type `n` and finally confirm the layer 5 hostname. + + ``` + $ telepresence intercept dataprocessingservice --port 3000 + + To create a preview URL, telepresence needs to know how requests enter + your cluster. Please Select the ingress to use. + + 1/4: What's your ingress' IP address? + You may use an IP address or a DNS name (this is usually a + "service.namespace" DNS name). + + [default: dataprocessingservice.default]: verylargejavaservice.default + + 2/4: What's your ingress' TCP port number? + + [default: 80]: 8080 + + 3/4: Does that TCP port on your ingress use TLS (as opposed to cleartext)? + + [default: n]: + + 4/4: If required by your ingress, specify a different hostname + (TLS-SNI, HTTP "Host" header) to be used in requests. + + [default: verylargejavaservice.default]: + + Using Deployment dataprocessingservice + intercepted + Intercept name : dataprocessingservice + State : ACTIVE + Workload kind : Deployment + Destination : 127.0.0.1:3000 + Intercepting : HTTP requests that match all of: + header("x-telepresence-intercept-id") ~= regexp("86cb4a70-c7e1-1138-89c2-d8fed7a46cae:dataprocessingservice") + Preview URL : https://.preview.edgestack.me + Layer 5 Hostname: verylargejavaservice.default + ``` + +4. Wait a moment for the intercept to start; it will also output a preview URL. Go to this URL in your browser, it will be the orange version of the app. + +5. Now go again to [http://verylargejavaservice:8080](http://verylargejavaservice:8080), it’s still green. + +Normal traffic coming to your app gets the green cluster service, but traffic coming from the preview URL goes to your laptop and gets the orange local service! + + + The Preview URL now shows exactly what is running on your local laptop -- in a way that can be securely shared with anyone you work with. + + +## What's Next? 
+ + diff --git a/docs/telepresence/pre-release/quick-start/telepresence-quickstart-landing.less b/docs/telepresence/pre-release/quick-start/telepresence-quickstart-landing.less new file mode 100644 index 000000000..1a8c3ddc7 --- /dev/null +++ b/docs/telepresence/pre-release/quick-start/telepresence-quickstart-landing.less @@ -0,0 +1,185 @@ +@import '~@src/components/Layout/vars.less'; + +.doc-body .telepresence-quickstart-landing { + font-family: @InterFont; + color: @black; + margin: 0 auto 140px; + max-width: @docs-max-width; + min-width: @docs-min-width; + + h1, + h2 { + color: @blue-dark; + font-style: normal; + font-weight: normal; + letter-spacing: 0.25px; + } + + h1 { + font-size: 33px; + line-height: 40px; + + svg { + vertical-align: text-bottom; + } + } + + h2 { + font-size: 23px; + line-height: 33px; + margin: 0 0 1rem; + + .highlight-mark { + background: transparent; + color: @blue-dark; + background: -moz-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -webkit-gradient( + linear, + left top, + left bottom, + color-stop(0%, transparent), + color-stop(60%, transparent), + color-stop(60%, fade(@blue-electric, 15%)), + color-stop(100%, fade(@blue-electric, 15%)) + ); + background: -webkit-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -o-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: -ms-linear-gradient( + top, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + background: linear-gradient( + to bottom, + transparent 0%, + transparent 60%, + fade(@blue-electric, 15%) 60%, + fade(@blue-electric, 15%) 100% + ); + filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='transparent', endColorstr='fade(@blue-electric, 15%)',GradientType=0 ); + padding: 0 3px; + margin: 0 0.1em 0 0; + } + } + + .telepresence-choice { + background: @white; + border: 2px solid @grey-separator; + box-shadow: -6px 12px 0px fade(@black, 12%); + border-radius: 8px; + padding: 20px; + + strong { + color: @blue; + } + } + + .telepresence-choice-wrapper { + border-bottom: solid 1px @grey-separator; + column-gap: 60px; + display: inline-grid; + grid-template-columns: repeat(2, 1fr); + margin: 20px 0 50px; + padding: 0 0 62px; + width: 100%; + + .telepresence-choice { + ol { + li { + font-size: 14px; + } + } + + .get-started-button { + background-color: @green; + border-radius: 5px; + color: @white; + display: inline-flex; + font-style: normal; + font-weight: 600; + font-size: 14px; + line-height: 24px; + margin: 0 0 15px 5px; + padding: 13px 20px; + align-items: center; + letter-spacing: 1.25px; + text-decoration: none; + text-transform: uppercase; + transition: background-color 200ms linear 0ms; + + svg { + fill: @white; + height: 20px; + width: 20px; + } + + &:hover { + background-color: @green-dark; + text-decoration: none; + } + } + + p { + font-style: normal; + font-weight: normal; + font-size: 16px; + line-height: 26px; + letter-spacing: 0.5px; + } + } + } + + .video-wrapper { + display: flex; + flex-direction: row; + + ul { + li { + font-size: 14px; + margin: 0 10px 10px 0; + } + } + + div { + &.video-container { + flex: 1 1 70%; + position: relative; + width: 100%; + padding-bottom: 39.375%; + + .video { + position: absolute; + top: 0; + left: 0; + 
width: 100%; + height: 100%; + border: 0; + } + } + + &.description { + flex: 0 1 30%; + } + } + } +} diff --git a/docs/telepresence/pre-release/redirects.yml b/docs/telepresence/pre-release/redirects.yml new file mode 100644 index 000000000..5961b3477 --- /dev/null +++ b/docs/telepresence/pre-release/redirects.yml @@ -0,0 +1 @@ +- {from: "", to: "quick-start"} diff --git a/docs/telepresence/pre-release/reference/architecture.md b/docs/telepresence/pre-release/reference/architecture.md new file mode 100644 index 000000000..0b0992ba5 --- /dev/null +++ b/docs/telepresence/pre-release/reference/architecture.md @@ -0,0 +1,95 @@ +--- +description: "How Telepresence works to intercept traffic from your Kubernetes cluster to code running on your laptop." +--- + +# Telepresence Architecture + +
+ +![Telepresence Architecture](../../../../../images/documentation/telepresence-architecture.inline.svg) + +
## Telepresence CLI

The Telepresence CLI orchestrates all the moving parts: it starts the Telepresence User-Daemon, installs the Traffic Manager in your cluster, authenticates against Ambassador Cloud, and configures all of those elements to communicate with one another.

## Telepresence Daemons

Telepresence runs daemons on the developer's workstation that act as the main point of communication with the cluster's network, both for ordinary outbound traffic to the cluster and for intercepted traffic.

### User-Daemon

The User-Daemon installs the Traffic Manager in your cluster and coordinates the creation and deletion of intercepts by communicating with the [Traffic Manager](#traffic-manager) once it is running.

When you run `telepresence login`, Telepresence installs an enhanced version of the User-Daemon. This replaces the existing User-Daemon and allows you to create intercepts on your local machine from Ambassador Cloud.

### Root-Daemon

The Root-Daemon manages the networking necessary to handle traffic between the local workstation and the cluster by setting up a TUN device. For a detailed description of how the TUN device manages traffic and why it is necessary, please refer to this blog post: [Implementing Telepresence Networking with a TUN Device](https://blog.getambassador.io/implementing-telepresence-networking-with-a-tun-device-a23a786d51e9).

## Traffic Manager

The Traffic Manager is the central point of communication between Traffic Agents in the cluster and Telepresence Daemons on developer workstations, proxying all relevant inbound and outbound traffic and tracking active intercepts. When Telepresence is run with either the `connect`, `intercept`, or `list` command, the Telepresence CLI first checks the cluster for the Traffic Manager deployment and creates it if it is missing.

When an intercept is created with a Preview URL, the Traffic Manager will establish a connection with Ambassador Cloud so that Preview URL requests can be routed to the cluster. This allows Ambassador Cloud to reach the Traffic Manager without requiring the Traffic Manager to be publicly exposed. Once the Traffic Manager receives a request from a Preview URL, it forwards the request to the ingress service specified at the Preview URL creation.

## Traffic Agent

The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is started, the Traffic Agent container is injected into the workload's pod(s). You can see the Traffic Agent's status by running `kubectl describe pod `.

Depending on the type of intercept that gets created, the Traffic Agent will either route the incoming request to the Traffic Manager so that it gets routed to a developer's workstation, or pass it along to the container in the pod that usually handles requests on that port.

## Ambassador Cloud

Ambassador Cloud enables Preview URLs by generating random ephemeral domain names and routing requests received on those domains from authorized users to the appropriate Traffic Manager.

Ambassador Cloud also lets users manage their Preview URLs: making them publicly accessible, seeing the users who have accessed them, and deleting them.

## Pod-Daemon

The Pod-Daemon is a modified version of the [Telepresence User-Daemon](#user-daemon) built as a container image so that it can be inserted into a `Deployment` manifest as an additional container.
This allows users to create intercepts entirely within the cluster, with the benefit that the intercept stays active until the Deployment with the Pod-Daemon container is removed.

The Pod-Daemon takes arguments and environment variables as part of the `Deployment` manifest to specify which service the intercept should run on and to provide configuration similar to what you would provide when using Telepresence intercepts from the command line.

After being deployed to the cluster, it behaves similarly to the Telepresence User-Daemon and installs the [Traffic Agent sidecar](#traffic-agent) on the service that is being intercepted. After the intercept is created, traffic can then be redirected to the `Deployment` with the Pod-Daemon container instead. The Pod-Daemon will automatically generate a Preview URL so that the intercept can be accessed from outside the cluster. The Preview URL can be obtained from the Pod-Daemon logs if you are deploying it manually.

The Pod-Daemon was created as a component of Deployment Previews: it automatically creates intercepts against development images built by CI, so that the changes from a pull request can be quickly visualized in a live cluster before they land. When using Deployment Previews, the Preview URL link is posted to the associated GitHub pull request.

See the [Deployment Previews quick-start](../../../../cloud/latest/deployment-previews/quick-start) for information on how to get started with Deployment Previews or for a reference on how the Pod-Daemon can be manually deployed to the cluster.

# Changes from Service Preview

Using Ambassador's previous offering, Service Preview, the Traffic Agent had to be manually added to a pod by an annotation. This is no longer required, as the Traffic Agent is automatically injected when an intercept is started.

Service Preview also started an intercept via `edgectl intercept`. The `edgectl` CLI is no longer required to intercept, as this functionality has been moved to the Telepresence CLI.

For both the Traffic Manager and Traffic Agents, configuring Kubernetes ClusterRoles and ClusterRoleBindings is not required as it was in Service Preview. Instead, the user running Telepresence must already have sufficient permissions to add and modify deployments in the cluster.

diff --git a/docs/telepresence/pre-release/reference/client.md b/docs/telepresence/pre-release/reference/client.md new file mode 100644 index 000000000..491dbbb8e --- /dev/null +++ b/docs/telepresence/pre-release/reference/client.md @@ -0,0 +1,31 @@

---
description: "CLI options for Telepresence to intercept traffic from your Kubernetes cluster to code running on your laptop."
---

# Client reference

The [Telepresence CLI client](../../quick-start) is used to connect Telepresence to your cluster, start and stop intercepts, and create preview URLs. All commands are run in the form of `telepresence `.

## Commands

A list of all CLI commands and flags is available by running `telepresence help`, but here is more detail on the most common ones. You can append `--help` to each command below to get even more information about its usage.

| Command | Description |
| --- | --- |
| `connect` | Starts the local daemon, connects Telepresence to your cluster, and installs the Traffic Manager if it is missing. After connecting, outbound traffic is routed to the cluster so that you can interact with services as if your laptop were another pod (for example, curling a service by its name) |
| [`login`](login) | Authenticates you to Ambassador Cloud to create, manage, and share [preview URLs](../../howtos/preview-urls/) |
| `logout` | Logs you out of Ambassador Cloud |
| `license` | Formats a license from Ambassador Cloud into a secret that can be [applied to your cluster](../cluster-config#add-license-to-cluster) if you require features of the extension in an air-gapped environment |
| `status` | Shows the current connectivity status |
| `quit` | Tells the Telepresence daemons to quit |
| `list` | Lists the current active intercepts |
| `intercept` | Intercepts a service; run it with the name of the service to be intercepted and the port to proxy to your laptop: `telepresence intercept --port `. This command can also start a process so you can run a local instance of the service you are intercepting. For example, the following will intercept the hello service on port 8000 and start a Python web server: `telepresence intercept hello --port 8000 -- python3 -m http.server 8000`. A special flag `--docker-run` can be used to run the local instance [in a docker container](../docker-run). |
| `leave` | Stops an active intercept: `telepresence leave hello` |
| `preview` | Create or remove [preview URLs](../../howtos/preview-urls) for existing intercepts: `telepresence preview create ` |
| `loglevel` | Temporarily change the log-level of the traffic-manager, traffic-agents, and user and root daemons |
| `gather-logs` | Gather logs from the traffic-manager, traffic-agents, and the user and root daemons, and export them into a zip file that can be shared with others or included with a GitHub issue. Use `--get-pod-yaml` to include the yaml for the `traffic-manager` and `traffic-agent`s. Use `--anonymize` to replace the actual pod names + namespaces used for the `traffic-manager` and pods containing `traffic-agent`s in the logs. |
| `version` | Show the version of the Telepresence CLI + Traffic Manager (if connected) |
| `uninstall` | Uninstalls Telepresence from your cluster, using the `--agent` flag to target the Traffic Agent for a specific workload, the `--all-agents` flag to remove all Traffic Agents from all workloads, or the `--everything` flag to remove all Traffic Agents and the Traffic Manager |
| `dashboard` | Reopens the Ambassador Cloud dashboard in your browser |
| `current-cluster-id` | Get the cluster ID for your Kubernetes cluster, used for [configuring a license](../cluster-config#add-license-to-cluster) in an air-gapped environment |

diff --git a/docs/telepresence/pre-release/reference/client/login.md b/docs/telepresence/pre-release/reference/client/login.md new file mode 100644 index 000000000..d1d0d8fad --- /dev/null +++ b/docs/telepresence/pre-release/reference/client/login.md @@ -0,0 +1,53 @@

# Telepresence Login

```console
$ telepresence login --help
Authenticate to Ambassador Cloud

Usage:
  telepresence login [flags]

Flags:
      --apikey string   Static API key to use instead of performing an interactive login
```

## Description

Use `telepresence login` to explicitly authenticate with [Ambassador Cloud](https://www.getambassador.io/docs/cloud).
Unless the [`skipLogin` option](../../config) is set, other commands will automatically invoke the `telepresence login` interactive login procedure as necessary, so you rarely need to run `telepresence login` explicitly; doing so is really only required when you need a non-interactive login.

The normal interactive login procedure involves launching a web browser, a user interacting with that web browser, and finally having the web browser make callbacks to the local Telepresence process. If it is not possible to do this (perhaps you are using a headless remote box via SSH, or are using Telepresence in CI), then you may instead have Ambassador Cloud issue an API key that you pass to `telepresence login` with the `--apikey` flag.

## Acquiring an API key

1. Log in to Ambassador Cloud at https://app.getambassador.io/ .

2. Click on your profile icon in the upper-left: ![Screenshot with the mouse pointer over the upper-left profile icon](./apikey-2.png)

3. Click on the "API Keys" menu button: ![Screenshot with the mouse pointer over the "API Keys" menu button](./apikey-3.png)

4. Click on the "generate new key" button in the upper-right: ![Screenshot with the mouse pointer over the "generate new key" button](./apikey-4.png)

5. Enter a description for the key (perhaps the name of your laptop, or simply "CI"), and click "generate api key" to create it.

You may now pass the API key as `KEY` to `telepresence login --apikey=KEY`.

Telepresence will use that "master" API key to create narrower keys for the different components of Telepresence. You will see these appear in the Ambassador Cloud web interface.

diff --git a/docs/telepresence/pre-release/reference/client/login/apikey-2.png b/docs/telepresence/pre-release/reference/client/login/apikey-2.png new file mode 100644 index 000000000..1379502a9 Binary files /dev/null and b/docs/telepresence/pre-release/reference/client/login/apikey-2.png differ
diff --git a/docs/telepresence/pre-release/reference/client/login/apikey-3.png b/docs/telepresence/pre-release/reference/client/login/apikey-3.png new file mode 100644 index 000000000..4559b784d Binary files /dev/null and b/docs/telepresence/pre-release/reference/client/login/apikey-3.png differ
diff --git a/docs/telepresence/pre-release/reference/client/login/apikey-4.png b/docs/telepresence/pre-release/reference/client/login/apikey-4.png new file mode 100644 index 000000000..25c6581a4 Binary files /dev/null and b/docs/telepresence/pre-release/reference/client/login/apikey-4.png differ
diff --git a/docs/telepresence/pre-release/reference/cluster-config.md b/docs/telepresence/pre-release/reference/cluster-config.md new file mode 100644 index 000000000..aad5b64b4 --- /dev/null +++ b/docs/telepresence/pre-release/reference/cluster-config.md @@ -0,0 +1,312 @@

import Alert from '@material-ui/lab/Alert';
import { ClusterConfig } from '../../../../../src/components/Docs/Telepresence';

# Cluster-side configuration

For the most part, Telepresence doesn't require any special configuration in the cluster and can be used right away in any cluster (as long as the user has adequate [RBAC permissions](../rbac) and the cluster's server version is `1.17.0` or higher).

However, some advanced features do require some configuration in the cluster.
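Before relying on the advanced features below, it can be worth confirming that the cluster meets the server-version requirement mentioned above (a quick, optional sanity check; the version numbers in this output are only illustrative):

```
$ kubectl version --short
Client Version: v1.21.1
Server Version: v1.20.2
```

Any `Server Version` of `v1.17.0` or higher is sufficient.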
## TLS

Suppose other applications in the cluster expect to speak TLS to your intercepted application (perhaps you're using a service mesh that does mTLS).

In order to use `--mechanism=http` (or any features that imply `--mechanism=http`), you need to tell Telepresence about the TLS certificates in use.

Tell Telepresence about the certificates in use by adjusting your [workload's](../intercepts/#supported-workloads) Pod template to set a couple of annotations on the intercepted Pods:

```diff
 spec:
   template:
     metadata:
       labels:
         service: your-service
+      annotations:
+        "getambassador.io/inject-terminating-tls-secret": "your-terminating-secret"  # optional
+        "getambassador.io/inject-originating-tls-secret": "your-originating-secret"  # optional
     spec:
+      serviceAccountName: "your-account-that-has-rbac-to-read-those-secrets"
       containers:
```

- The `getambassador.io/inject-terminating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS server certificate to use for decrypting and responding to incoming requests.

  When Telepresence modifies the Service and workload port definitions to point at the Telepresence Agent sidecar's port instead of your application's actual port, the sidecar will use this certificate to terminate TLS.

- The `getambassador.io/inject-originating-tls-secret` annotation (optional) names the Kubernetes Secret that contains the TLS client certificate to use for communicating with your application.

  You will need to set this if your application expects incoming requests to speak TLS (for example, your code expects to handle mTLS itself instead of letting a service-mesh sidecar handle mTLS for it, or the port definition that Telepresence modified pointed at the service-mesh sidecar instead of at your application).

  If you do set this, you should set it to the same client certificate Secret that you configure the Ambassador Edge Stack to use for mTLS.

It is only possible to refer to a Secret that is in the same Namespace as the Pod.

The Pod will need to have permission to `get` and `watch` each of those Secrets.

Telepresence understands `type: kubernetes.io/tls` Secrets and `type: istio.io/key-and-cert` Secrets, as well as `type: Opaque` Secrets that it detects to be formatted as one of those types.

## Air-gapped cluster

If your cluster is on an isolated network such that it cannot communicate with Ambassador Cloud, then some additional configuration is required to acquire a license key in order to use personal intercepts.

### Create a license

1.

2. Generate a new license (if one doesn't already exist) by clicking *Generate New License*.

3. You will be prompted for your Cluster ID. Ensure your kubeconfig context is using the cluster you want to create a license for, then run this command to generate the Cluster ID:

   ```
   $ telepresence current-cluster-id

     Cluster ID: 
   ```

4. Click *Generate API Key* to finish generating the license.

5. On the licenses page, download the license file associated with your cluster.

### Add license to cluster

There are two separate ways you can add the license to your cluster: manually creating and deploying the license secret, or having the Helm chart manage the secret.

You only need to do one of the two options.

#### Manual deploy of license secret

1.
Use this command to generate a Kubernetes Secret config using the license file: + + ``` + $ telepresence license -f + + apiVersion: v1 + data: + hostDomain: + license: + kind: Secret + metadata: + creationTimestamp: null + name: systema-license + namespace: ambassador + ``` + +2. Save the output as a YAML file and apply it to your +cluster with `kubectl`. + +3. When deploying the `traffic-manager` chart, you must add the additional values when running `helm install` by putting +the following into a file (for the example we'll assume it's called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + secret: + # This tells the helm chart not to create the secret since you've created it yourself + create: false + ``` + +4. Install the helm chart into the cluster + + ``` + helm install traffic-manager -n ambassador datawire/telepresence --create-namespace -f license-values.yaml + ``` + +5. Ensure that you have the docker image for the Smart Agent (datawire/ambassador-telepresence-agent:1.11.0) +pulled and in a registry your cluster can pull from. + +6. Have users use the `images` [config key](../config/#images) keys so telepresence uses the aforementioned image for their agent. + +#### Helm chart manages the secret + +1. Get the jwt token from the downloaded license file + + ``` + $ cat ~/Downloads/ambassador.License_for_yourcluster + eyJhbGnotarealtoken.butanexample + ``` + +2. Create the following values file, substituting your real jwt token in for the one used in the example below. +(for this example we'll assume the following is placed in a file called license-values.yaml) + + ``` + licenseKey: + # This mounts the secret into the traffic-manager + create: true + # This is the value from the license file you download. this value is an example and will not work + value: eyJhbGnotarealtoken.butanexample + secret: + # This tells the helm chart to create the secret + create: true + ``` + +3. Install the helm chart into the cluster + + ``` + helm install traffic-manager charts/telepresence -n ambassador --create-namespace -f license-values.yaml + ``` + +Users will now be able to use preview intercepts with the +`--preview-url=false` flag. Even with the license key, preview URLs +cannot be used without enabling direct communication with Ambassador +Cloud, as Ambassador Cloud is essential to their operation. + +If using Helm to install the server-side components, see the chart's [README](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence) to learn how to configure the image registry and license secret. + +Have clients use the [skipLogin](../config/#cloud) key to ensure the cli knows it is operating in an +air-gapped environment. + +## Mutating Webhook + +By default, Telepresence updates the intercepted workload (Deployment, StatefulSet, ReplicaSet) +template to add the [Traffic Agent](../architecture/#traffic-agent) sidecar container and update the +port definitions. If you use GitOps workflows (with tools like ArgoCD) to automatically update your +cluster so that it reflects the desired state from an external Git repository, this behavior can make +your workload out of sync with that external desired state. + +To solve this issue, you can use Telepresence's Mutating Webhook alternative mechanism. Intercepted +workloads will then stay untouched and only the underlying pods will be modified to inject the Traffic +Agent sidecar container and update the port definitions. 
+ +Simply add the `telepresence.getambassador.io/inject-traffic-agent: enabled` annotation to your +workload template's annotations: + +```diff + spec: + template: + metadata: + labels: + service: your-service ++ annotations: ++ telepresence.getambassador.io/inject-traffic-agent: enabled + spec: + containers: +``` + +### Service Port Annotation + +A service port annotation can be added to the workload to make the Mutating Webhook select a specific port +in the service. This is necessary when the service has multiple ports. + +```diff + spec: + template: + metadata: + labels: + service: your-service + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled ++ telepresence.getambassador.io/inject-service-port: https + spec: + containers: +``` + +### Service Name Annotation + +A service name annotation can be added to the workload to make the Mutating Webhook select a specific Kubernetes service. +This is necessary when the workload is exposed by multiple services. + +```diff + spec: + template: + metadata: + labels: + service: your-service + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled ++ telepresence.getambassador.io/inject-service-name: my-service + spec: + containers: +``` + +### Note on Numeric Ports + +If the targetPort of your intercepted service is pointing at a port number, in addition to +injecting the Traffic Agent sidecar, Telepresence will also inject an initContainer that will +reconfigure the pod's firewall rules to redirect traffic to the Traffic Agent. + + +Note that this initContainer requires `NET_ADMIN` capabilities. +If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector. + + + +This requires the Traffic Agent to run as GID 7777. By default, this is disabled on openshift clusters. +To enable running as GID 7777 on a specific openshift namespace, run: +oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE + + +If you need to use numeric ports without the aforementioned capabilities, you can [manually install the agent](../intercepts/manual-agent) + +For example, the following service is using a numeric port, so Telepresence would inject an initContainer into it: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: your-service +spec: + type: ClusterIP + selector: + service: your-service + ports: + - port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: your-service + labels: + service: your-service +spec: + replicas: 1 + selector: + matchLabels: + service: your-service + template: + metadata: + annotations: + telepresence.getambassador.io/inject-traffic-agent: enabled + labels: + service: your-service + spec: + containers: + - name: your-container + image: jmalloc/echo-server + ports: + - containerPort: 8080 +``` diff --git a/docs/telepresence/pre-release/reference/config.md b/docs/telepresence/pre-release/reference/config.md new file mode 100644 index 000000000..3d42b005b --- /dev/null +++ b/docs/telepresence/pre-release/reference/config.md @@ -0,0 +1,298 @@ +# Laptop-side configuration + +## Global Configuration +Telepresence uses a `config.yml` file to store and change certain global configuration values that will be used for all clusters you use Telepresence with. 
The location of this file varies based on your OS:

* macOS: `$HOME/Library/Application Support/telepresence/config.yml`
* Linux: `$XDG_CONFIG_HOME/telepresence/config.yml` or, if that variable is not set, `$HOME/.config/telepresence/config.yml`
* Windows: `%APPDATA%\telepresence\config.yml`

For Linux, the above paths are for a user-level configuration. For system-level configuration, use the file at `$XDG_CONFIG_DIRS/telepresence/config.yml` or, if that variable is empty, `/etc/xdg/telepresence/config.yml`. If a file exists at both the user-level and system-level paths, the user-level file will take precedence.

### Values

The config file currently supports values for the `timeouts`, `logLevels`, `images`, `cloud`, and `grpc` keys.

Here is an example configuration to show you the conventions of how Telepresence is configured.
**Note:** this config shouldn't be used verbatim, since the registry `privateRepo` used doesn't exist.

```yaml
timeouts:
  agentInstall: 1m
  intercept: 10s
logLevels:
  userDaemon: debug
images:
  registry: privateRepo # This overrides the default docker.io/datawire repo
  agentImage: ambassador-telepresence-agent:1.8.0 # This overrides the agent image to inject when intercepting
cloud:
  refreshMessages: 24h # Refresh messages from cloud every 24 hours instead of the default, which is 1 week.
grpc:
  maxReceiveSize: 10Mi
telepresenceAPI:
  port: 9980
intercept:
  appProtocolStrategy: portName
  defaultPort: "8088"
```

#### Timeouts

Values for `timeouts` are all durations, either as a number of seconds or as a string with a unit suffix of `ms`, `s`, `m`, or `h`. Strings can be fractional (`1.5h`) or combined (`2h45m`).

These are the valid fields for the `timeouts` key:

| Field | Description | Type | Default |
|-------------------------|------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|------------|
| `agentInstall` | Waiting for Traffic Agent to be installed | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |
| `apply` | Waiting for a Kubernetes manifest to be applied | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 1 minute |
| `clusterConnect` | Waiting for cluster to be connected | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `intercept` | Waiting for an intercept to become active | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `proxyDial` | Waiting for an outbound connection to be established | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 5 seconds |
| `trafficManagerConnect` | Waiting for the Traffic Manager API to connect for port forwards | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 20 seconds |
| `trafficManagerAPI` | Waiting for connection to the gRPC API after `trafficManagerConnect` is successful | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 15 seconds |
| `helm` | Waiting for Helm operations (e.g. `install`) on the Traffic Manager | [int][yaml-int] or [float][yaml-float] number of seconds, or [duration][go-duration] [string][yaml-str] | 2 minutes |

#### Log Levels

Values for the `logLevels` fields are one of the following strings, case insensitive:

 - `trace`
 - `debug`
 - `info`
 - `warning` or `warn`
 - `error`

For whichever log-level you select, you will get logs labeled with that level and of higher severity (e.g. if you use `info`, you will also get logs labeled `error`, but you will NOT get logs labeled `debug`).

These are the valid fields for the `logLevels` key:

| Field | Description | Type | Default |
|--------------|---------------------------------------------------------------------|---------------------------------------------|---------|
| `userDaemon` | Logging level to be used by the User Daemon (logs to connector.log) | [loglevel][logrus-level] [string][yaml-str] | debug |
| `rootDaemon` | Logging level to be used for the Root Daemon (logs to daemon.log) | [loglevel][logrus-level] [string][yaml-str] | info |

#### Images

Values for `images` are strings. These values affect the objects that are deployed in the cluster, so it's important to ensure users have the same configuration.

Additionally, you can deploy the server-side components with [Helm](../../install/helm), to prevent them from being overridden by a client's config, and use the [mutating-webhook](../cluster-config/#mutating-webhook) to handle installation of the `traffic-agents`.

These are the valid fields for the `images` key:

| Field | Description | Type | Default |
|---------------------|-------------|------|---------|
| `registry` | Docker registry to be used for installing the Traffic Manager and default Traffic Agent. If not using a helm chart to deploy server-side objects, changing this value will create a new traffic-manager deployment when using Telepresence commands. Additionally, changing this value will update installed default `traffic-agents` to use the new registry when creating a new intercept. | Docker registry name [string][yaml-str] | `docker.io/datawire` |
| `agentImage` | `$registry/$imageName:$imageTag` to use when installing the Traffic Agent. Changing this value will update pre-existing `traffic-agents` to use this new image.
*The `registry` value is not used for the `traffic-agent` if you have this value set.* | qualified Docker image name [string][yaml-str] | (unset) | +| `webhookRegistry` | The container `$registry` that the [Traffic Manager](../cluster-config/#mutating-webhook) will use with the `webhookAgentImage` *This value is only used if a new `traffic-manager` is deployed* | Docker registry name [string][yaml-str] | `docker.io/datawire` | +| `webhookAgentImage` | The container image that the [Traffic Manager](../cluster-config/#mutating-webhook) will pull from the `webhookRegistry` when installing the Traffic Agent in annotated pods *This value is only used if a new `traffic-manager` is deployed* | non-qualified Docker image name [string][yaml-str] | (unset) | + +#### Cloud +Values for `cloud` are listed below and their type varies, so please see the chart for the expected type for each config value. +These fields control how the client interacts with the Cloud service. + +| Field | Description | Type | Default | +|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------|---------| +| `skipLogin` | Whether the CLI should skip automatic login to Ambassador Cloud. If set to true, in order to perform personal intercepts you must have a [license key](../cluster-config/#air-gapped-cluster) installed in the cluster. | [bool][yaml-bool] | false | +| `refreshMessages` | How frequently the CLI should communicate with Ambassador Cloud to get new command messages, which also resets whether the message has been raised or not. You will see each message at most once within the duration given by this config | [duration][go-duration] [string][yaml-str] | 168h | +| `systemaHost` | The host used to communicate with Ambassador Cloud | [string][yaml-str] | app.getambassador.io | +| `systemaPort` | The port used with `systemaHost` to communicate with Ambassador Cloud | [string][yaml-str] | 443 | + +Telepresence attempts to auto-detect if the cluster is capable of +communication with Ambassador Cloud, but may still prompt you to log +in in cases where only the on-laptop client wishes to communicate with +Ambassador Cloud. If you want those auto-login points to be disabled +as well, or would like it to not attempt to communicate with +Ambassador Cloud at all (even for the auto-detection), then be sure to +set the `skipLogin` value to `true`. + +Reminder: To use personal intercepts, which normally require a login, +you must have a license key in your cluster and specify which +`agentImage` should be installed by also adding the following to your +`config.yml`: + +```yaml +images: + agentImage: / +``` + +#### Grpc +The `maxReceiveSize` determines how large a message that the workstation receives via gRPC can be. The default is 4Mi (determined by gRPC). All traffic to and from the cluster is tunneled via gRPC. + +The size is measured in bytes. You can express it as a plain integer or as a fixed-point number using E, G, M, or K. You can also use the power-of-two equivalents: Gi, Mi, Ki. For example, the following represent roughly the same value: +``` +128974848, 129e6, 129M, 123Mi +``` + +#### RESTful API server +The `telepresenceAPI` controls the behavior of Telepresence's RESTful API server that can be queried for additional information about ongoing intercepts. 
When present, and the `port` is set to a valid port number, it is propagated to the auto-installer so that application containers that can be intercepted get the `TELEPRESENCE_API_PORT` environment variable set. The server can then be queried at `localhost:`. In addition, the `traffic-agent` and the `user-daemon` on the workstation that performs an intercept will start the server on that port.
If the `traffic-manager` is auto-installed, its webhook agent injector will be configured to add the `TELEPRESENCE_API_PORT` environment variable to the app container when the `traffic-agent` is injected.
See [RESTful API server](../restapi) for more info.

#### Intercept

The `intercept` key controls how Telepresence intercepts communications to the intercepted service.

The `defaultPort` controls which port is selected when no `--port` flag is given to the `telepresence intercept` command. The default value is "8080".

The `appProtocolStrategy` is only relevant when using personal intercepts. It controls how Telepresence selects the application protocol to use when intercepting a service that has no `service.ports.appProtocol` defined. Valid values are:

| Value | Resulting action |
|--------------|---------------------------------------------------------------------------------------------------------|
| `http2Probe` | The telepresence traffic-agent will probe the intercepted container to check whether it supports http2 |
| `portName` | Telepresence will make an educated guess about the protocol based on the name of the service port |
| `http` | Telepresence will use http |
| `http2` | Telepresence will use http2 |

When `portName` is used, Telepresence will determine the protocol by the name of the port: `[-suffix]`. The following protocols are recognized:

| Protocol | Meaning |
|----------|---------------------------------------|
| `http` | Plaintext HTTP/1.1 traffic |
| `http2` | Plaintext HTTP/2 traffic |
| `https` | TLS Encrypted HTTP (1.1 or 2) traffic |
| `grpc` | Same as http2 |

## Per-Cluster Configuration

Some configuration is not global to Telepresence and is actually specific to a cluster. Thus, we store that config information in your kubeconfig file, so that it is easier to maintain per-cluster configuration.

### Values

The current per-cluster configuration supports `dns`, `alsoProxy`, and `manager` keys.
To add configuration, simply add a `telepresence.io` entry to the cluster in your kubeconfig like so:

```
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        dns:
        also-proxy:
        manager:
  name: example-cluster
```

#### DNS

The fields for `dns` are: local-ip, remote-ip, exclude-suffixes, include-suffixes, and lookup-timeout.

| Field | Description | Type | Default |
|--------------------|-------------|------|---------|
| `local-ip` | The address of the local DNS server. This entry is only used on Linux systems that are not configured to use systemd-resolved. | IP address [string][yaml-str] | first `nameserver` mentioned in `/etc/resolv.conf` |
| `remote-ip` | The address of the cluster's DNS service.
| IP address [string][yaml-str] | IP of the `kube-dns.kube-system` or the `dns-default.openshift-dns` service | +| `exclude-suffixes` | Suffixes for which the DNS resolver will always fail (or fallback in case of the overriding resolver) | [sequence][yaml-seq] of [strings][yaml-str] | `[".arpa", ".com", ".io", ".net", ".org", ".ru"]` | +| `include-suffixes` | Suffixes for which the DNS resolver will always attempt to do a lookup. Includes have higher priority than excludes. | [sequence][yaml-seq] of [strings][yaml-str] | `[]` | +| `lookup-timeout` | Maximum time to wait for a cluster side host lookup. | [duration][go-duration] [string][yaml-str] | 4 seconds | + +Here is an example kubeconfig: +``` +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + dns: + include-suffixes: + - .se + exclude-suffixes: + - .com + name: example-cluster +``` + + +#### AlsoProxy + +When using `also-proxy`, you provide a list of subnets after the key in your kubeconfig file to be added to the TUN device. +All connections to addresses that the subnet spans will be dispatched to the cluster + +Here is an example kubeconfig for the subnet `1.2.3.4/32`: +``` +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + also-proxy: + - 1.2.3.4/32 + name: example-cluster +``` + +#### NeverProxy + +When using `never-proxy` you provide a list of subnets after the key in your kubeconfig file. These will never be routed via the +TUN device, even if they fall within the subnets (pod or service) for the cluster. Instead, whatever route they have before +telepresence connects is the route they will keep. + +Here is an example kubeconfig for the subnet `1.2.3.4/32`: + +```yaml +apiVersion: v1 +clusters: +- cluster: + server: https://127.0.0.1 + extensions: + - name: telepresence.io + extension: + never-proxy: + - 1.2.3.4/32 + name: example-cluster +``` + +##### Using AlsoProxy together with NeverProxy + +Never proxy and also proxy are implemented as routing rules, meaning that when the two conflict, regular routing routes apply. +Usually this means that the most specific route will win. + +So, for example, if an `also-proxy` subnet falls within a broader `never-proxy` subnet: + +```yaml +never-proxy: [10.0.0.0/16] +also-proxy: [10.0.5.0/24] +``` + +Then the specific `also-proxy` of `10.0.5.0/24` will be proxied by the TUN device, whereas the rest of `10.0.0.0/16` will not. + +Conversely if a `never-proxy` subnet is inside a larger `also-proxy` subnet: + +```yaml +also-proxy: [10.0.0.0/16] +never-proxy: [10.0.5.0/24] +``` + +Then all of the also-proxy of `10.0.0.0/16` will be proxied, with the exception of the specific `never-proxy` of `10.0.5.0/24` + +#### Manager + +The `manager` key contains configuration for finding the `traffic-manager` that telepresence will connect to. 
#### Manager

The `manager` key contains configuration for finding the `traffic-manager` that Telepresence will connect to. It supports one key, `namespace`, indicating the namespace where the traffic manager is to be found.

Here is an example kubeconfig that will instruct Telepresence to connect to a manager in namespace `staging`:

```yaml
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1
    extensions:
    - name: telepresence.io
      extension:
        manager:
          namespace: staging
  name: example-cluster
```

[yaml-bool]: https://yaml.org/type/bool.html
[yaml-float]: https://yaml.org/type/float.html
[yaml-int]: https://yaml.org/type/int.html
[yaml-seq]: https://yaml.org/type/seq.html
[yaml-str]: https://yaml.org/type/str.html
[go-duration]: https://pkg.go.dev/time#ParseDuration
[logrus-level]: https://github.com/sirupsen/logrus/blob/v1.8.1/logrus.go#L25-L45

diff --git a/docs/telepresence/pre-release/reference/dns.md b/docs/telepresence/pre-release/reference/dns.md new file mode 100644 index 000000000..e38fbc61d --- /dev/null +++ b/docs/telepresence/pre-release/reference/dns.md

# DNS resolution

The Telepresence DNS resolver is dynamically configured to resolve names using the namespaces of currently active intercepts. Processes running locally on the desktop will have network access to all services in such namespaces by service name only.

All intercepts contribute to the DNS resolver, even those that do not use the `--namespace=<value>` option. This is because `--namespace default` is implied, and in this context, `default` is treated just like any other namespace.

No namespaces are used by the DNS resolver (not even `default`) when no intercepts are active, which means that no service is available by `<service-name>` only. Without an active intercept, the namespace-qualified DNS name must be used (in the form `<service-name>.<namespace>`).

See this demonstrated below, using the [quick start's](../../quick-start/) sample app services.

No intercepts are currently running, so we'll connect to the cluster and list the services that can be intercepted.

```
$ telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster-ip>)

$ telepresence list

  web-app-5d568ccc6b : ready to intercept (traffic-agent not yet installed)
  emoji              : ready to intercept (traffic-agent not yet installed)
  web                : ready to intercept (traffic-agent not yet installed)

$ curl web-app:80

  curl: (6) Could not resolve host: web-app

```

This is expected, as Telepresence cannot reach the service by short name without an active intercept in that namespace.

```
$ curl web-app.emojivoto:80

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Using the namespace-qualified DNS name, though, does work.
Now we'll start an intercept against another service in the same namespace. Remember, `--namespace default` is implied since it is not specified.

```
$ telepresence intercept web --port 8080

  Using Deployment web
  intercepted
      Intercept name    : web
      State             : ACTIVE
      Workload kind     : Deployment
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /tmp/telfs-166119801
      Intercepting      : HTTP requests that match all headers:
        'x-telepresence-intercept-id: 8eac04e3-bf24-4d62-b3ba-35297c16f5cd:web'

$ curl web-app:80

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="UTF-8">
      <title>Emoji Vote</title>
  ...
```

Now curling that service by its short name works, and it will continue to work as long as the intercept is active.

The DNS resolver will always be able to resolve services using `<service-name>.<namespace>` regardless of intercepts.

See [Outbound connectivity](../routing/#dns-resolution) for details on DNS lookups.
diff --git a/docs/telepresence/pre-release/reference/docker-run.md b/docs/telepresence/pre-release/reference/docker-run.md new file mode 100644 index 000000000..2262f0a55 --- /dev/null +++ b/docs/telepresence/pre-release/reference/docker-run.md

---
Description: "How a Telepresence intercept can run a Docker container with configured environment and volume mounts."
---

# Using Docker for intercepts

If you want your intercept to go to a Docker container on your laptop, use the `--docker-run` option. It creates the intercept, runs your container in the foreground, then automatically ends the intercept when the container exits.

`telepresence intercept <service-name> --port <port> --docker-run -- <docker-run arguments> <image>`

The `--` separates flags intended for `telepresence intercept` from flags intended for `docker run`.

## Example

Imagine you are working on a new version of your frontend service. It is running in your cluster as a Deployment called `frontend-v1`. You use Docker on your laptop to build an improved version of the container called `frontend-v2`. To test it out, use this command to run the new container on your laptop and start an intercept of the cluster service to your local container.

`telepresence intercept frontend-v1 --port 8000 --docker-run -- frontend-v2`

## Ports

The `--port` flag can specify an additional port when `--docker-run` is used so that the local and container port can be different. This is done using `--port <local-port>:<container-port>`. The container port will default to the local port when using the `--port <port>` syntax.

## Flags

Telepresence will automatically pass some relevant flags to Docker in order to connect the container with the intercept. Those flags are combined with the arguments given after `--` on the command line.

- `--dns-search tel2-search` Enables single-label name lookups in intercepted namespaces
- `--env-file <file>` Loads the intercepted environment
- `--name intercept-<intercept-name>-<intercept-namespace>` Names the Docker container; this flag is omitted if explicitly given on the command line
- `-p <port>:<container-port>` The local port for the intercept and the container port
- `-v <local-mount-dir>:<container-mount-dir>` Volume mount specification; see CLI help for the `--mount` and `--docker-mount` flags for more info
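Putting those flags together for the earlier `frontend-v2` example, the effective `docker run` invocation might look roughly like this (a sketch; the env-file path and container name are generated by Telepresence and are shown here only as assumptions):

```console
$ docker run \
    --dns-search tel2-search \
    --env-file /tmp/frontend-v1.env \
    --name intercept-frontend-v1-default \
    -p 8000:8000 \
    frontend-v2
```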
diff --git a/docs/telepresence/pre-release/reference/environment.md b/docs/telepresence/pre-release/reference/environment.md new file mode 100644 index 000000000..7f83ff119 --- /dev/null +++ b/docs/telepresence/pre-release/reference/environment.md

---
description: "How Telepresence can import environment variables from your Kubernetes cluster to use with code running on your laptop."
---

# Environment variables

Telepresence can import environment variables from the cluster pod when running an intercept.
You can then use these variables with the code for the intercepted service that runs on your laptop.

There are three options available to do this:

1. `telepresence intercept [service] --port [port] --env-file=FILENAME`

   This will write the environment variables to a Docker Compose `.env` file. This file can be used with `docker-compose` when starting containers locally. Please see the Docker documentation regarding the [file syntax](https://docs.docker.com/compose/env-file/) and [usage](https://docs.docker.com/compose/environment-variables/) for more information.

2. `telepresence intercept [service] --port [port] --env-json=FILENAME`

   This will write the environment variables to a JSON file. This file can be injected into other build processes.

3. `telepresence intercept [service] --port [port] -- [COMMAND]`

   This will run a command locally with the pod's environment variables set on your laptop. Once the command quits, the intercept is stopped (as if `telepresence leave [service]` was run). This can be used in conjunction with a local server command, such as `python [FILENAME]` or `node [FILENAME]`, to run a service locally while using the environment variables that were set on the pod via a ConfigMap or other means.

   Another use would be running a subshell, Bash for example:

   `telepresence intercept [service] --port [port] -- /bin/bash`

   This would start the intercept and then launch the subshell on your laptop with all the same variables set as on the pod.

## Telepresence Environment Variables

Telepresence adds some useful environment variables in addition to the ones imported from the intercepted pod:

### TELEPRESENCE_ROOT
Directory where all remote volume mounts are rooted. See [Volume Mounts](../volume/) for more info.

### TELEPRESENCE_MOUNTS
Colon-separated list of remotely mounted directories.

### TELEPRESENCE_CONTAINER
The name of the intercepted container. Useful when a pod has several containers and you want to know which one was intercepted by Telepresence.

### TELEPRESENCE_INTERCEPT_ID
ID of the intercept (same as the "x-intercept-id" HTTP header).

Useful if you need special behavior when intercepting a pod. One example might be when dealing with pub/sub systems like Kafka, where all processes that don't have the `TELEPRESENCE_INTERCEPT_ID` set can filter out all messages that contain an `x-intercept-id` header, while those that do instead filter based on a matching `x-intercept-id` header. This ensures that messages belonging to a certain intercept are always consumed by the intercepting process. A sketch of this pattern follows below.
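As a sketch of that filtering logic (assumptions: a hypothetical `shouldConsume` helper, and messages whose headers are exposed as a `map[string]string`):

```go
package main

import (
	"fmt"
	"os"
)

// shouldConsume sketches the filtering described above: a process that is
// intercepting (TELEPRESENCE_INTERCEPT_ID is set) consumes only messages
// tagged with its own intercept ID, while every other process skips tagged
// messages and consumes the rest.
func shouldConsume(headers map[string]string) bool {
	id := os.Getenv("TELEPRESENCE_INTERCEPT_ID")
	tag := headers["x-intercept-id"]
	if id == "" {
		return tag == "" // not intercepting: only consume untagged messages
	}
	return tag == id // intercepting: only consume messages tagged for us
}

func main() {
	fmt.Println(shouldConsume(map[string]string{"x-intercept-id": "example-id"}))
}
```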
diff --git a/docs/telepresence/pre-release/reference/inside-container.md b/docs/telepresence/pre-release/reference/inside-container.md new file mode 100644 index 000000000..f83ef3575 --- /dev/null +++ b/docs/telepresence/pre-release/reference/inside-container.md

# Running Telepresence inside a container

It is sometimes desirable to run Telepresence inside a container. One reason can be to avoid any side effects on the workstation's network; another can be to establish multiple sessions with the traffic manager, or even to work with different clusters simultaneously.

## Building the container

Building a container with a ready-to-run Telepresence is easy because there are relatively few external dependencies. Add the following to a `Dockerfile`:

```Dockerfile
# Dockerfile with telepresence and its prerequisites
FROM alpine:3.13

# Install Telepresence prerequisites
RUN apk add --no-cache curl iproute2 sshfs

# Download and install the telepresence binary
RUN curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o telepresence && \
    install -o root -g root -m 0755 telepresence /usr/local/bin/telepresence
```
In order to build the container, do this in the same directory as the `Dockerfile`:
```
$ docker build -t tp-in-docker .
```

## Running the container

Telepresence will need access to the `/dev/net/tun` device on your Linux host (or, in case the host isn't Linux, the Linux VM that Docker starts automatically), and a Kubernetes config that identifies the cluster. It will also need `--cap-add=NET_ADMIN` to create its Virtual Network Interface.

The command to run the container can look like this:
```bash
$ docker run \
    --cap-add=NET_ADMIN \
    --device /dev/net/tun:/dev/net/tun \
    --network=host \
    -v ~/.kube/config:/root/.kube/config \
    -it --rm tp-in-docker
```
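Once inside the container, the normal workflow applies; for instance (a sketch, with output abbreviated and assuming the `tp-in-docker` image built above):

```console
/ # telepresence connect

  Connecting to traffic manager...
  Connected to context default (https://<cluster-ip>)
```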
diff --git a/docs/telepresence/pre-release/reference/intercepts/index.md b/docs/telepresence/pre-release/reference/intercepts/index.md new file mode 100644 index 000000000..283711ff8 --- /dev/null +++ b/docs/telepresence/pre-release/reference/intercepts/index.md

import Alert from '@material-ui/lab/Alert';

# Intercepts

When intercepting a service, Telepresence installs a *traffic-agent*
sidecar into the workload. That traffic-agent supports one or more
intercept *mechanisms* that it uses to decide which traffic to
intercept. Telepresence has a simple default traffic-agent, however
you can configure a different traffic-agent with more sophisticated
mechanisms either by setting the [`images.agentImage` field in
`config.yml`](../config/#images) or by writing an
[`extensions/${extension}.yml`][extensions] file that tells
Telepresence about a traffic-agent that it can use, what mechanisms
that traffic-agent supports, and command-line flags to expose to the
user to configure that mechanism. You may tell Telepresence which
known mechanism to use with the `--mechanism=${mechanism}` flag or by
setting one of the `--${mechanism}-XXX` flags, which implicitly set
the mechanism; for example, setting `--http-match=auto` implicitly
sets `--mechanism=http`.

The default open-source traffic-agent only supports the `tcp`
mechanism, which treats the raw layer 4 TCP streams as opaque and
sends all of that traffic down to the developer's workstation. This
means that it is a "global" intercept, affecting all users of the
cluster.

In addition to the default open-source traffic-agent, Telepresence
already knows about the Ambassador Cloud
[traffic-agent][ambassador-agent], which supports the `http`
mechanism. The `http` mechanism operates at a higher layer, working
with layer 7 HTTP, and may intercept specific HTTP requests, allowing
other HTTP requests through to the regular service. This allows for
"personal" intercepts which only intercept traffic tagged as belonging
to a given developer.

[extensions]: https://pkg.go.dev/github.com/telepresenceio/telepresence/v2@v$version$/pkg/client/cli/extensions
[ambassador-agent]: https://github.com/telepresenceio/telepresence/blob/release/v2.4/pkg/client/cli/extensions/builtin.go#L30-L50

## Intercept behavior when logged in to Ambassador Cloud

Logging in to Ambassador Cloud (with [`telepresence
login`](../client/login/)) changes the Telepresence defaults in two
ways.

First, being logged in to Ambassador Cloud causes Telepresence to
default to `--mechanism=http --http-match=auto` (or just
`--http-match=auto`, as `--http-match` implies `--mechanism=http`).
If you hadn't been logged in, it would have defaulted to
`--mechanism=tcp`. This tells Telepresence to use the Ambassador
Cloud traffic-agent to do smart "personal" intercepts and only
intercept a subset of HTTP requests, rather than just intercepting the
entirety of all TCP connections. This is important for working in a
shared cluster with teammates, and is important for the preview URL
functionality below. See `telepresence intercept --help` for
information on using `--http-match` to customize which requests it
intercepts.

Secondly, being logged in causes Telepresence to default to
`--preview-url=true`. If you hadn't been logged in, it would have
defaulted to `--preview-url=false`. This tells Telepresence to take
advantage of Ambassador Cloud to create a preview URL for this
intercept, creating a shareable URL that automatically sets the
appropriate headers to have requests coming from the preview URL be
intercepted. In order to create the preview URL, it will prompt you
for four settings about how your cluster's ingress is configured. For
each, Telepresence tries to intelligently detect the correct value for
your cluster; if it detects it correctly, you may simply press "enter"
and accept the default; otherwise you must tell Telepresence the
correct value.

When you create an intercept with the `http` mechanism, Telepresence
determines whether the application protocol uses HTTP/1.1 or HTTP/2. If the
service's `ports.appProtocol` field is set, Telepresence uses that. If not,
then Telepresence uses the configured application protocol strategy to determine
the protocol. The default behavior (`http2Probe` strategy) sends a
`GET /telepresence-http2-check` request to your service to determine if it supports
HTTP/2. This is required for the intercepts to behave correctly. A sketch of an
explicit `appProtocol` declaration follows below.
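For example, a service that declares its application protocol explicitly (so no probing is needed) can set `appProtocol` on the port. A minimal sketch, assuming a hypothetical HTTP/2 backend:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service   # hypothetical name
spec:
  selector:
    app: my-grpc-app      # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
    appProtocol: http2    # tells Telepresence the protocol, skipping the http2 probe
```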
### TLS

If the intercepted service has been set up for `--mechanism=http`, Telepresence
needs to terminate the TLS connection for the `http` mechanism to function in your
intercepts. Additionally, you need to ensure that the
[TLS annotations](../cluster-config/#tls) are properly entered in your workload's
Pod template to designate that requests leaving your service still speak TLS
outside of the service as expected.

Use the `--http-plaintext` flag when intercepting a service that uses TLS in the
cluster and you want to use plaintext for the communication with the process on
your local workstation.

## Supported workloads

Kubernetes has various
[workloads](https://kubernetes.io/docs/concepts/workloads/).
Currently, Telepresence supports intercepting (installing a
traffic-agent on) `Deployments`, `ReplicaSets`, and `StatefulSets`.

<Alert severity="info">

While many of our examples use Deployments, they would also work on
ReplicaSets and StatefulSets.

</Alert>

## Specifying a namespace for an intercept

The namespace of the intercepted workload is specified using the
`--namespace` option. When this option is used, and `--workload` is
not used, then the given name is interpreted as the name of the
workload and the name of the intercept will be constructed from that
name and the namespace.

```shell
telepresence intercept hello --namespace myns --port 9000
```

This will intercept a workload named `hello` and name the intercept
`hello-myns`. In order to remove the intercept, you will need to run
`telepresence leave hello-myns` instead of just `telepresence leave
hello`.

The name of the intercept will be left unchanged if the workload is specified.

```shell
telepresence intercept myhello --namespace myns --workload hello --port 9000
```

This will intercept a workload named `hello` and name the intercept `myhello`.

## Importing environment variables

Telepresence can import the environment variables from the pod that is
being intercepted; see [this doc](../environment/) for more details.

## Creating an intercept without a preview URL

If you *are not* logged in to Ambassador Cloud, the following command
will intercept all traffic bound to the service and proxy it to your
laptop. This includes traffic coming through your ingress controller,
so use this option carefully so as not to disrupt production
environments.

```shell
telepresence intercept <service-name> --port=<TCP-port>
```

If you *are* logged in to Ambassador Cloud, setting the
`--preview-url` flag to `false` is necessary.

```shell
telepresence intercept <service-name> --port=<TCP-port> --preview-url=false
```

This will output an HTTP header that you can set on your request for
that traffic to be intercepted:

```console
$ telepresence intercept <service-name> --port=<TCP-port> --preview-url=false
Using Deployment <name>
intercepted
    Intercept name: <name>
    State         : ACTIVE
    Workload kind : Deployment
    Destination   : 127.0.0.1:<local-port>
    Intercepting  : HTTP requests that match all of:
      header("x-telepresence-intercept-id") ~= regexp("<intercept-id>:<name>")
```

Run `telepresence status` to see the list of active intercepts.

```console
$ telepresence status
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://<cluster-public-IP>
  Kubernetes context: default
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop-username>@<laptop-name>
```

Finally, run `telepresence leave <name-of-intercept>` to stop the intercept.

## Skipping the ingress dialogue

You can skip the ingress dialogue by setting the relevant parameters using flags. If any of the following flags are set, the dialogue will be skipped and the flag values will be used instead. If any of the required flags are missing, an error will be thrown.

| Flag             | Description                                                       | Required |
| ---------------- | ----------------------------------------------------------------- | -------- |
| `--ingress-host` | The IP address for the ingress                                    | yes      |
| `--ingress-port` | The port for the ingress                                          | yes      |
| `--ingress-tls`  | Whether TLS should be used                                        | no       |
| `--ingress-l5`   | Whether a different IP address should be used in request headers  | no       |
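For example, a sketch that skips the dialogue by pointing at an assumed ingress service (`ambassador.ambassador` on port `80` is an assumption; adjust for your cluster):

```console
$ telepresence intercept <service-name> --port=<TCP-port> \
    --ingress-host ambassador.ambassador \
    --ingress-port 80
```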
## Creating an intercept when a service has multiple ports

If you are trying to intercept a service that has multiple ports, you
need to tell Telepresence which service port you are trying to
intercept. To specify, you can either use the name of the service
port or the port number itself. To see which options might be
available to you and your service, use kubectl to describe your
service or look in the object's YAML. For more information on multiple
ports, see the [Kubernetes documentation][kube-multi-port-services].

[kube-multi-port-services]: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services

```console
$ telepresence intercept <service-name> --port=<TCP-port>:<service-port-identifier>
Using Deployment <name>
intercepted
    Intercept name         : <name>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

When intercepting a service that has multiple ports, the name of the
service port that has been intercepted is also listed.

If you want to change which port has been intercepted, you can create
a new intercept the same way you did above and it will change which
service port is being intercepted.

## Creating an intercept when multiple services match your workload

Oftentimes, there's a 1-to-1 relationship between a service and a
workload, so Telepresence is able to auto-detect which service it
should intercept based on the workload you are trying to intercept.
But if you use something like
[Argo](https://www.getambassador.io/docs/argo/latest/quick-start/), there may be
two services (that use the same labels) to manage traffic between a
canary and a stable service.

Fortunately, if you know which service you want to use when
intercepting a workload, you can use the `--service` flag. So in the
aforementioned example, if you wanted to use the `echo-stable` service
when intercepting your workload, your command would look like this:

```console
$ telepresence intercept echo-rollout-<generatedHash> --port <TCP-port> --service echo-stable
Using ReplicaSet echo-rollout-<generatedHash>
intercepted
    Intercept name    : echo-rollout-<generatedHash>
    State             : ACTIVE
    Workload kind     : ReplicaSet
    Destination       : 127.0.0.1:3000
    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
    Intercepting      : all TCP connections
```

## Port-forwarding an intercepted container's sidecars

Sidecars are containers that sit in the same pod as an application
container; they usually provide auxiliary functionality to an
application, and can usually be reached at
`localhost:${SIDECAR_PORT}`. For example, a common use case for a
sidecar is to proxy requests to a database: your application would
connect to `localhost:${SIDECAR_PORT}`, and the sidecar would then
connect to the database, perhaps augmenting the connection with TLS or
authentication.

When intercepting a container that uses sidecars, you might want those
sidecars' ports to be available to your local application at
`localhost:${SIDECAR_PORT}`, exactly as they would be if running
in-cluster. Telepresence's `--to-pod ${PORT}` flag implements this
behavior, adding port-forwards for the port given.

```console
$ telepresence intercept <service-name> --port=<TCP-port>:<service-port-identifier> --to-pod=<sidecar-port>
Using Deployment <name>
intercepted
    Intercept name         : <name>
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:<local-port>
    Service Port Identifier: <service-port-identifier>
    Intercepting           : all TCP connections
```

If there are multiple ports that you need forwarded, simply repeat the
flag (`--to-pod=<sidecar-port-1> --to-pod=<sidecar-port-2>`).

## Intercepting headless services

Kubernetes supports creating [services without a ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these `headless` services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
        - name: my-headless
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

You can intercept it like any other:

```console
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
intercepted
    Intercept name    : my-headless
    State             : ACTIVE
    Workload kind     : StatefulSet
    Destination       : 127.0.0.1:8080
    Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
    Intercepting      : all TCP connections
```

<Alert severity="info">
This utilizes an initContainer that requires `NET_ADMIN` capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
</Alert>

<Alert severity="info">
This requires the Traffic Agent to run as GID 7777. By default, this is disabled on OpenShift clusters.
To enable running as GID 7777 on a specific OpenShift namespace, run:
`oc adm policy add-scc-to-group anyuid system:serviceaccounts:$NAMESPACE`
</Alert>

<Alert severity="info">
Intercepting headless services without a selector is not supported.
</Alert>

diff --git a/docs/telepresence/pre-release/reference/intercepts/manual-agent.md b/docs/telepresence/pre-release/reference/intercepts/manual-agent.md new file mode 100644 index 000000000..e818171ce --- /dev/null +++ b/docs/telepresence/pre-release/reference/intercepts/manual-agent.md

import Alert from '@material-ui/lab/Alert';

# Manually injecting the Traffic Agent

You can directly modify your workload's YAML configuration to add the Telepresence Traffic Agent and enable it to be intercepted.

When you use a Telepresence intercept, Telepresence automatically edits the workload and services; those edits can be removed with
`telepresence uninstall --agent <workload-name>`. In some GitOps workflows, you may need to use the
[Telepresence Mutating Webhook](../../cluster-config/#mutating-webhook) to keep intercepted workloads unmodified
while you target changes on specific pods.

<Alert severity="warning">
In situations where you don't have access to the proper permissions for numeric ports, as noted in the Note on numeric ports
section of the documentation, it is possible to manually inject the Traffic Agent. Because this is not the recommended approach
to making a workload interceptable, try the Mutating Webhook before proceeding.
</Alert>

## Procedure

You can manually inject the agent into Deployments, StatefulSets, or ReplicaSets. The example on this page
uses the following Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
```

The deployment is being exposed by the following service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
      targetPort: 8080
```
### 1. Generating the YAML

First, generate the YAML for the traffic-agent container:

```console
$ telepresence genyaml container --container-name echo-container --port 8080 --output - --input deployment.yaml
args:
- agent
env:
- name: TELEPRESENCE_CONTAINER
  value: echo-container
- name: _TEL_AGENT_LOG_LEVEL
  value: info
- name: _TEL_AGENT_NAME
  value: my-service
- name: _TEL_AGENT_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: _TEL_AGENT_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: _TEL_AGENT_APP_PORT
  value: "8080"
- name: _TEL_AGENT_AGENT_PORT
  value: "9900"
- name: _TEL_AGENT_MANAGER_HOST
  value: traffic-manager.ambassador
image: docker.io/datawire/tel2:2.4.6
name: traffic-agent
ports:
- containerPort: 9900
  protocol: TCP
readinessProbe:
  exec:
    command:
    - /bin/stat
    - /tmp/agent/ready
resources: {}
volumeMounts:
- mountPath: /tel_pod_info
  name: traffic-annotations
```

Next, generate the YAML for the volume:

```console
$ telepresence genyaml volume --output - --input deployment.yaml
downwardAPI:
  items:
  - fieldRef:
      fieldPath: metadata.annotations
    path: annotations
name: traffic-annotations
```

<Alert severity="info">
Enter `telepresence genyaml container --help` or `telepresence genyaml volume --help` for more information about these flags.
</Alert>

### 2. Injecting the YAML into the Deployment

You need to edit the `Deployment` YAML to include the container and the volume you generated. These are placed as elements of `spec.template.spec.containers` and `spec.template.spec.volumes` respectively.
You also need to modify `spec.template.metadata.annotations` and add the annotation `telepresence.getambassador.io/manually-injected: "true"`.
These changes should look like the following:

```diff
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "my-service"
  labels:
    service: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      service: my-service
  template:
    metadata:
      labels:
        service: my-service
+     annotations:
+       telepresence.getambassador.io/manually-injected: "true"
    spec:
      containers:
        - name: echo-container
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
          resources: {}
+       - args:
+         - agent
+         env:
+         - name: TELEPRESENCE_CONTAINER
+           value: echo-container
+         - name: _TEL_AGENT_LOG_LEVEL
+           value: info
+         - name: _TEL_AGENT_NAME
+           value: my-service
+         - name: _TEL_AGENT_NAMESPACE
+           valueFrom:
+             fieldRef:
+               fieldPath: metadata.namespace
+         - name: _TEL_AGENT_POD_IP
+           valueFrom:
+             fieldRef:
+               fieldPath: status.podIP
+         - name: _TEL_AGENT_APP_PORT
+           value: "8080"
+         - name: _TEL_AGENT_AGENT_PORT
+           value: "9900"
+         - name: _TEL_AGENT_MANAGER_HOST
+           value: traffic-manager.ambassador
+         image: docker.io/datawire/tel2:2.4.6
+         name: traffic-agent
+         ports:
+         - containerPort: 9900
+           protocol: TCP
+         readinessProbe:
+           exec:
+             command:
+             - /bin/stat
+             - /tmp/agent/ready
+         resources: {}
+         volumeMounts:
+         - mountPath: /tel_pod_info
+           name: traffic-annotations
+     volumes:
+     - downwardAPI:
+         items:
+         - fieldRef:
+             fieldPath: metadata.annotations
+           path: annotations
+       name: traffic-annotations
```

### 3. Modifying the service

Once the modified deployment YAML has been applied to the cluster, you need to modify the Service to route traffic to the Traffic Agent.
You can do this by changing the exposed `targetPort` to `9900`.
The resulting service should look like:

```diff
apiVersion: v1
kind: Service
metadata:
  name: "my-service"
spec:
  type: ClusterIP
  selector:
    service: my-service
  ports:
    - port: 80
-     targetPort: 8080
+     targetPort: 9900
```

diff --git a/docs/telepresence/pre-release/reference/linkerd.md b/docs/telepresence/pre-release/reference/linkerd.md new file mode 100644 index 000000000..9b903fa76 --- /dev/null +++ b/docs/telepresence/pre-release/reference/linkerd.md

---
Description: "How to get Linkerd meshed services working with Telepresence"
---

# Using Telepresence with Linkerd

## Introduction
Getting started with Telepresence on Linkerd services is as simple as adding an annotation to your Deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-outbound-ports: "8081"
```

The local system and the Traffic Agent connect to the Traffic Manager using its gRPC API on port 8081. Telling Linkerd to skip that port allows the Traffic Agent sidecar to fully communicate with the Traffic Manager, and therefore the rest of the Telepresence system.

## Prerequisites
1. [Telepresence binary](../../install)
2. Linkerd control plane [installed to cluster](https://linkerd.io/2.10/tasks/install/)
3. Kubectl
4. [Working ingress controller](https://www.getambassador.io/docs/edge-stack/latest/howtos/linkerd2)

## Deploy
Save and deploy the following YAML. Note the `config.linkerd.io/skip-outbound-ports` annotation in the metadata of the pod template.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        linkerd.io/inject: "enabled"
        config.linkerd.io/skip-outbound-ports: "8081,8022,6001"
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8000
        env:
        - name: PORT
          value: "8000"
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
```

## Connect to Telepresence
Run `telepresence connect` to connect to the cluster. Then `telepresence list` should show the `quote` deployment as `ready to intercept`:

```
$ telepresence list

  quote: ready to intercept (traffic-agent not yet installed)
```

## Run the intercept
Run `telepresence intercept quote --port 8080:80` to direct traffic from the `quote` deployment to port 8080 on your local system. Assuming you have something listening on 8080, you should now be able to see your local service whenever attempting to access the `quote` service.

diff --git a/docs/telepresence/pre-release/reference/rbac.md b/docs/telepresence/pre-release/reference/rbac.md new file mode 100644 index 000000000..2c9af7c1c --- /dev/null +++ b/docs/telepresence/pre-release/reference/rbac.md

import Alert from '@material-ui/lab/Alert';

# Telepresence RBAC
The intention of this document is to provide a template for securing and limiting the permissions of Telepresence.
This documentation covers the full extent of permissions necessary to administrate Telepresence components in a cluster.

There are two general categories for cluster permissions with respect to Telepresence: RBAC settings for a User and for an Administrator, both described below.
The User is expected to only have the minimum cluster permissions necessary to create a Telepresence [intercept](../../howtos/intercepts/), and otherwise be unable to affect Kubernetes resources.

In addition to the above, there is also a consideration of how to manage Users and Groups in Kubernetes, which is outside the scope of this document. This document will use Service Accounts to assign Roles and Bindings. Other methods of RBAC administration and enforcement can be found on the [Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) page.

## Requirements

- Kubernetes version 1.16+
- Cluster admin privileges to apply RBAC

## Editing your kubeconfig

This guide also assumes that you are utilizing a kubeconfig file that is specified by the `KUBECONFIG` environment variable. This is a `yaml` file that contains the cluster's API endpoint information as well as the user data being supplied for authentication. The Service Account name used in the example below is called tp-user. This can be replaced by any value (e.g., John or Jane) as long as references to the Service Account are consistent throughout the `yaml`. After an administrator has applied the RBAC configuration, a user should create a `config.yaml` in their current directory that looks like the following:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster # Must match the cluster value in the contexts config
  cluster:
    ## The cluster field is highly cloud dependent.
contexts:
- name: my-context
  context:
    cluster: my-cluster # Must match the name field in the clusters config
    user: tp-user
users:
- name: tp-user # Must match the name of the Service Account created by the cluster admin
  user:
    token: # See note below
```

The Service Account token will be obtained by the cluster administrator after they create the user's Service Account. Creating the Service Account will create an associated Secret in the same namespace with the format `<service-account-name>-token-<suffix>`. This token can be obtained by your cluster administrator by running `kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d`.

After creating `config.yaml` in your current directory, export the file's location to `KUBECONFIG` by running `export KUBECONFIG=$(pwd)/config.yaml`. You should then be able to switch to this context by running `kubectl config use-context my-context`. The steps are sketched below.
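Putting those steps together (a sketch; `<secret-name>` is the Secret created alongside the `tp-user` Service Account):

```console
$ kubectl get secret <secret-name> -n ambassador -o jsonpath='{.data.token}' | base64 -d
$ export KUBECONFIG=$(pwd)/config.yaml
$ kubectl config use-context my-context
```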
## Administrating Telepresence

Telepresence administration requires permissions for creating `Namespaces`, `ServiceAccounts`, `ClusterRoles`, `ClusterRoleBindings`, `Secrets`, `Services`, `MutatingWebhookConfiguration`, and for creating the `traffic-manager` [deployment](../architecture/#traffic-manager), which is typically done by a full cluster administrator. The following permissions are needed for the installation and use of Telepresence:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: telepresence-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-admin-role
rules:
  - apiGroups:
    - ""
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "create", "delete", "watch"]
  - apiGroups:
    - ""
    resources: ["services"]
    verbs: ["get", "list", "update", "create", "delete"]
  - apiGroups:
    - ""
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups:
    - "apps"
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "update", "create", "delete", "watch"]
  - apiGroups:
    - "getambassador.io"
    resources: ["hosts", "mappings"]
    verbs: ["*"]
  - apiGroups:
    - ""
    resources: ["endpoints"]
    verbs: ["get", "list"]
  - apiGroups:
    - "rbac.authorization.k8s.io"
    resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups:
    - ""
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups:
    - ""
    resources: ["secrets"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups:
    - ""
    resources: ["serviceaccounts"]
    verbs: ["get", "create", "delete"]
  - apiGroups:
    - "admissionregistration.k8s.io"
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "create", "delete"]
  - apiGroups:
    - ""
    resources: ["nodes"]
    verbs: ["list", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-clusterrolebinding
subjects:
  - name: telepresence-admin
    kind: ServiceAccount
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-admin-role
  kind: ClusterRole
```

There are two ways to install the traffic-manager: using `telepresence connect`, or installing the [helm chart](../../install/helm/).

By using `telepresence connect`, Telepresence will use your kubeconfig to create the objects mentioned above in the cluster if they don't already exist. If you want the most introspection into what is being installed, we recommend using the helm chart to install the traffic-manager.

## Cluster-wide telepresence user access

To allow users to make intercepts across all namespaces, but with more limited `kubectl` permissions, the following `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` will allow full `telepresence intercept` functionality.
<Alert severity="info">
The following RBAC configuration assumes that there is already a Traffic Manager deployment set up by a Cluster Administrator.
</Alert>

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate value
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
- apiGroups:
  - ""
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups:
  - ""
  resources: ["services"]
  verbs: ["get", "list", "update"]
- apiGroups:
  - ""
  resources: ["pods/portforward"]
  verbs: ["create"]
- apiGroups:
  - "apps"
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "update", "patch"]
- apiGroups:
  - "getambassador.io"
  resources: ["hosts", "mappings"]
  verbs: ["*"]
- apiGroups:
  - ""
  resources: ["endpoints"]
  verbs: ["get", "list"]
- apiGroups:
  - "rbac.authorization.k8s.io"
  resources: ["clusterroles", "clusterrolebindings"]
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
```

## Namespace only telepresence user access

This section covers RBAC for multi-tenant scenarios where multiple dev teams share a single cluster and users are constrained to specific namespaces.

<Alert severity="info">
The following RBAC configuration assumes that there is already a Traffic Manager deployment set up by a Cluster Administrator.
</Alert>

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user # Update value for appropriate user name
  namespace: ambassador # Traffic-Manager is deployed to Ambassador namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-role
rules:
- apiGroups:
  - ""
  resources: ["pods"]
  verbs: ["get", "list", "create", "watch", "delete"]
- apiGroups:
  - ""
  resources: ["services"]
  verbs: ["update"]
- apiGroups:
  - ""
  resources: ["pods/portforward"]
  verbs: ["create"]
- apiGroups:
  - "apps"
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "update"]
- apiGroups:
  - "getambassador.io"
  resources: ["hosts", "mappings"]
  verbs: ["*"]
- apiGroups:
  - ""
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding # RBAC to access ambassador namespace
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: t2-ambassador-binding
  namespace: ambassador
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding # RoleBinding for the namespace to be intercepted
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-test-binding # Update "test" for appropriate namespace to be intercepted
  namespace: test # Update "test" for appropriate namespace to be intercepted
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-namespace-role
rules:
- apiGroups:
  - ""
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources: ["services"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: telepresence-namespace-binding
subjects:
- kind: ServiceAccount
  name: tp-user # Should be the same as metadata.name of above ServiceAccount
  namespace: ambassador
roleRef:
  kind: ClusterRole
  name: telepresence-namespace-role
  apiGroup: rbac.authorization.k8s.io
```
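To sanity-check what a given configuration actually grants, `kubectl auth can-i` with impersonation is handy. A sketch against the namespace-scoped setup above (expected answers shown):

```console
$ kubectl auth can-i create pods -n ambassador --as=system:serviceaccount:ambassador:tp-user
yes
$ kubectl auth can-i create namespaces --as=system:serviceaccount:ambassador:tp-user
no
```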
diff --git a/docs/telepresence/pre-release/reference/restapi.md b/docs/telepresence/pre-release/reference/restapi.md new file mode 100644 index 000000000..e3934abd4 --- /dev/null +++ b/docs/telepresence/pre-release/reference/restapi.md

# Telepresence RESTful API server

Telepresence can run a RESTful API server on the local host, both on the local workstation and in a pod that contains a `traffic-agent`. The server currently has two endpoints: the standard `healthz` endpoint and the `consume-here` endpoint.

## Enabling the server
The server is enabled by setting `telepresenceAPI.port` to a valid port number in the [Telepresence Helm Chart](https://github.com/telepresenceio/telepresence/tree/release/v2/charts/telepresence). The value may be passed explicitly to Helm during install, or configured using the [Telepresence Config](../config#restful-api-server) to affect an auto-install.

## Querying the server
On the cluster's side, it's the `traffic-agent` of potentially intercepted pods that runs the server. The server can be accessed using `http://localhost:<port>/` from the application container. Telepresence ensures that the container has the `TELEPRESENCE_API_PORT` environment variable set when the `traffic-agent` is installed. On the workstation, it is the `user-daemon` that runs the server. It uses the `TELEPRESENCE_API_PORT` that is conveyed in the environment of the intercept. This means that the server can be accessed in exactly the same way locally, provided that the environment is propagated correctly to the interceptor process.

## Endpoints

### healthz
The `http://localhost:<port>/healthz` endpoint should respond with status code 200 OK. If it doesn't, then something isn't configured correctly. Check that the `traffic-agent` container is present and that the `TELEPRESENCE_API_PORT` has been added to the environment of the application container and/or in the environment that is propagated to the interceptor that runs on the local workstation.

#### Test the endpoint using curl
A `curl -v` call can be used to test the endpoint when an intercept is active. This example assumes that the API port is configured to be 9980.
```console
$ curl -v localhost:9980/healthz
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 26 Nov 2021 07:06:18 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
```

### consume-here
`http://localhost:<port>/consume-here` is intended to be queried with a set of headers, typically obtained from a Kafka message or similar, and will respond with "true" (consume the message) or "false" (leave the message on the queue). When running in the cluster, this endpoint will respond with `false` if the headers match an ongoing intercept for the same workload, because it's assumed that it's up to the intercept to consume the message. When running locally, the response is inverted: matching headers mean that the message should be consumed.

Telepresence provides the ID of the intercept in the environment variable [TELEPRESENCE_INTERCEPT_ID](../environment/#telepresence_intercept_id) during an intercept. This ID must be provided in an `x-telepresence-caller-intercept-id: <intercept-id>` header. Telepresence needs this to identify the caller correctly. The `<intercept-id>` will be empty when running in the cluster, but it's harmless to provide it there too, so there's no need for conditional code.

#### Test the endpoint using curl
There are three prerequisites to fulfill before testing this endpoint using `curl -v` on the workstation.
1. An intercept must be active
2. The "/healthz" endpoint must respond with OK
3. The ID of the intercept must be known. It will be visible as `x-telepresence-intercept-id` in the output of the `telepresence intercept` and `telepresence list` commands unless the intercept was started with `--http-match` flags. If it was, the `--env-file <file>` or `--env-json <file>` flag must also be used so that the environment can be examined. The variable to look for in the file is `TELEPRESENCE_INTERCEPT_ID`.

Assuming that the API server runs on port 9980 and that the intercept was started with `-H 'foo: bar'`, we can now check that "/consume-here" returns "true" for the given headers.
```console
$ curl -v localhost:9980/consume-here -H 'x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest' -H 'foo: bar'
*   Trying ::1:9980...
* Connected to localhost (::1) port 9980 (#0)
> GET /consume-here HTTP/1.1
> Host: localhost:9980
> User-Agent: curl/7.76.1
> Accept: */*
> x-telepresence-caller-intercept-id: 4392d394-100e-4f15-a89b-426012f10e05:apitest
> foo: bar
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Fri, 26 Nov 2021 06:43:28 GMT
< Content-Length: 4
<
* Connection #0 to host localhost left intact
true
```

If you can run curl from the pod, you can try the exact same URL. The result should be "false" when there's an ongoing intercept. The `x-telepresence-caller-intercept-id` header is not needed when the call is made from the pod.
#### Example code:

Here's an example filter written in Go.
It separates the actual URL creation (which only needs to run once) from the filter function to make the filter more performant:
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strconv"
)

const portEnv = "TELEPRESENCE_API_PORT"
const interceptIdEnv = "TELEPRESENCE_INTERCEPT_ID"

// apiURL creates the generic URL needed to access the service
func apiURL() (string, error) {
	pe := os.Getenv(portEnv)
	if _, err := strconv.ParseUint(pe, 10, 16); err != nil {
		return "", fmt.Errorf("value %q of env %s does not represent a valid port number", pe, portEnv)
	}
	return "http://localhost:" + pe, nil
}

// consumeHereURL creates the URL for the "consume-here" endpoint
func consumeHereURL() (string, error) {
	apiURL, err := apiURL()
	if err != nil {
		return "", err
	}
	return apiURL + "/consume-here", nil
}

// consumeHere expects a URL created using consumeHereURL() and calls the endpoint with the given
// headers and returns the result
func consumeHere(url string, hm map[string]string) (bool, error) {
	rq, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return false, err
	}
	rq.Header = make(http.Header, len(hm)+1)
	rq.Header.Set("X-Telepresence-Caller-Intercept-Id", os.Getenv(interceptIdEnv))
	for k, v := range hm {
		rq.Header.Set(k, v)
	}
	rs, err := http.DefaultClient.Do(rq)
	if err != nil {
		return false, err
	}
	defer rs.Body.Close()
	b, err := io.ReadAll(rs.Body)
	if err != nil {
		return false, err
	}
	return strconv.ParseBool(string(b))
}
```
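A hypothetical call site for the helpers above, assuming it lives in the same package:

```go
// main builds the URL once, then consults the endpoint per message.
func main() {
	url, err := consumeHereURL()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The headers would normally come from the received message.
	ok, err := consumeHere(url, map[string]string{"foo": "bar"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("consume this message:", ok)
}
```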
diff --git a/docs/telepresence/pre-release/reference/routing.md b/docs/telepresence/pre-release/reference/routing.md new file mode 100644 index 000000000..671dae5d8 --- /dev/null +++ b/docs/telepresence/pre-release/reference/routing.md

# Connection Routing

## Outbound

### DNS resolution
When requesting a connection to a host, the IP of that host must be determined. Telepresence provides DNS resolvers to help with this task. There are currently four types of resolvers, but only one of them will be used on a workstation at any given time. Common to all of them is that they will propagate a selection of the host lookups to be performed in the cluster. The selection normally includes all names ending with `.cluster.local` or a currently mapped namespace, but more entries can be added to the list using the `include-suffixes` option in the
[local DNS configuration](../config/#dns).

#### Cluster side DNS lookups
The cluster-side host lookup will be performed by the traffic-manager unless the client has an active intercept, in which case the agent performing that intercept will be responsible for doing it. If the client has multiple intercepts, then all of them will be asked to perform the lookup, and the response to the client will contain the unique union of IPs that they produce. It's therefore important never to have multiple intercepts that span more than one namespace[[1](#namespacelimit)] running concurrently on the same workstation, because that would logically put the workstation in several namespaces and make the DNS resolution ambiguous. The reason for asking all of them is that the workstation currently impersonates multiple containers, and it is not possible to determine on behalf of which container the lookup request is made.

#### macOS resolver
This resolver hooks into the macOS DNS system by creating files under `/etc/resolver`. Those files correspond to some domain and contain the port number of the Telepresence resolver. Telepresence creates one such file for each currently mapped namespace and for each entry in the `include-suffixes` option. The file `telepresence.local` contains a search path that is configured based on current intercepts so that single-label names can be resolved correctly.

#### Linux systemd-resolved resolver
This resolver registers itself as part of Telepresence's [VIF](../tun-device) using `systemd-resolved` and uses the DBus API to configure domains and routes that correspond to the current set of intercepts and namespaces.

#### Linux overriding resolver
Linux systems that aren't configured with `systemd-resolved` will use this resolver. A typical case is when running Telepresence [inside a docker container](../inside-container). During initialization, the resolver will first establish a _fallback_ connection to the IP passed as `--dns`, the one configured as `local-ip` in the [local DNS configuration](../config/#dns), or the primary `nameserver` registered in `/etc/resolv.conf`. It will then use iptables to actually override that IP so that requests to it instead end up in the overriding resolver, which, unless it succeeds on its own, will use the _fallback_.

#### Windows resolver
This resolver uses the DNS resolution capabilities of the [win-tun](https://www.wintun.net/) device in conjunction with [Win32_NetworkAdapterConfiguration SetDNSDomain](https://docs.microsoft.com/en-us/powershell/scripting/samples/performing-networking-tasks?view=powershell-7.2#assigning-the-dns-domain-for-a-network-adapter).

#### DNS caching
The Telepresence DNS resolver often changes its configuration. This means that Telepresence must either flush the DNS caches on the local host, or ensure that DNS records returned from the Telepresence resolver aren't cached (or are cached for a very short time). All operating systems have different ways of flushing the DNS caches, and even different versions of one system may have differences. Also, on some systems it is necessary to actually kill and restart processes to ensure a proper flush, which in turn may result in network instabilities.

Starting with 2.4.7, Telepresence will no longer flush the host's DNS caches. Instead, all records will have a short Time To Live (TTL) so that such caches evict the entries quickly. This causes increased load on the Telepresence resolver (a shorter TTL means more frequent queries) and, to cater for that, Telepresence now has an internal cache to minimize the number of DNS queries that it sends to the cluster. This cache is flushed as needed without causing instabilities.

### Routing

#### Subnets
The Telepresence `traffic-manager` service is responsible for discovering the cluster's service subnet and all subnets used by the pods. In order to do this, it needs permission to create a dummy service[[2](#servicesubnet)] in its own namespace, and the ability to list, get, and watch nodes and pods. Most clusters will expose the pod subnets as `podCIDR` in the `Node`, while others, like Amazon EKS, don't. Telepresence will then fall back to deriving the subnets from the IPs of all pods. If you'd like to choose a specific method for discovering subnets, or want to provide the list yourself, you can use the `podCIDRStrategy` configuration value in the [helm](../../install/helm) chart to do that.

The complete set of subnets that the [VIF](../tun-device) will be configured with is dynamic and may change during a connection's life cycle as new nodes arrive or disappear from the cluster.
The set consists of what the traffic-manager finds in the cluster and the subnets configured using the [also-proxy](../config#alsoproxy) configuration option. Telepresence will remove subnets that are equal to, or completely covered by, other subnets.

#### Connection origin
A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach: Telepresence only knows that the request originated from the workstation. It cannot know that it is intended to originate from a specific pod when multiple intercepts are active.

A `--local-only` intercept will not have any effect on the connection origin because there is no pod from which the connection can originate. The intercept must be made on a workload that has been deployed in the cluster if there's a requirement for correct connection origin.

There are multiple reasons for making the request originate from an intercepted pod. One is that it is important that the request originates from the correct namespace. Example:

```bash
curl some-host
```
results in an HTTP request with the header `Host: some-host`. Now, if a service mesh like Istio performs header-based routing, then it will fail to find that host unless the request originates from the same namespace as the host resides in. Another reason is that the configuration of a service mesh can contain very strict rules. If the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.

### Recursion detection
It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subject to recursion.

#### DNS recursion
When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver will be asked to resolve a query that was issued from the cluster. Telepresence must check whether such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.

Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, then that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its own behavior so that queries that are believed to be recursions are detected and answered with an NXNAME record. Telepresence performs this solution to the best of its ability, but may not be completely accurate in all situations. There's a chance that the DNS resolver will yield a false negative for the second query if the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.

#### Connect recursion
A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network.
+#### Connection origin
+A request to connect to an IP address that belongs to one of the subnets of the [VIF](../tun-device) will cause a connection request to be made in the cluster. As with host name lookups, the request will originate from the traffic-manager unless the client has ongoing intercepts. If it does, one of the intercepted pods will be chosen, and the request will instead originate from that pod. This is a best-effort approach: Telepresence only knows that the request originated from the workstation. It cannot know that the request is intended to originate from a specific pod when multiple intercepts are active.
+
+A `--local-only` intercept will not have any effect on the connection origin, because there is no pod from which the connection can originate. If correct connection origin is required, the intercept must be made on a workload that has been deployed in the cluster.
+
+There are multiple reasons for preserving the origin. One is that it is important that the request originates from the correct namespace. For example, `curl some-host` results in an HTTP request with the header `Host: some-host`. If a service mesh like Istio performs header-based routing, it will fail to find that host unless the request originates from the same namespace in which the host resides. Another reason is that the configuration of a service mesh can contain very strict rules; if the request then originates from the wrong pod, it will be denied. Only one intercept at a time can be used if there is a need to ensure that the chosen pod is exactly right.
+
+### Recursion detection
+It is common that clusters used in development, such as Minikube, Minishift, or k3s, run on the same host as the Telepresence client, often in a Docker container. Such clusters may have access to the host network, which means that both DNS and L4 routing may be subjected to recursion.
+
+#### DNS recursion
+When a local cluster's DNS resolver fails to resolve a hostname, it may fall back to querying the local host network. This means that the Telepresence resolver may be asked to resolve a query that was issued from the cluster. Telepresence must check whether such a query is recursive, because there is a chance that it actually originated from the Telepresence DNS resolver and was dispatched to the `traffic-manager` or a `traffic-agent`.
+
+Telepresence handles this by sending one initial DNS query to resolve the hostname "tel2-recursion-check.kube-system". If the cluster runs locally and has access to the local host's network, that query will recurse back into the Telepresence resolver. Telepresence remembers this and alters its behavior so that queries believed to be recursions are detected and answered with a negative (NXDOMAIN) response. Telepresence performs this detection to the best of its ability, but it may not be completely accurate in all situations: the resolver may yield a false negative when the same hostname is queried more than once in rapid succession, that is, when the second query is made before the first query has received a response from the cluster.
+
+#### Connect recursion
+A cluster running locally may dispatch connection attempts to non-existing host:port combinations to the host network. This means that they may reach the Telepresence [VIF](../tun-device). Endless recursion would occur if the VIF simply dispatched such attempts back to the cluster.
+
+The telepresence client handles this by serializing all connection attempts to one specific IP:PORT, trapping all subsequent attempts to connect to that IP:PORT until the first attempt has completed. If the first attempt was deemed a success, the trapped attempts are allowed to proceed; if it failed, they fail as well.
+
+## Inbound
+
+The traffic-manager and traffic-agent are mutually responsible for setting up the necessary connection to the workstation when an intercept becomes active. In versions prior to 2.3.2, this was accomplished by the traffic-manager dynamically creating a port that it would pass to the traffic-agent. The traffic-agent would then forward the intercepted connection to that port, and the traffic-manager would forward it to the workstation. This led to problems when integrating with service meshes like Istio, since those dynamic ports needed to be configured. It also imposed an undesired requirement to be able to use mTLS between the traffic-manager and traffic-agent.
+
+In 2.3.2 this changed: the traffic-agent instead creates a tunnel to the traffic-manager using the already existing gRPC API connection, and the traffic-manager then forwards that through another tunnel to the workstation. This is completely invisible to other service meshes and is therefore much easier to configure.
+
+##### Footnotes:

1: A future version of Telepresence will not allow the same workstation to create concurrent intercepts that span multiple namespaces.

+

2: The error message from an attempt to create a service in a bad subnet contains the service subnet. The trick of creating a dummy service is currently the only way to get Kubernetes to expose that subnet.
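To see the trick from footnote 2 in action, you can reproduce the error by hand. A minimal sketch, assuming `kubectl` access to the cluster and that `1.1.1.1` lies outside the service subnet; the exact error wording varies between Kubernetes versions:

```console
$ # 1.1.1.1 is assumed to lie outside the service subnet; error wording varies
$ kubectl create service clusterip dummy --clusterip=1.1.1.1
The Service "dummy" is invalid: spec.clusterIPs: Invalid value: []string{"1.1.1.1"}:
failed to allocate IP 1.1.1.1: provided IP is not in the valid range.
The range of valid IPs is 10.96.0.0/12
```

The final line reveals the service subnet (here `10.96.0.0/12`), which is the information the traffic-manager extracts.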

diff --git a/docs/telepresence/pre-release/reference/tun-device.md b/docs/telepresence/pre-release/reference/tun-device.md
new file mode 100644
index 000000000..4410f6f3c
--- /dev/null
+++ b/docs/telepresence/pre-release/reference/tun-device.md
@@ -0,0 +1,27 @@
+# Networking through Virtual Network Interface
+
+The Telepresence daemon process creates a Virtual Network Interface (VIF) when Telepresence connects to the cluster. The VIF ensures that the cluster's subnets are available to the workstation. It also intercepts DNS requests and forwards them to the traffic-manager, which in turn forwards them to intercepted agents, if any, or performs a host lookup by itself.
+
+### TUN-Device
+The VIF is a TUN-device, which means that it communicates with the workstation in terms of L3 IP packets. The router will recognize UDP and TCP packets and tunnel their payload to the traffic-manager via its encrypted gRPC API. The traffic-manager will then establish corresponding connections in the cluster. All protocol negotiation takes place in the client, because the VIF takes care of the L3 to L4 translation (i.e. the tunnel is L4, not L3).
+
+## Gains when using the VIF
+
+### Both TCP and UDP
+The TUN-device is capable of routing both TCP and UDP for outbound traffic. Earlier versions of Telepresence would only allow TCP. Future enhancements might be to also route inbound UDP, and perhaps a selection of ICMP packets (to allow for things like `ping`).
+
+### No SSH required
+
+The VIF approach is somewhat similar to using `sshuttle`, but without
+any requirements for extra software, configuration, or connections.
+Using the VIF means that only one single connection needs to be
+forwarded through the Kubernetes apiserver (à la `kubectl
+port-forward`), using only one single port. There is no need for
+`ssh` in the client, nor for `sshd` in the traffic-manager. This also
+means that the traffic-manager container can run as the default user.
+
+#### sshfs without ssh encryption
+When a pod is intercepted, and its volumes are mounted on the local machine, this mount is performed by [sshfs](https://github.com/libfuse/sshfs). Telepresence will run `sshfs -o slave`, which means that instead of using `ssh` to establish an encrypted connection to an `sshd`, which in turn terminates the encryption and forwards to `sftp`, the `sshfs` will talk `sftp` directly on its `stdin/stdout` pair. Telepresence tunnels that directly to an `sftp` in the agent using its already encrypted gRPC API. As a result, no `sshd` is needed in the client or in the traffic-agent, and the traffic-agent container can run as the default user.
+
+### No Firewall rules
+With the VIF in place, there's no longer any need to tamper with firewalls in order to establish IP routes. The VIF makes the cluster subnets available during connect, and the kernel performs the routing automatically. When the session ends, the kernel is also responsible for cleaning up.
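+As an aside, the routes the kernel installs for the VIF can be inspected with ordinary tools once Telepresence is connected. A minimal sketch on Linux, where the Telepresence TUN-device typically appears as `tel0`; the device name and the subnets shown will vary with your platform and cluster:
+
+```console
+$ # device name (tel0) and subnets are illustrative
+$ ip route show dev tel0
+10.96.0.0/12 scope link
+10.244.0.0/16 scope link
+```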
diff --git a/docs/telepresence/pre-release/reference/volume.md b/docs/telepresence/pre-release/reference/volume.md
new file mode 100644
index 000000000..82df9cafa
--- /dev/null
+++ b/docs/telepresence/pre-release/reference/volume.md
@@ -0,0 +1,36 @@
+# Volume mounts
+
+import Alert from '@material-ui/lab/Alert';
+
+Telepresence supports locally mounting the volumes that are mounted to your Pods. You can specify a command to run when starting the intercept; this could be a subshell or a local server such as Python or Node.
+
+```
+telepresence intercept <service_name> --port <port> --mount=/tmp/ -- /bin/bash
+```
+
+In this case, Telepresence creates the intercept, mounts the Pod's volumes locally at `/tmp`, and starts a Bash subshell.
+
+Telepresence can set a random mount point for you by using `--mount=true` instead; you can then find the mount point in the output of `telepresence list` or in the `$TELEPRESENCE_ROOT` variable.
+
+```
+$ telepresence intercept <service_name> --port <port> --mount=true -- /bin/bash
+Using Deployment <name>
+intercepted
+    Intercept name    : <name>
+    State             : ACTIVE
+    Workload kind     : Deployment
+    Destination       : 127.0.0.1:<port>
+    Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+    Intercepting      : all TCP connections
+
+bash-3.2$ echo $TELEPRESENCE_ROOT
+/var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-988349784
+```
+
+`--mount=true` is the default if a mount option is not specified; use `--mount=false` to disable mounting volumes.
+
+With either method, the code you run locally, whether from the subshell or from the intercept command, will need to prefix paths with the `$TELEPRESENCE_ROOT` environment variable to utilize the mounted volumes.
+
+For example, Kubernetes mounts secrets to `/var/run/secrets/kubernetes.io` (even if no `mountPoint` for it exists in the Pod spec). Once mounted, to access these you would need to change your code to use `$TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io`.
+
+If using `--mount=true` without a command, you can use either of the environment variable flags (such as `--env-file` or `--env-json`) to retrieve the variable.
diff --git a/docs/telepresence/pre-release/reference/vpn.md b/docs/telepresence/pre-release/reference/vpn.md
new file mode 100644
index 000000000..cb3f8acf2
--- /dev/null
+++ b/docs/telepresence/pre-release/reference/vpn.md
@@ -0,0 +1,157 @@
+
+ +# Telepresence and VPNs + +## The test-vpn command + +You can make use of the `telepresence test-vpn` command to diagnose issues +with your VPN setup. +This guides you through a series of steps to figure out if there are +conflicts between your VPN configuration and telepresence. + +### Prerequisites + +Before running `telepresence test-vpn` you should ensure that your VPN is +in split-tunnel mode. +This means that only traffic that _must_ pass through the VPN is directed +through it; otherwise, the test results may be inaccurate. + +You may need to configure this on both the client and server sides. +Client-side, taking the Tunnelblick client as an example, you must ensure that +the `Route all IPv4 traffic through the VPN` tickbox is not enabled: + + + +Server-side, taking AWS' ClientVPN as an example, you simply have to enable +split-tunnel mode: + + + +In AWS, this setting can be toggled without reprovisioning the VPN. Other cloud providers may work differently. + +### Testing the VPN configuration + +To run it, enter: + +```console +$ telepresence test-vpn +``` + +The test-vpn tool begins by asking you to disconnect from your VPN; ensure you are disconnected then +press enter: + +``` +Telepresence Root Daemon is already stopped +Telepresence User Daemon is already stopped +Please disconnect from your VPN now and hit enter once you're disconnected... +``` + +Once it's gathered information about your network configuration without an active connection, +it will ask you to connect to the VPN: + +``` +Please connect to your VPN now and hit enter once you're connected... +``` + +It will then connect to the cluster: + + +``` +Launching Telepresence Root Daemon +Launching Telepresence User Daemon +Connected to context arn:aws:eks:us-east-1:914373874199:cluster/josec-tp-test-vpn-cluster (https://07C63820C58A0426296DAEFC73AED10C.gr7.us-east-1.eks.amazonaws.com) +Telepresence Root Daemon quitting... done +Telepresence User Daemon quitting... done +``` + +And show you the results of the test: + +``` +---------- Test Results: +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve: + * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN + * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list +✅ svc subnet 10.19.0.0/16 is clear of VPN + +Please see https://www.telepresence.io/docs/latest/reference/vpn for more info on these corrective actions, as well as examples + +Still having issues? Please create a new github issue at https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md + Please make sure to add the following to your issue: + * Run `telepresence loglevel debug`, try to connect, then run `telepresence gather_logs`. It will produce a zipfile that you should attach to the issue. + * Which VPN client are you using? + * Which VPN server are you using? + * How is your VPN pushing DNS configuration? It may be useful to add the contents of /etc/resolv.conf +``` + +#### Interpreting test results + +##### Case 1: VPN masked by cluster + +In an instance where the VPN is masked by the cluster, the test-vpn tool informs you that a pod or service subnet is masking a CIDR that the VPN +routes: + +``` +❌ pod subnet 10.0.0.0/19 is masking VPN-routed CIDR 10.0.0.0/16. 
This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
+  * Move pod subnet 10.0.0.0/19 to a subnet not mapped by the VPN
+  * If this is not possible, ensure that any hosts in CIDR 10.0.0.0/16 are placed in the never-proxy list
+```
+
+This means that all VPN hosts within `10.0.0.0/19` will be rendered inaccessible while telepresence is connected.
+
+The ideal resolution in this case is to move the pods to a different subnet. This is possible, for example, in Amazon EKS by configuring a [new CIDR range](https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/) for the pods. In this case, configuring the pods to be located in `10.1.0.0/19` clears the VPN and allows you to reach hosts inside the VPC's `10.0.0.0/19`.
+
+However, it is not always possible to move the pods to a different subnet. In these cases, you should use the [never-proxy](../config#neverproxy) configuration to prevent certain hosts from being masked. This might be particularly important for DNS resolution. With AWS ClientVPN, it is common to set the `.2` host as a DNS server (e.g. `10.0.0.2` in this case):
+
+If this is the case for your VPN, you should place the DNS server in the never-proxy list for your cluster. In your kubeconfig file, add a `telepresence` extension like so:
+
+```yaml
+- cluster:
+    server: https://127.0.0.1
+    extensions:
+      - name: telepresence.io
+        extension:
+          never-proxy:
+            - 10.0.0.2/32
+```
+
+##### Case 2: Cluster masked by VPN
+
+In an instance where the Cluster is masked by the VPN, the test-vpn tool informs you that a pod or service subnet is being masked by a CIDR that the VPN routes:
+
+```
+❌ pod subnet 10.0.0.0/8 being masked by VPN-routed CIDR 10.0.0.0/16. This usually means that Telepresence will not be able to connect to your cluster. To resolve:
+  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
+  * If this is not possible, consider shrinking the mask of the 10.0.0.0/16 CIDR (e.g. from /16 to /8), or disabling split-tunneling
+```
+
+Typically this means that pods within `10.0.0.0/16` are not accessible while the VPN is connected.
+
+As with the first case, the ideal resolution is to move the pods away, but this may not always be possible. In that case, your best bet is to attempt to shrink the VPN's CIDR (that is, make it route more hosts) to make Telepresence's routes win by virtue of specificity. One easy way to do this may be by disabling split tunneling (see the [prerequisites](#prerequisites) section for more on split-tunneling).
+
+Note that once you fix this, you may find yourself landing again in [Case 1](#case-1-vpn-masked-by-cluster), and may need to use never-proxy rules to whitelist hosts in the VPN:
+
+```
+❌ pod subnet 10.0.0.0/8 is masking VPN-routed CIDR 0.0.0.0/1. This usually means Telepresence will be able to connect to your cluster, but hosts on your VPN may be inaccessible while telepresence is connected; to resolve:
+  * Move pod subnet 10.0.0.0/8 to a subnet not mapped by the VPN
+  * If this is not possible, ensure that any hosts in CIDR 0.0.0.0/1 are placed in the never-proxy list
+```
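+To see concretely why one subnet masks another, it can help to print the two address ranges side by side. A small sketch using the common `ipcalc` utility (assuming it is installed; output format differs between ipcalc implementations):
+
+```console
+$ # output format differs between ipcalc implementations
+$ ipcalc 10.0.0.0/16 | grep -E 'HostMin|HostMax'
+HostMin:   10.0.0.1
+HostMax:   10.0.255.254
+$ ipcalc 10.0.0.0/19 | grep -E 'HostMin|HostMax'
+HostMin:   10.0.0.1
+HostMax:   10.0.31.254
+```
+
+Every address in `10.0.0.0/19` also falls within `10.0.0.0/16`, so the more specific `/19` route wins, which is exactly the masking that the tool reports.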
diff --git a/docs/telepresence/pre-release/release-notes/no-ssh.png b/docs/telepresence/pre-release/release-notes/no-ssh.png new file mode 100644 index 000000000..025f20ab7 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/no-ssh.png differ diff --git a/docs/telepresence/pre-release/release-notes/run-tp-in-docker.png b/docs/telepresence/pre-release/release-notes/run-tp-in-docker.png new file mode 100644 index 000000000..53b66a9b2 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/run-tp-in-docker.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.2.png b/docs/telepresence/pre-release/release-notes/telepresence-2.2.png new file mode 100644 index 000000000..43abc7e89 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.2.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-homebrew.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-homebrew.png new file mode 100644 index 000000000..e203a9750 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-homebrew.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-loglevels.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-loglevels.png new file mode 100644 index 000000000..3d628c54a Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.0-loglevels.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-alsoProxy.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-alsoProxy.png new file mode 100644 index 000000000..4052b927b Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-alsoProxy.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-brew.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-brew.png new file mode 100644 index 000000000..2af424904 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-brew.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-dns.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-dns.png new file mode 100644 index 000000000..c6335e7a7 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-dns.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-inject.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-inject.png new file mode 100644 index 000000000..aea1003ef Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-inject.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-large-file-transfer.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-large-file-transfer.png new file mode 100644 index 000000000..48ceb3817 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-large-file-transfer.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-trafficmanagerconnect.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-trafficmanagerconnect.png new file mode 100644 index 000000000..78128c174 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.1-trafficmanagerconnect.png differ diff --git 
a/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-subnets.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-subnets.png new file mode 100644 index 000000000..778c722ab Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-subnets.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-svcport-annotation.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-svcport-annotation.png new file mode 100644 index 000000000..1e1e92408 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.2-svcport-annotation.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-helm.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-helm.png new file mode 100644 index 000000000..7b81480a7 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-helm.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-namespace-config.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-namespace-config.png new file mode 100644 index 000000000..7864d3a30 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-namespace-config.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-to-pod.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-to-pod.png new file mode 100644 index 000000000..aa7be3f63 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.3-to-pod.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-improved-error.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-improved-error.png new file mode 100644 index 000000000..fa8a12986 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-improved-error.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-ip-error.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-ip-error.png new file mode 100644 index 000000000..1d37380c7 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.4-ip-error.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-agent-config.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-agent-config.png new file mode 100644 index 000000000..67d6d3e8b Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-agent-config.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-grpc-max-receive-size.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-grpc-max-receive-size.png new file mode 100644 index 000000000..32939f9dd Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-grpc-max-receive-size.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-skipLogin.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-skipLogin.png new file mode 100644 index 000000000..bf79c1910 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-skipLogin.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png 
b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png new file mode 100644 index 000000000..d29a05ad7 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.5-traffic-manager-namespaces.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-keydesc.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-keydesc.png new file mode 100644 index 000000000..9bffe5ccb Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-keydesc.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-newkey.png b/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-newkey.png new file mode 100644 index 000000000..c7d47c42d Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.3.7-newkey.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-cloud-messages.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-cloud-messages.png new file mode 100644 index 000000000..ffd045ae0 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-cloud-messages.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-windows.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-windows.png new file mode 100644 index 000000000..d27ba254a Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.0-windows.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.1-systema-vars.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.1-systema-vars.png new file mode 100644 index 000000000..c098b439f Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.1-systema-vars.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-actions.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-actions.png new file mode 100644 index 000000000..6d849ac21 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-actions.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-intercept-config.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-intercept-config.png new file mode 100644 index 000000000..e3f1136ac Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.10-intercept-config.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.4-gather-logs.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.4-gather-logs.png new file mode 100644 index 000000000..7db541735 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.4-gather-logs.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-logs-anonymize.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-logs-anonymize.png new file mode 100644 index 000000000..edd01fde4 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-logs-anonymize.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-pod-yaml.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-pod-yaml.png new file mode 100644 index 000000000..3f565c4f8 Binary files /dev/null and 
b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-pod-yaml.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-preview-url-questions.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-preview-url-questions.png new file mode 100644 index 000000000..1823aaa14 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.5-preview-url-questions.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.6-help-text.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.6-help-text.png new file mode 100644 index 000000000..aab9178ad Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.6-help-text.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-health-check.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-health-check.png new file mode 100644 index 000000000..e10a0b472 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-health-check.png differ diff --git a/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-vpn.png b/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-vpn.png new file mode 100644 index 000000000..fbb215882 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/telepresence-2.4.8-vpn.png differ diff --git a/docs/telepresence/pre-release/release-notes/tunnel.jpg b/docs/telepresence/pre-release/release-notes/tunnel.jpg new file mode 100644 index 000000000..59a0397e6 Binary files /dev/null and b/docs/telepresence/pre-release/release-notes/tunnel.jpg differ diff --git a/docs/telepresence/pre-release/releaseNotes.yml b/docs/telepresence/pre-release/releaseNotes.yml new file mode 100644 index 000000000..b91a78ecd --- /dev/null +++ b/docs/telepresence/pre-release/releaseNotes.yml @@ -0,0 +1,1085 @@ +# This file should be placed in the folder for the version of the +# product that's meant to be documented. A `/release-notes` page will +# be automatically generated and populated at build time. +# +# Note that an entry needs to be added to the `doc-links.yml` file in +# order to surface the release notes in the table of contents. +# +# The YAML in this file should contain: +# +# changelog: An (optional) URL to the CHANGELOG for the product. +# items: An array of releases with the following attributes: +# - version: The (optional) version number of the release, if applicable. +# - date: The date of the release in the format YYYY-MM-DD. +# - notes: An array of noteworthy changes included in the release, each having the following attributes: +# - type: The type of change, one of `bugfix`, `feature`, `security` or `change`. +# - title: A short title of the noteworthy change. +# - body: >- +# Two or three sentences describing the change and why it +# is noteworthy. This is HTML, not plain text or +# markdown. It is handy to use YAML's ">-" feature to +# allow line-wrapping. +# - image: >- +# The URL of an image that visually represents the +# noteworthy change. This path is relative to the +# `release-notes` directory; if this file is +# `FOO/releaseNotes.yml`, then the image paths are +# relative to `FOO/release-notes/`. +# - docs: The path to the documentation page where additional information can be found. 
+
+docTitle: Telepresence Release Notes
+docDescription: >-
+  Release notes for Telepresence by Ambassador Labs, a CNCF project
+  that enables developers to iterate rapidly on Kubernetes
+  microservices by arming them with infinite-scale development
+  environments, access to instantaneous feedback loops, and highly
+  customizable development environments.
+
+changelog: https://github.com/telepresenceio/telepresence/blob/$branch$/CHANGELOG.md
+
+items:
+  - version: 2.4.11
+    date: "2022-02-10"
+    notes:
+      - type: change
+        title: Add additional logging to troubleshoot intermittent issues with intercepts
+        body: >-
+          We've noticed some issues with intercepts in v2.4.10, so we are releasing a version
+          with enhanced logging to help debug and fix the issue.
+  - version: 2.4.10
+    date: "2022-01-13"
+    notes:
+      - type: feature
+        title: Application Protocol Strategy
+        body: >-
+          The strategy used when selecting the application protocol for personal intercepts can now be configured using
+          the intercept.appProtocolStrategy in the config.yml file.
+        docs: reference/config/#intercept
+        image: telepresence-2.4.10-intercept-config.png
+      - type: feature
+        title: Helm value for the Application Protocol Strategy
+        body: >-
+          The strategy when selecting the application protocol for personal intercepts in agents injected by the
+          mutating webhook can now be configured using the agentInjector.appProtocolStrategy in the Helm chart.
+        docs: install/helm
+      - type: feature
+        title: New --http-plaintext option
+        body: >-
+          The flag --http-plaintext can be used to ensure that an intercept uses plaintext http or grpc when
+          communicating with the workstation process.
+        docs: reference/intercepts/#tls
+      - type: feature
+        title: Configure the default intercept port
+        body: >-
+          The port used by default in the telepresence intercept command (8080) can now be changed by setting
+          the intercept.defaultPort in the config.yml file.
+        docs: reference/config/#intercept
+      - type: change
+        title: Telepresence CI now uses Github Actions
+        body: >-
+          Telepresence now uses Github Actions for doing unit and integration testing. It is
+          now easier for contributors to run tests on PRs since maintainers can add an
+          "ok to test" label to PRs (including from forks) to run integration tests.
+        docs: https://github.com/telepresenceio/telepresence/actions
+        image: telepresence-2.4.10-actions.png
+      - type: bugfix
+        title: Check conditions before asking questions
+        body: >-
+          The user will not be asked to log in or add ingress information when creating an intercept until a check has been
+          made that the intercept is possible.
+        docs: reference/intercepts/
+      - type: bugfix
+        title: Fix invalid log statement
+        body: >-
+          Telepresence will no longer log invalid: "unhandled connection control message: code DIAL_OK" errors.
+      - type: bugfix
+        title: Log errors from sshfs/sftp
+        body: >-
+          Output to stderr from the traffic-agent's sftp and the client's sshfs processes
+          is properly logged as errors.
+      - type: bugfix
+        title: Don't use Windows path separators in workload pod template
+        body: >-
+          The auto installer will no longer emit backslash separators for the /tel-app-mounts paths in the
+          traffic-agent container spec when running on Windows.
+  - version: 2.4.9
+    date: "2021-12-09"
+    notes:
+      - type: bugfix
+        title: Helm upgrade nil pointer error
+        body: >-
+          A helm upgrade using the --reuse-values flag no longer fails on a "nil pointer" error caused by a nil
+          telepresenceAPI value.
+        docs: install/helm#upgrading-the-traffic-manager
+  - version: 2.4.8
+    date: "2021-12-03"
+    notes:
+      - type: feature
+        title: VPN diagnostics tool
+        body: >-
+          There is a new subcommand, test-vpn, that can be used to diagnose connectivity issues with a VPN.
+          See the VPN docs for more information on how to use it.
+        docs: reference/vpn
+        image: telepresence-2.4.8-vpn.png
+
+      - type: feature
+        title: RESTful API service
+        body: >-
+          A RESTful service was added to Telepresence, both locally to the client and to the traffic-agent, to
+          help determine whether messages with a given set of headers should be consumed from a message queue where
+          the intercept headers are added to the messages.
+        docs: reference/restapi
+        image: telepresence-2.4.8-health-check.png
+
+      - type: change
+        title: TELEPRESENCE_LOGIN_CLIENT_ID env variable no longer used
+        body: >-
+          You could previously configure this value, but there was no reason to change it, so the value
+          was removed.
+
+      - type: bugfix
+        title: Tunneled network connections behave more like ordinary TCP connections
+        body: >-
+          When using Telepresence with an external cloud provider for extensions, those tunneled
+          connections now behave more like TCP connections, especially when it comes to timeouts.
+          We've also added increased testing around these types of connections.
+  - version: 2.4.7
+    date: "2021-11-24"
+    notes:
+      - type: feature
+        title: Injector service-name annotation
+        body: >-
+          The agent injector now supports a new annotation, telepresence.getambassador.io/inject-service-name, that can be used to set the name of the service to be intercepted.
+          This will help disambiguate which service to intercept when a workload is exposed by multiple services, as can happen with Argo Rollouts.
+        docs: reference/cluster-config#service-name-annotation
+      - type: feature
+        title: Skip the Ingress Dialogue
+        body: >-
+          You can now skip the ingress dialogue by setting the ingress parameters with the corresponding flags.
+        docs: reference/intercepts#skipping-the-ingress-dialogue
+      - type: feature
+        title: Never proxy subnets
+        body: >-
+          The kubeconfig extensions now support a never-proxy argument,
+          analogous to also-proxy, that defines a set of subnets that
+          will never be proxied via telepresence.
+        docs: reference/config#neverproxy
+      - type: change
+        title: Daemon versions check
+        body: >-
+          Telepresence now checks the versions of the client and the daemons and asks the user to quit and restart if they don't match.
+      - type: change
+        title: No explicit DNS flushes
+        body: >-
+          Telepresence DNS now uses a very short TTL instead of explicitly flushing DNS by killing the mDNSResponder or doing resolvectl flush-caches
+        docs: reference/routing#dns-caching
+      - type: bugfix
+        title: Legacy flags now work with global flags
+        body: >-
+          Legacy flags such as `--swap-deployment` can now be used together with global flags.
+      - type: bugfix
+        title: Outbound connection closing
+        body: >-
+          Outbound connections are now properly closed when the peer closes.
+      - type: bugfix
+        title: Prevent DNS recursion
+        body: >-
+          The DNS-resolver will trap recursive resolution attempts (may happen when the cluster runs in a docker-container on the client).
+        docs: reference/routing#dns-recursion
+      - type: bugfix
+        title: Prevent network recursion
+        body: >-
+          The TUN-device will trap failed connection attempts that result in recursive calls back into the TUN-device (may happen when the
+          cluster runs in a docker-container on the client).
+ docs: reference/routing#connect-recursion + - type: bugfix + title: Traffic Manager deadlock fix + body: >- + The Traffic Manager no longer runs a risk of entering a deadlock when a new Traffic agent arrives. + - type: bugfix + title: webhookRegistry config propagation + body: >- + The configured webhookRegistry is now propagated to the webhook installer even if no webhookAgentImage has been set. + docs: reference/config#images + - type: bugfix + title: Login refreshes expired tokens + body: >- + When a user's token has expired, telepresence login + will prompt the user to log in again to get a new token. Previously, + the user had to telepresence quit and telepresence logout + to get a new token. + docs: https://github.com/telepresenceio/telepresence/issues/2062 + - version: 2.4.6 + date: "2021-11-02" + notes: + - type: feature + title: Manually injecting Traffic Agent + body: >- + Telepresence now supports manually injecting the traffic-agent YAML into workload manifests. + Use the genyaml command to create the sidecar YAML, then add the telepresence.getambassador.io/manually-injected: "true" annotation to your pods to allow Telepresence to intercept them. + docs: reference/intercepts/manual-agent + + - type: feature + title: Telepresence CLI released for Apple silicon + body: >- + Telepresence is now built and released for Apple silicon. + docs: install/?os=macos + + - type: change + title: Telepresence help text now links to telepresence.io + body: >- + We now include a link to our documentation when you run telepresence --help. This will make it easier + for users to find this page whether they acquire Telepresence through Brew or some other mechanism. + image: telepresence-2.4.6-help-text.png + + - type: bugfix + title: Fixed bug when API server is inside CIDR range of pods/services + body: >- + If the API server for your kubernetes cluster had an IP that fell within the + subnet generated from pods/services in a kubernetes cluster, it would proxy traffic + to the API server which would result in hanging or a failed connection. We now ensure + that the API server is explicitly not proxied. + - version: 2.4.5 + date: "2021-10-15" + notes: + - type: feature + title: Get pod yaml with gather-logs command + body: >- + Adding the flag --get-pod-yaml to your request will get the + pod yaml manifest for all kubernetes components you are getting logs for + ( traffic-manager and/or pods containing a + traffic-agent container). This flag is set to false + by default. + docs: reference/client + image: telepresence-2.4.5-pod-yaml.png + + - type: feature + title: Anonymize pod name + namespace when using gather-logs command + body: >- + Adding the flag --anonymize to your command will + anonymize your pod names + namespaces in the output file. We replace the + sensitive names with simple names (e.g. pod-1, namespace-2) to maintain + relationships between the objects without exposing the real names of your + objects. This flag is set to false by default. + docs: reference/client + image: telepresence-2.4.5-logs-anonymize.png + + - type: feature + title: Added context and defaults to ingress questions when creating a preview URL + body: >- + Previously, we referred to OSI model layers when asking these questions, but this + terminology is not commonly used. The questions now provide a clearer context for the user, along with a default answer as an example. 
+        docs: howtos/preview-urls
+        image: telepresence-2.4.5-preview-url-questions.png
+
+      - type: feature
+        title: Support for intercepting headless services
+        body: >-
+          Intercepting headless services is now officially supported. You can request a
+          headless service on whatever port it exposes and get a response from the
+          intercept. This leverages the same approach as intercepting numeric ports when
+          using the mutating webhook injector, and mainly requires the initContainer
+          to have NET_ADMIN capabilities.
+        docs: reference/intercepts/#intercepting-headless-services
+
+      - type: change
+        title: Use one tunnel per connection instead of multiplexing into one tunnel
+        body: >-
+          We have changed Telepresence so that it uses one tunnel per connection instead
+          of multiplexing all connections into one tunnel. This will provide substantial
+          performance improvements. Clients will still be backwards compatible with older
+          managers that only support multiplexing.
+
+      - type: bugfix
+        title: Added checks for Telepresence kubernetes compatibility
+        body: >-
+          Telepresence currently works with Kubernetes server versions 1.17.0
+          and higher. We have added logs in the connector and traffic-manager
+          to let users know when they are using Telepresence with a cluster it doesn't support.
+        docs: reference/cluster-config
+
+      - type: bugfix
+        title: Traffic Agent security context is now only added when necessary
+        body: >-
+          When creating an intercept, Telepresence will now only set the traffic agent's GID
+          when strictly necessary (i.e. when using headless services or numeric ports). This mitigates
+          an issue on openshift clusters where the traffic agent can fail to be created due to
+          openshift's security policies banning arbitrary GIDs.
+
+  - version: 2.4.4
+    date: "2021-09-27"
+    notes:
+      - type: feature
+        title: Numeric ports in agent injector
+        body: >-
+          The agent injector now supports injecting Traffic Agents into pods that have unnamed ports.
+        docs: reference/cluster-config/#note-on-numeric-ports
+
+      - type: feature
+        title: New subcommand to gather logs and export into zip file
+        body: >-
+          Telepresence has logs for various components (the
+          traffic-manager, traffic-agents, the root and
+          user daemons), which are integral for understanding and debugging
+          Telepresence behavior. We have added the telepresence
+          gather-logs command to make it simple to compile logs for
+          all Telepresence components and export them in a zip file that can
+          be shared with others and/or included in a github issue. For more
+          information on usage, run telepresence gather-logs --help.
+        docs: reference/client
+        image: telepresence-2.4.4-gather-logs.png
+
+      - type: feature
+        title: Pod CIDR strategy is configurable in Helm chart
+        body: >-
+          Telepresence now enables you to directly configure how to get
+          pod CIDRs when deploying Telepresence with the Helm chart.
+          The default behavior remains the same. We've also introduced
+          the ability to explicitly set what the pod CIDRs should be.
+        docs: install/helm
+
+      - type: bugfix
+        title: Compute pod CIDRs more efficiently
+        body: >-
+          When computing subnets using the pod CIDRs, the traffic-manager
+          now uses fewer CPU cycles.
+        docs: reference/routing/#subnets
+
+      - type: bugfix
+        title: Prevent busy loop in traffic-manager
+        body: >-
+          In some circumstances, the traffic-manager's CPU
+          would max out and get pinned at its limit. This required a
+          shutdown or pod restart to fix. We've added some fixes
+          to prevent the traffic-manager from getting into this state.
+
+      - type: bugfix
+        title: Added a fixed buffer size to TUN-device
+        body: >-
+          The TUN-device now has a max buffer size of 64K. This prevents the
+          buffer from growing limitlessly until it receives a PSH, which could
+          be a blocking operation when receiving lots of TCP-packets.
+        docs: reference/tun-device
+
+      - type: bugfix
+        title: Fix hanging user daemon
+        body: >-
+          When Telepresence encountered an issue connecting to the cluster or
+          the root daemon, it could hang indefinitely. It will now error correctly
+          when it encounters that situation.
+
+      - type: bugfix
+        title: Improved proprietary agent connectivity
+        body: >-
+          To determine whether the environment cluster is air-gapped, the
+          proprietary agent attempts to connect to the cloud during startup.
+          To deal with a possible initial failure, the agent backs off
+          and retries the connection with an increasing backoff duration.
+
+      - type: bugfix
+        title: Telepresence correctly reports intercept port conflict
+        body: >-
+          When creating a second intercept targeting the same local port,
+          it now gives the user an informative error message. Additionally,
+          it tells them which intercept is currently using that port to make
+          it easier to remedy.
+
+  - version: 2.4.3
+    date: "2021-09-15"
+    notes:
+      - type: feature
+        title: Environment variable TELEPRESENCE_INTERCEPT_ID available in interceptor's environment
+        body: >-
+          When you perform an intercept, we now include a TELEPRESENCE_INTERCEPT_ID environment
+          variable in the environment.
+        docs: reference/environment/#telepresence-environment-variables
+
+      - type: bugfix
+        title: Improved daemon stability
+        body: >-
+          Fixed a timing bug that sometimes caused a "daemon did not start" failure.
+
+      - type: bugfix
+        title: Complete logs for Windows
+        body: >-
+          Crash stack traces and other errors were incorrectly not written to log files. This has
+          been fixed so logs for Windows should be at parity with the ones in MacOS and Linux.
+
+      - type: bugfix
+        title: Log rotation fix for Linux kernel 4.11+
+        body: >-
+          On Linux kernel 4.11 and above, the log file rotation now properly reads the
+          birth-time of the log file. Older kernels continue to use the old behavior
+          of using the change-time in place of the birth-time.
+
+      - type: bugfix
+        title: Improved error messaging
+        body: >-
+          When Telepresence encounters an error, it tells the user where they should look for
+          logs related to the error. We have refined this so that it only tells users to look
+          for errors in the daemon logs for issues that are logged there.
+
+      - type: bugfix
+        title: Stop resolving localhost
+        body: >-
+          When using the overriding DNS resolver, it will no longer apply search paths when
+          resolving localhost, since that should be resolved on the user's machine
+          instead of the cluster.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Variable cluster domain
+        body: >-
+          Previously, the cluster domain was hardcoded to cluster.local. While this
+          is true for many kubernetes clusters, it is not for all of them. Now this value is
+          retrieved from the traffic-manager.
+
+      - type: bugfix
+        title: Improved cleanup of traffic-agents
+        body: >-
+          Telepresence now uninstalls traffic-agents installed via mutating webhook
+          when using telepresence uninstall --everything.
+
+      - type: bugfix
+        title: More large file transfer fixes
+        body: >-
+          Downloading large files during an intercept will no longer cause timeouts and hanging
+          traffic-agents.
+
+      - type: bugfix
+        title: Setting --mount to false when intercepting works as expected
+        body: >-
+          When using --mount=false while performing an intercept, the file system
+          was still mounted. This has been remedied so the intercept behavior respects the
+          flag.
+        docs: reference/volume
+
+      - type: bugfix
+        title: Traffic-manager establishes outbound connections in parallel
+        body: >-
+          Previously, the traffic-manager established outbound connections
+          sequentially. This meant that slow (and failing) Dial calls would
+          block all outbound traffic from the workstation (for up to 30 seconds). We now
+          establish these connections in parallel so that won't occur.
+        docs: reference/routing/#outbound
+
+      - type: bugfix
+        title: Status command reports correct DNS settings
+        body: >-
+          Telepresence status now correctly reports DNS settings for all operating
+          systems, instead of Local IP:nil, Remote IP:nil when they don't exist.
+
+  - version: 2.4.2
+    date: "2021-09-01"
+    notes:
+      - type: feature
+        title: New subcommand to temporarily change log-level
+        body: >-
+          We have added a new telepresence loglevel subcommand that enables users
+          to temporarily change the log-level for the local daemons, the traffic-manager and
+          the traffic-agents. While the logLevels settings from the config will
+          still be used by default, this can be helpful if you are currently experiencing an issue and
+          want to have higher fidelity logs, without doing a telepresence quit and
+          telepresence connect. You can use telepresence loglevel --help to get
+          more information on options for the command.
+        docs: reference/config
+
+      - type: change
+        title: All components have info as the default log-level
+        body: >-
+          We've now set the default for all components of Telepresence (traffic-agent,
+          traffic-manager, local daemons) to use info as the default log-level.
+
+      - type: bugfix
+        title: Updating RBAC in helm chart to fix cluster-id regression
+        body: >-
+          In 2.4.1, we enabled the traffic-manager to get the cluster ID by getting the UID
+          of the default namespace. The helm chart was not updated to give the traffic-manager
+          those permissions, which has since been fixed. This impacted users who use licensed features of
+          the Telepresence extension in an air-gapped environment.
+        docs: reference/cluster-config/#air-gapped-cluster
+
+      - type: bugfix
+        title: Timeouts for Helm actions are now respected
+        body: >-
+          The user-defined timeout for Helm actions wasn't always respected, causing the daemon to hang
+          indefinitely when failing to install the traffic-manager.
+        docs: reference/config#timeouts
+
+  - version: 2.4.1
+    date: "2021-08-30"
+    notes:
+      - type: feature
+        title: External cloud variables are now configurable
+        body: >-
+          We now support configuring the host and port for the cloud in your config.yml. These
+          are used when logging in to utilize features provided by an extension, and are also passed
+          along as environment variables when installing the `traffic-manager`. Additionally, we
+          now run our testsuite with these variables set to localhost to continue to ensure Telepresence
+          is fully functional without depending on an external service. The SYSTEMA_HOST and SYSTEMA_PORT
+          environment variables are no longer used.
+        image: telepresence-2.4.1-systema-vars.png
+        docs: reference/config/#cloud
+
+      - type: feature
+        title: Helm chart can now regenerate certificate used for mutating webhook on-demand
+        body: >-
+          You can now set agentInjector.certificate.regenerate when deploying Telepresence
+          with the Helm chart to automatically regenerate the certificate used by the agent injector webhook.
+        docs: install/helm
+
+      - type: change
+        title: Traffic Manager installed via helm
+        body: >-
+          The traffic-manager is now installed via an embedded version of the Helm chart when telepresence connect is first performed on a cluster.
+          This change is transparent to the user.
+          A new configuration flag, timeouts.helm, sets the timeouts for all helm operations performed by the Telepresence binary.
+        docs: reference/config#timeouts
+
+      - type: change
+        title: traffic-manager gets cluster ID itself instead of via environment variable
+        body: >-
+          The traffic-manager used to get the cluster ID as an environment variable when running
+          telepresence connect or via adding the value in the helm chart. This was
+          clunky, so now the traffic-manager gets the value itself as long as it has permissions
+          to "get" and "list" namespaces (this has been updated in the helm chart).
+        docs: install/helm
+
+      - type: bugfix
+        title: Telepresence now mounts all directories from /var/run/secrets
+        body: >-
+          In the past, we only mounted secret directories in /var/run/secrets/kubernetes.io.
+          We now mount *all* directories in /var/run/secrets, which, for example, includes
+          directories like eks.amazonaws.com used for IRSA tokens.
+        docs: reference/volume
+
+      - type: bugfix
+        title: Max gRPC receive size correctly propagates to all grpc servers
+        body: >-
+          This fixes a bug where the max gRPC receive size was only propagated to some of the
+          grpc servers, causing failures when the message size was over the default.
+        docs: reference/config/#grpc
+
+      - type: bugfix
+        title: Updated our Homebrew packaging to run manually
+        body: >-
+          We made some updates to our script that packages Telepresence for Homebrew so that it
+          can be run manually. This will enable maintainers of Telepresence to run the script manually
+          should we ever need to rollback a release and have latest point to an older version.
+        docs: install/
+
+      - type: bugfix
+        title: Telepresence uses namespace from kubeconfig context on each call
+        body: >-
+          In the past, Telepresence would use whatever namespace was specified in the kubeconfig's current-context
+          for the entirety of the time a user was connected to Telepresence. This would lead to confusing behavior
+          when a user changed the context in their kubeconfig and expected Telepresence to acknowledge that change.
+          Telepresence now will do that and use the namespace designated by the context on each call.
+
+      - type: bugfix
+        title: Idle outbound TCP connections timeout increased to 7200 seconds
+        body: >-
+          Some users were noticing that their intercepts would start failing after 60 seconds.
+          This was because the keepalive time for idle outbound TCP connections was set to 60 seconds, which we have
+          now bumped to 7200 seconds to match Linux's tcp_keepalive_time default.
+
+      - type: bugfix
+        title: Telepresence will automatically remove a socket upon ungraceful termination
+        body: >-
+          When a Telepresence process terminates ungracefully, it would inform users that "this usually means
+          that the process has terminated ungracefully" and implied that they should remove the socket. We've
+          now made it so Telepresence will automatically attempt to remove the socket upon ungraceful termination.
+
+      - type: bugfix
+        title: Fixed user daemon deadlock
+        body: >-
+          Remedied a situation where the user daemon could hang when a user was logged in.
+
+      - type: bugfix
+        title: Fixed agentImage config setting
+        body: >-
+          The config setting images.agentImage is no longer required to contain the repository, and it
+          will use the value at images.repository.
+        docs: reference/config/#images
+
+  - version: 2.4.0
+    date: "2021-08-04"
+    notes:
+      - type: feature
+        title: Windows Client Developer Preview
+        body: >-
+          There is now a native Windows client for Telepresence that is being released as a Developer Preview.
+          All the same features supported by the MacOS and Linux client are available on Windows.
+        image: telepresence-2.4.0-windows.png
+        docs: install
+
+      - type: feature
+        title: CLI raises helpful messages from Ambassador Cloud
+        body: >-
+          Telepresence can now receive messages from Ambassador Cloud and raise
+          them to the user when they perform certain commands. This enables us
+          to send you messages that may enhance your Telepresence experience when
+          using certain commands. Frequency of messages can be configured in your
+          config.yml.
+        image: telepresence-2.4.0-cloud-messages.png
+        docs: reference/config#cloud
+
+      - type: bugfix
+        title: Improved stability of systemd-resolved-based DNS
+        body: >-
+          When initializing the systemd-resolved-based DNS, the routing domain
+          is set to improve stability in non-standard configurations. This also enables the
+          overriding resolver to do a proper takeover once the DNS service ends.
+        docs: reference/routing#linux-systemd-resolved-resolver
+
+      - type: bugfix
+        title: Fixed an edge case when intercepting a container with multiple ports
+        body: >-
+          When specifying a port of a container to intercept, if there was a container in the
+          pod without ports, it was automatically selected. This has been fixed so we'll only
+          choose the container with "no ports" if there's no container that explicitly matches
+          the port used in your intercept.
+        docs: reference/intercepts/#creating-an-intercept-when-a-service-has-multiple-ports
+
+      - type: bugfix
+        title: $(NAME) references in agent's environments are now interpolated correctly
+        body: >-
+          If you had an environment variable $(NAME) in your workload that referenced another, intercepts
+          would not correctly interpolate $(NAME). This has been fixed and works automatically.
+
+      - type: bugfix
+        title: Telepresence no longer prints INFO message when there is no config.yml
+        body: >-
+          Fixed a regression that printed an INFO message to the terminal when there wasn't a
+          config.yml present. The config is optional, so this message has been
+          removed.
+        docs: reference/config
+
+      - type: bugfix
+        title: Telepresence no longer panics when using --http-match
+        body: >-
+          Fixed a bug where Telepresence would panic if the value passed to --http-match
+          didn't contain an equal sign. The correct syntax is in the --help
+          string and looks like --http-match=HTTP2_HEADER=REGEX
+        docs: reference/intercepts/#intercept-behavior-when-logged-in-to-ambassador-cloud
+
+      - type: bugfix
+        title: Improved subnet updates
+        body: >-
+          The `traffic-manager` used to update subnets whenever the `Nodes` or `Pods` changed, even if
+          the underlying subnet hadn't changed, which created a lot of unnecessary traffic between the
+          client and the `traffic-manager`. This has been fixed so we only send updates when the subnets
+          themselves actually change.
+ docs: reference/routing/#subnets + + - version: 2.3.7 + date: "2021-07-23" + notes: + - type: feature + title: Also-proxy in telepresence status + body: >- + An also-proxy entry in the Kubernetes cluster config will + show up in the output of the telepresence status command. + docs: reference/config + + - type: feature + title: Non-interactive telepresence login + body: >- + telepresence login now has an + --apikey=KEY flag that allows for + non-interactive logins. This is useful for headless + environments where launching a web-browser is impossible, + such as cloud shells, Docker containers, or CI. + image: telepresence-2.3.7-newkey.png + docs: reference/client/login/ + + - type: bugfix + title: Mutating webhook injector correctly hides named ports for probes. + body: >- + The mutating webhook injector has been fixed to correctly rename named ports for liveness and readiness probes + docs: reference/cluster-config + + - type: bugfix + title: telepresence current-cluster-id crash fixed + body: >- + Fixed a regression introduced in 2.3.5 that caused `telepresence current-cluster-id` + to crash. + docs: reference/cluster-config + + - type: bugfix + title: Better UX around intercepts with no local process running + body: >- + Requests would hang indefinitely when initiating an intercept before you + had a local process running. This has been fixed and will result in an + Empty reply from server until you start a local process. + docs: reference/intercepts + + - type: bugfix + title: API keys no longer show as "no description" + body: >- + New API keys generated internally for communication with + Ambassador Cloud no longer show up as "no description" in + the Ambassador Cloud web UI. Existing API keys generated by + older versions of Telepresence will still show up this way. + image: telepresence-2.3.7-keydesc.png + + - type: bugfix + title: Fix corruption of user-info.json + body: >- + Fixed a race condition that logging in and logging out + rapidly could cause memory corruption or corruption of the + user-info.json cache file used when + authenticating with Ambassador Cloud. + + - type: bugfix + title: Improved DNS resolver for systemd-resolved + body: + Telepresence's systemd-resolved-based DNS resolver is now more + stable and in case it fails to initialize, the overriding resolver + will no longer cause general DNS lookup failures when telepresence defaults to + using it. + docs: reference/routing#linux-systemd-resolved-resolver + + - type: bugfix + title: Faster telepresence list command + body: + The performance of telepresence list has been increased + significantly by reducing the number of calls the command makes to the cluster. + docs: reference/client + + - version: 2.3.6 + date: "2021-07-20" + notes: + - type: bugfix + title: Fix preview URLs + body: >- + Fixed a regression introduced in 2.3.5 that caused preview + URLs to not work. + + - type: bugfix + title: Fix subnet discovery + body: >- + Fixed a regression introduced in 2.3.5 where the Traffic + Manager's RoleBinding did not correctly appoint + the traffic-manager Role, causing + subnet discovery to not be able to work correctly. + docs: reference/rbac/ + + - type: bugfix + title: Fix root-user configuration loading + body: >- + Fixed a regression introduced in 2.3.5 where the root daemon + did not correctly read the configuration file; ignoring the + user's configured log levels and timeouts. 
+
+  - version: 2.3.5
+    date: "2021-07-15"
+    notes:
+      - type: feature
+        title: traffic-manager in multiple namespaces
+        body: >-
+          We now support installing multiple traffic managers in the same cluster.
+          This allows operators to install deployments of Telepresence that are
+          limited to certain namespaces.
+        image: ./telepresence-2.3.5-traffic-manager-namespaces.png
+        docs: install/helm
+      - type: feature
+        title: No more dependence on kubectl
+        body: >-
+          Telepresence no longer depends on having an external
+          kubectl binary, which might not be present for
+          OpenShift users (who have oc instead of
+          kubectl).
+      - type: feature
+        title: Agent image now configurable
+        body: >-
+          We now support configuring which agent image and registry to use in the
+          config. This enables users whose laptops are in air-gapped environments to
+          create personal intercepts without requiring a login. It also makes it easier
+          for those who are developing on Telepresence to specify which agent image should
+          be used. The env vars TELEPRESENCE_AGENT_IMAGE and TELEPRESENCE_REGISTRY are no longer
+          used.
+        image: ./telepresence-2.3.5-agent-config.png
+        docs: reference/config/#images
+      - type: feature
+        title: Max gRPC receive size now configurable
+        body: >-
+          The default max size of messages received through gRPC (4 MB) is sometimes insufficient. It can now be configured.
+        image: ./telepresence-2.3.5-grpc-max-receive-size.png
+        docs: reference/config/#grpc
+      - type: feature
+        title: CLI can be used in air-gapped environments
+        body: >-
+          While Telepresence will auto-detect whether your cluster is in an air-gapped environment,
+          we've added an option users can set in their config.yml to ensure the CLI acts as if it
+          is in an air-gapped environment. Air-gapped environments require a manually installed
+          license.
+        docs: reference/cluster-config/#air-gapped-cluster
+        image: ./telepresence-2.3.5-skipLogin.png
+  - version: 2.3.4
+    date: "2021-07-09"
+    notes:
+      - type: bugfix
+        title: Improved IP log statements
+        body: >-
+          Some log statements were printing incorrect characters when they should have been IP addresses.
+          This has been resolved to include more accurate and useful logging.
+        docs: reference/config/#log-levels
+        image: ./telepresence-2.3.4-ip-error.png
+      - type: bugfix
+        title: Improved messaging when multiple services match a workload
+        body: >-
+          If multiple services matched a workload when performing an intercept, Telepresence would crash.
+          It now gives the correct error message, instructing the user on how to specify which
+          service the intercept should use.
+        image: ./telepresence-2.3.4-improved-error.png
+        docs: reference/intercepts
+      - type: bugfix
+        title: Traffic-manager creates services in its own namespace to determine subnet
+        body: >-
+          Telepresence will now determine the service subnet by creating a dummy service in its own
+          namespace instead of the default namespace, which was causing RBAC permission issues in
+          some clusters.
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Telepresence connect respects pre-existing clusterrole
+        body: >-
+          When Telepresence connects, if the traffic-manager's desired clusterrole already exists in the
+          cluster, Telepresence will no longer try to update the clusterrole.
+        docs: reference/rbac
+      - type: bugfix
+        title: Helm Chart fixed for clientRbac.namespaced
+        body: >-
+          The Telepresence Helm chart no longer fails when installing with --set clientRbac.namespaced=true.
+        docs: install/helm
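+
+  # Illustrative config.yml sketch, not part of the release notes, combining
+  # the 2.3.5 options above. Key names are assumptions based on the linked
+  # config docs; values are hypothetical.
+  #
+  #   cloud:
+  #     skipLogin: true       # act as if air-gapped; requires a manually installed license
+  #   grpc:
+  #     maxReceiveSize: 10Mi  # raise the 4 MB default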
+
+  - version: 2.3.3
+    date: "2021-07-07"
+    notes:
+      - type: feature
+        title: Traffic Manager Helm Chart
+        body: >-
+          Telepresence now supports installing the Traffic Manager via Helm.
+          This makes it easy for operators to install and configure the
+          server-side components of Telepresence separately from the CLI (which
+          in turn allows for better separation of permissions).
+        image: ./telepresence-2.3.3-helm.png
+        docs: install/helm/
+      - type: feature
+        title: Traffic-manager in custom namespace
+        body: >-
+          As the traffic-manager can now be installed in any
+          namespace via Helm, Telepresence can now be configured to look for the
+          Traffic Manager in a namespace other than ambassador.
+          This can be configured on a per-cluster basis.
+        image: ./telepresence-2.3.3-namespace-config.png
+        docs: reference/config
+      - type: feature
+        title: Intercept --to-pod
+        body: >-
+          telepresence intercept now supports a
+          --to-pod flag that can be used to port-forward sidecars'
+          ports from an intercepted pod.
+        image: ./telepresence-2.3.3-to-pod.png
+        docs: reference/intercepts
+      - type: change
+        title: Change in migration from edgectl
+        body: >-
+          Telepresence no longer automatically shuts down the old
+          api_version=1 edgectl daemon. If migrating
+          from such an old version of edgectl, you must now manually
+          shut down the edgectl daemon before running Telepresence.
+          This was already the case when migrating from the newer
+          api_version=2 edgectl.
+      - type: bugfix
+        title: Fixed error during shutdown
+        body: >-
+          The root daemon no longer terminates when the user daemon disconnects
+          from its gRPC streams, and instead waits to be terminated by the CLI.
+          Previously, this could cause problems with things not being cleaned up correctly.
+      - type: bugfix
+        title: Intercepts will survive deletion of intercepted pod
+        body: >-
+          An intercept will survive deletion of the intercepted pod, provided
+          that another pod is created (or already exists) that can take over.
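+
+  # Illustrative install sketch, not part of the release notes, for the Helm
+  # support above. Repository and chart names are assumptions; see the linked
+  # install/helm docs for the authoritative commands.
+  #
+  #   $ helm repo add datawire https://app.getambassador.io
+  #   $ helm install traffic-manager datawire/telepresence --namespace ambassador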
+
+  - version: 2.3.2
+    date: "2021-06-18"
+    notes:
+      # Headliners
+      - type: feature
+        title: Service Port Annotation
+        body: >-
+          The mutator webhook for injecting traffic-agents now
+          recognizes a
+          telepresence.getambassador.io/inject-service-port
+          annotation to specify which port to intercept, bringing the
+          functionality of the --port flag to users who
+          use the mutator webhook in order to control Telepresence via
+          GitOps.
+        image: ./telepresence-2.3.2-svcport-annotation.png
+        docs: reference/cluster-config#service-port-annotation
+      - type: feature
+        title: Outbound Connections
+        body: >-
+          Outbound connections are now routed through the intercepted
+          Pods, which means that the connections originate from those
+          Pods from the cluster's perspective. This allows service
+          meshes to correctly identify the traffic.
+        docs: reference/routing/#outbound
+      - type: change
+        title: Inbound Connections
+        body: >-
+          Inbound connections from an intercepted agent are now
+          tunneled to the manager over the existing gRPC connection,
+          instead of establishing a new connection to the manager for
+          each inbound connection. This avoids interference from
+          certain service mesh configurations.
+        docs: reference/routing/#inbound
+
+      # RBAC changes
+      - type: change
+        title: Traffic Manager needs new RBAC permissions
+        body: >-
+          The Traffic Manager requires RBAC
+          permissions to list Nodes and Pods, and to create a dummy
+          Service in the manager's namespace.
+        docs: reference/routing/#subnets
+      - type: change
+        title: Reduced developer RBAC requirements
+        body: >-
+          The on-laptop client no longer requires RBAC permissions to list the Nodes
+          in the cluster or to create Services, as that functionality
+          has been moved to the Traffic Manager.
+
+      # Bugfixes
+      - type: bugfix
+        title: Able to detect subnets
+        body: >-
+          Telepresence will now detect the Pod CIDR ranges even if
+          they are not listed in the Nodes.
+        image: ./telepresence-2.3.2-subnets.png
+        docs: reference/routing/#subnets
+      - type: bugfix
+        title: Dynamic IP ranges
+        body: >-
+          The list of cluster subnets that the virtual network
+          interface will route is now configured dynamically and will
+          follow changes in the cluster.
+      - type: bugfix
+        title: No duplicate subnets
+        body: >-
+          Subnets fully covered by other subnets are now pruned
+          internally and thus never superfluously added to the
+          laptop's routing table.
+        docs: reference/routing/#subnets
+      - type: change # not a bugfix, but it only makes sense to mention after the above bugfixes
+        title: Change in default timeout
+        body: >-
+          The trafficManagerAPI timeout default has
+          changed from 5 seconds to 15 seconds, in order to accommodate
+          the extended time it takes for the traffic-manager to do its
+          initial discovery of cluster info as a result of the above
+          bugfixes.
+      - type: bugfix
+        title: Removal of DNS config files on macOS
+        body: >-
+          On macOS, files generated under
+          /etc/resolver/ as the result of using
+          include-suffixes in the cluster config are now
+          properly removed on quit.
+        docs: reference/routing/#macos-resolver
+
+      - type: bugfix
+        title: Large file transfers
+        body: >-
+          Telepresence no longer erroneously terminates connections
+          early when sending a large HTTP response from an intercepted
+          service.
+      - type: bugfix
+        title: Race condition in shutdown
+        body: >-
+          When shutting down the user daemon or root daemon on the
+          laptop, telepresence quit and related commands
+          no longer return early before everything is fully shut down.
+          It can now be counted on that, by the time the command has
+          returned, all of the side effects on the laptop have been
+          cleaned up.
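+
+  # Illustrative pod template sketch, not part of the release notes, for the
+  # Service Port Annotation above (the port value is hypothetical):
+  #
+  #   metadata:
+  #     annotations:
+  #       telepresence.getambassador.io/inject-service-port: "8080"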
+
+  - version: 2.3.1
+    date: "2021-06-14"
+    notes:
+      - title: DNS Resolver Configuration
+        body: "Telepresence now supports per-cluster configuration for custom DNS behavior, which enables users to determine which local and remote resolvers to use and which suffixes should be ignored or included. These can be configured on a per-cluster basis."
+        image: ./telepresence-2.3.1-dns.png
+        docs: reference/config
+        type: feature
+      - title: AlsoProxy Configuration
+        body: "Telepresence now supports also proxying user-specified subnets so that they can access external services that are only accessible to the cluster while connected to Telepresence. These can be configured on a per-cluster basis, and each subnet is added to the TUN device so that requests to IPs within that subnet are routed to the cluster."
+        image: ./telepresence-2.3.1-alsoProxy.png
+        docs: reference/config
+        type: feature
+      - title: Mutating Webhook for Injecting Traffic Agents
+        body: "The Traffic Manager now contains a mutating webhook to automatically add an agent to pods that have the telepresence.getambassador.io/inject-traffic-agent: enabled annotation. This enables Telepresence to work well with GitOps CD platforms that rely on higher-level Kubernetes objects matching what is stored in git. For workloads without the annotation, Telepresence will add the agent the way it has in the past."
+        image: ./telepresence-2.3.1-inject.png
+        docs: reference/rbac
+        type: feature
+      - title: Traffic Manager Connect Timeout
+        body: "The trafficManagerConnect timeout default has changed from 20 seconds to 60 seconds, in order to accommodate the extended time it takes to apply everything needed for the mutator webhook."
+        image: ./telepresence-2.3.1-trafficmanagerconnect.png
+        docs: reference/config
+        type: change
+      - title: Fix for large file transfers
+        body: "Fixed a TUN-device bug where large transfers from services on the cluster would sometimes hang indefinitely."
+        image: ./telepresence-2.3.1-large-file-transfer.png
+        docs: reference/tun-device
+        type: bugfix
+      - title: Brew Formula Changed
+        body: "Now that the Telepresence rewrite is the main version of Telepresence, you can install it via brew like so: brew install datawire/blackbird/telepresence."
+        image: ./telepresence-2.3.1-brew.png
+        docs: install/
+        type: change
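+
+  # Illustrative kubeconfig sketch, not part of the release notes, for the
+  # per-cluster DNS and also-proxy options above. The extension structure is
+  # an assumption based on the linked config docs; values are hypothetical.
+  #
+  #   clusters:
+  #     - name: example
+  #       cluster:
+  #         extensions:
+  #           - name: telepresence.io
+  #             extension:
+  #               dns:
+  #                 include-suffixes: [.cluster.local]
+  #               also-proxy:
+  #                 - 10.11.0.0/16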
+
+  - version: 2.3.0
+    date: "2021-06-01"
+    notes:
+      - title: Brew install Telepresence
+        body: "Telepresence can now be installed via brew on macOS, which makes it easier for users to stay up to date with the latest Telepresence version. To install via brew, you can use the following command: brew install datawire/blackbird/telepresence2."
+        image: ./telepresence-2.3.0-homebrew.png
+        docs: install/
+        type: feature
+      - title: TCP and UDP routing via Virtual Network Interface
+        body: "Telepresence will now perform routing of outbound TCP and UDP traffic via a Virtual Network Interface (VIF). The VIF is a layer 3 TUN device that exists while Telepresence is connected. It makes the subnets in the cluster available to the workstation and also routes DNS requests to the cluster, forwarding them to intercepted pods. This means that pods with custom DNS configuration will work as expected. Prior versions of Telepresence used firewall rules and were only capable of routing TCP."
+        image: ./tunnel.jpg
+        docs: reference/tun-device
+        type: feature
+      - title: SSH is no longer used
+        body: "All traffic between the client and the cluster is now tunneled via the traffic manager gRPC API. This means that Telepresence no longer uses ssh tunnels and that the manager no longer has an sshd installed. Volume mounts are still established using sshfs, but it is now configured to communicate using the SFTP protocol directly, which means that the traffic agent also runs without sshd. A desired side effect of this is that the manager and agent containers no longer need a special user configuration."
+        image: ./no-ssh.png
+        docs: reference/tun-device/#no-ssh-required
+        type: change
+      - title: Running in a Docker container
+        body: "Telepresence can now be run inside a Docker container. This can be useful for avoiding side effects on a workstation's network, establishing multiple sessions with the traffic manager, or working with different clusters simultaneously."
+        image: ./run-tp-in-docker.png
+        docs: reference/inside-container
+        type: feature
+      - title: Configurable Log Levels
+        body: "Telepresence now supports configuring the log level for Root Daemon and User Daemon logs. This provides control over the nature and volume of information that Telepresence generates in daemon.log and connector.log."
+        image: ./telepresence-2.3.0-loglevels.png
+        docs: reference/config/#log-levels
+        type: feature
+  - version: 2.2.2
+    date: "2021-05-17"
+    notes:
+      - title: Legacy Telepresence subcommands
+        body: Telepresence is now able to translate common legacy Telepresence commands into native Telepresence commands. So if you want to get started quickly, you can just use the same legacy Telepresence commands you are used to with the new Telepresence binary.
+        image: ./telepresence-2.2.png
+        docs: install/migrate-from-legacy/
+        type: feature
diff --git a/docs/telepresence/pre-release/troubleshooting/index.md b/docs/telepresence/pre-release/troubleshooting/index.md
new file mode 100644
index 000000000..d3a72dc62
--- /dev/null
+++ b/docs/telepresence/pre-release/troubleshooting/index.md
@@ -0,0 +1,118 @@
+---
+description: "Troubleshooting issues related to Telepresence."
+---
+# Troubleshooting
+
+## Creating an intercept did not generate a preview URL
+
+Preview URLs can only be created if Telepresence is [logged in to
+Ambassador Cloud](../reference/client/login/). When not logged in, it
+will not even try to create a preview URL (additionally, by default it
+will intercept all traffic rather than just a subset of the traffic).
+Remove the intercept with `telepresence leave [deployment name]`, run
+`telepresence login` to log in to Ambassador Cloud, then recreate the
+intercept. See the [intercepts how-to doc](../howtos/intercepts) for
+more details.
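+
+A minimal sketch of that sequence (not from the original doc; the deployment
+name and port are hypothetical, and exact flags may vary by version):
+
+```command
+$ telepresence leave example-deployment
+$ telepresence login
+$ telepresence intercept example-deployment --port 8080
+```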
If an access request has been denied in the past the +user will not see the **Request** button, they will have to reach out +to the owner. + +Once access is granted, log out of Ambassador Cloud and log back in; +you should see the GitHub organization listed. + +The organization owner can go to the **GitHub menu** → **Your +organizations** → **[org name]** → **Settings** → **Third-party +access** to see if Ambassador Labs has access already or authorize a +request for access (only owners will see **Settings** on the +organization page). Clicking the pencil icon will show the +permissions that were granted. + +GitHub's documentation provides more detail about [managing access granted to third-party applications](https://docs.github.com/en/github/authenticating-to-github/connecting-with-third-party-applications) and [approving access to apps](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/approving-oauth-apps-for-your-organization). + +### Granting or requesting access on initial login + +When using GitHub as your identity provider, the first time you log in +to Ambassador Cloud GitHub will ask to authorize Ambassador Labs to +access your organizations and certain user data. + + + +Any listed organization with a green check has already granted access +to Ambassador Labs (you still need to authorize to allow Ambassador +Labs to read your user data and organization membership). + +Any organization with a red "X" requires access to be granted to +Ambassador Labs. Owners of the organization will see a **Grant** +button. Anyone who is not an owner will see a **Request** button. +This will send an email to the organization owner requesting approval +to access the organization. If an access request has been denied in +the past the user will not see the **Request** button, they will have +to reach out to the owner. + +Once approval is granted, you will have to log out of Ambassador Cloud +then back in to select the organization. + +### Volume mounts are not working on macOS + +It's necessary to have `sshfs` installed in order for volume mounts to work correctly during intercepts. Lately there's been some issues using `brew install sshfs` a macOS workstation because the required component `osxfuse` (now named `macfuse`) isn't open source and hence, no longer supported. As a workaround, you can now use `gromgit/fuse/sshfs-mac` instead. Follow these steps: + +1. Remove old sshfs, macfuse, osxfuse using `brew uninstall` +2. `brew install --cask macfuse` +3. `brew install gromgit/fuse/sshfs-mac` +4. `brew link --overwrite sshfs-mac` + +Now sshfs -V shows you the correct version, e.g.: +``` +$ sshfs -V +SSHFS version 2.10 +FUSE library version: 2.9.9 +fuse: no mount point +``` + +but one more thing must be done before it works OK: +5. Try a mount (or an intercept that performs a mount). It will fail because you need to give permission to “Benjamin Fleischer” to execute a kernel extension (a pop-up appears that takes you to the system preferences). +6. Approve the needed permission +7. Reboot your computer. + +### Daemon service did not start + +An attempt to do `telepresence connect` results in the error message `daemon service did not start: timeout while waiting for daemon to start` and +the logs show no helpful error. + +The likely cause of this is that the user lack permission to run `sudo --preserve-env`. Here is a workaround for this problem. 
Edit the +sudoers file with: + +```command +$ sudo visudo +``` + +and add the following line: + +``` + ALL=(ALL) NOPASSWD: SETENV: /usr/local/bin/telepresence +``` + +DO NOT fix this by making the telepresence binary a SUID root. It must only run as root when invoked with `--daemon-foreground`. \ No newline at end of file diff --git a/docs/telepresence/pre-release/versions.yml b/docs/telepresence/pre-release/versions.yml new file mode 100644 index 000000000..ef6d2867e --- /dev/null +++ b/docs/telepresence/pre-release/versions.yml @@ -0,0 +1,5 @@ +version: "2.4.10" +dlVersion: "latest" +docsVersion: "pre-release" +branch: release/v2 +productName: "Telepresence"